Some time ago I read an article which suggested that humankind was fast reaching "peak knowledge". It explained that the depth of discovered knowledge meant the time required to specialise in a field had exceeded a human lifetime. In effect, we do not live long enough to learn all that it is possible to learn. It is a curiosity of nature that we appear to have built-in biological boundaries to how far we can seek knowledge beyond a predetermined point.
It is this very finite frontier which I believe makes AI (Artificial Intelligence) such an alluring concept: while we as a species may be limited by our current place in the evolutionary tree when it comes to speed and capacity, we can create something which is not. A form of advancement by proxy.
I was reminded of this theory when Google Brain, the AI research department of Google, recently published a paper called "Learning to Protect Communications with Adversarial Neural Cryptography". In their labs, they performed an experiment in which two neural networks were instructed to communicate using encryption which they negotiated with each other, while a third network was instructed to crack that encryption and thereby decrypt the communication.
What was discovered was remarkable. The two communicating networks very quickly devised a unique encryption algorithm for their specific communication channel, one which Google described as "not in common with human generated algorithms". In addition, the cracking network's ability to break the encryption worsened as the original two developed their method.
Put simply, what was observed was two machines developing a previously unknown encryption algorithm on the fly between themselves, which even a peer machine could not reverse.
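The paper itself trains three neural networks (nicknamed Alice, Bob and Eve) end-to-end against adversarial loss functions. As a rough illustration of the roles only, and not of the paper's actual method, the toy sketch below uses a shared-key XOR cipher to show why Bob, who holds the key, can recover the message while Eve, who sees only the ciphertext, cannot; the cipher choice and variable names are my own illustrative assumptions.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Alice and Bob share a secret key; Eve does not.
plaintext = b"meet at dawn"
key = secrets.token_bytes(len(plaintext))  # one-time-pad-style random key

# Alice encrypts the message for the channel.
ciphertext = xor_bytes(plaintext, key)

# Bob, holding the shared key, recovers the plaintext exactly.
bob_output = xor_bytes(ciphertext, key)

# Eve sees only the ciphertext and must guess at a key;
# with a random guess her output is indistinguishable from noise.
eve_guess = secrets.token_bytes(len(plaintext))
eve_output = xor_bytes(ciphertext, eve_guess)

print(bob_output)  # b'meet at dawn'
```

In the paper, this asymmetry is what the training objective encodes: Alice and Bob are rewarded when Bob's reconstruction is accurate and Eve's is not, and the encryption scheme itself is learned rather than fixed in advance.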
Decryption of encrypted traffic is a touchy subject today. Law enforcement agencies would like to have better decryption abilities, particularly when combating crime and terrorism. So much so that the NSA has been outspoken in its attempts to fund the construction of a quantum computer for such purposes (The Register, 2014). While quantum computing doesn't spell the end of existing encryption algorithms, it is expected to weaken them significantly.
For example, a quantum computer running Grover's algorithm would effectively halve the key length of widely used symmetric algorithms such as AES (Advanced Encryption Standard): a brute-force search over 2^n keys takes roughly 2^(n/2) quantum operations. Yet this assumes half of the puzzle is already solved, in that the encryption algorithm is known. Google Brain's experiment shows that the algorithm can be just as fluid as the key, compounding the complexity of breaking it and making it far more quantum-safe than symmetric algorithms are currently considered.
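To make the quantum speed-up concrete: Grover's algorithm searches an unsorted space of N candidates in roughly the square root of N steps, which halves a symmetric key's effective bit strength rather than merely halving the running time. A quick arithmetic sketch:

```python
# Effective brute-force cost of a symmetric key search,
# classical vs. quantum (Grover's algorithm).
# Grover searches N candidates in about sqrt(N) steps, so a
# k-bit key costs ~2**(k/2) quantum operations instead of ~2**k.
for key_bits in (128, 256):
    classical = 2 ** key_bits
    quantum = 2 ** (key_bits // 2)
    print(f"AES-{key_bits}: classical ~2^{key_bits}, "
          f"quantum ~2^{key_bits // 2}")
```

This is why AES-256 is still regarded as quantum-resistant in practice: even with Grover's speed-up, the attacker faces the equivalent of a 128-bit classical search, and that entire analysis presumes the algorithm itself is known.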
The glass-half-full perspective is a fantastic leap forward in secure communications, in which businesses and individuals could ensure security because each communication channel could be utilising a completely new and unknown algorithm and key. Breaking into each file, transmission and message would require so much computing power and time that it would be a practically impossible endeavour.
However, great power can be wielded for both good and bad. Think beyond future iPhones which the FBI would never be able to break into because each device would be completely unique. Imagine a world where AI-based encryption is used in tools such as ransomware: your hard disk encrypted by a previously unknown algorithm, impossible to crack even by another similar neural network.
It was once thought that existing encryption algorithms would become irrelevant with time, and that the dawn of quantum computing might even herald their extinction. But just as that article all those years back revealed, this is only applicable to the boundaries of human abilities. With a little dose of AI, that which we thought was obsolete just needed an upgrade.