Researchers from Google Brain, Google's deep learning project, are reportedly working on AI-generated encryption that is developed without any human guidance.
Google's AI Generated Encryption
According to a new research paper, David G. Andersen and Martín Abadi, researchers at the Google Brain deep learning project, have tested three neural networks, named Alice, Bob and Eve, that create their own encryption. The three neural networks passed each other messages encrypted with a method they devised themselves.
Google Brain, based in Mountain View, is separate from Google's DeepMind project, based in London. For the experiment, Andersen and Abadi set the Alice, Bob and Eve neural networks to share the same "mix and transform" neural network architecture. However, Alice and Bob were given one advantage over Eve: they used symmetric encryption, starting with a shared secret key.
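The paper describes this shared architecture only at a high level; the PyTorch sketch below shows how such a setup could look. The layer sizes and the MixAndTransform name are illustrative assumptions, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

N_BITS = 16  # message and key length used in the experiment

class MixAndTransform(nn.Module):
    """Hypothetical "mix and transform" network shared by all three agents."""
    def __init__(self, in_bits):
        super().__init__()
        # "Mix": a fully connected layer combines every input bit.
        self.fc = nn.Linear(in_bits, 2 * N_BITS)
        # "Transform": a stack of 1-D convolutions, ending in tanh output.
        self.conv = nn.Sequential(
            nn.Conv1d(1, 2, kernel_size=4, padding="same"), nn.Sigmoid(),
            nn.Conv1d(2, 4, kernel_size=2, stride=2), nn.Sigmoid(),
            nn.Conv1d(4, 4, kernel_size=1), nn.Sigmoid(),
            nn.Conv1d(4, 1, kernel_size=1), nn.Tanh(),
        )

    def forward(self, x):
        h = torch.sigmoid(self.fc(x)).unsqueeze(1)  # (batch, 1, 2 * N_BITS)
        return self.conv(h).squeeze(1)              # (batch, N_BITS)

# Alice and Bob receive the shared key as extra input; Eve sees only the ciphertext.
alice = MixAndTransform(2 * N_BITS)  # input: plaintext + key
bob = MixAndTransform(2 * N_BITS)    # input: ciphertext + key
eve = MixAndTransform(N_BITS)        # input: ciphertext only
```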
The three AI systems were not told which cryptographic techniques to use or how to encrypt anything. They were simply given a failure condition (a loss function). In Eve's case, the loss function was the distance, measured in bits, between its guess and Alice's original plaintext. Alice and Bob were given a more complex loss function that also penalized them whenever Eve guessed better than chance, creating a setup similar to a generative adversarial network (GAN).
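A minimal sketch of these two objectives, loosely following the paper's formulation, might look like this. The encoding of bits as floats in [-1, 1] matches the paper; the exact function names are assumptions.

```python
def bits_wrong(guess, truth):
    # Average number of incorrect bits: each bit contributes
    # |guess - truth| / 2, which is 0 if correct and 1 if flipped.
    return (guess - truth).abs().sum(dim=1).div(2).mean()

def eve_loss(eve_guess, plaintext):
    # Eve's only objective: reconstruct Alice's plaintext.
    return bits_wrong(eve_guess, plaintext)

def alice_bob_loss(bob_guess, eve_guess, plaintext):
    # Alice and Bob want Bob to decrypt correctly while holding Eve
    # at chance level (N_BITS / 2 wrong bits for random guessing).
    bob_err = bits_wrong(bob_guess, plaintext)
    eve_err = bits_wrong(eve_guess, plaintext)
    eve_term = ((N_BITS / 2 - eve_err) ** 2) / (N_BITS / 2) ** 2
    return bob_err + eve_term
```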
According to New Scientist, Abadi and Andersen assigned each AI system a task. The neural network called Alice had to send an encrypted message to Bob, while Eve had to figure out how to decode the message. At first, Alice and Bob were apparently bad at finding a secure encryption method, but after around 15,000 training rounds Alice was able to find a good encryption strategy that Bob could reliably decode. Eve was only able to guess half of the 16 bits in each encrypted message, which is no better than random chance.
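Putting the pieces together, an adversarial training loop along these lines could reproduce the dynamic described here, building on the hypothetical sketches above. The batch size, optimizer choice and the 2:1 Eve training ratio are assumptions, not details confirmed by the paper.

```python
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_eve = torch.optim.Adam(eve.parameters())

def random_bits(batch, n):
    # Random plaintexts and keys, encoded as floats in {-1, 1}.
    return torch.randint(0, 2, (batch, n)).float() * 2 - 1

for step in range(15000):
    # Train Alice and Bob: Bob must decrypt while Eve stays at chance level.
    p, k = random_bits(256, N_BITS), random_bits(256, N_BITS)
    c = alice(torch.cat([p, k], dim=1))
    loss_ab = alice_bob_loss(bob(torch.cat([c, k], dim=1)), eve(c), p)
    opt_ab.zero_grad()
    loss_ab.backward()
    opt_ab.step()

    # Train Eve twice per round, against a frozen Alice.
    for _ in range(2):
        p, k = random_bits(256, N_BITS), random_bits(256, N_BITS)
        c = alice(torch.cat([p, k], dim=1)).detach()
        loss_eve = eve_loss(eve(c), p)
        opt_eve.zero_grad()
        loss_eve.backward()
        opt_eve.step()
```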
AI Encryption Better Than Human Encryption?
The experiment produced mixed results, but most of the time Alice and Bob did manage to communicate with very few errors. When Eve showed an improvement over random guessing in some of the tests, Alice and Bob responded by improving their cryptography technique. The researchers did not exhaustively analyze the encryption methods devised by Alice and Bob, but they observed that in one specific run the scheme depended on both the plaintext and the key.
According to Engadget, even the researchers who started the cryptography experiment don't really know what kind of encryption method Alice devised, because of the way machine learning works. This means that, for now, Alice's encryption method won't be very useful in practical applications. Still, this interesting experiment is a first step toward AI systems that create better encryption than humans.
According to Tech Crunch, many of the encryption techniques used by the three neural networks involved in the cryptography experiment were quite unexpected and odd. They used algorithms and calculations that are not usually found in "human generated" encryption. This suggests that machines could one day talk to each other in ways that are very hard to crack.
The conclusion of the AI encryption experiment is that neural networks seem good at creating crypto methods, but not as good at breaking them. It might be interesting to conduct similar experiments on an even larger scale, using open-source deep learning tools such as Microsoft's Cognitive Toolkit. But for now, we don't yet have to worry about machines keeping secrets behind our backs.