A team from Google Brain has shown that machines can learn how to protect their messages. Researchers Martín Abadi and David Andersen demonstrated that neural networks, or “neural nets”, computing systems loosely based on artificial neurons, can work out how to use a simple encryption technique.
In their experiment, computers were able to make their own form of encryption using machine learning, without being taught specific cryptographic algorithms. The encryption was very basic, especially compared to our current human-designed systems. Even so, it is still an interesting step for neural nets, which the authors state “are generally not meant to be great at cryptography”.
The Google Brain team started with three neural nets called Alice, Bob and Eve. Each system was trained to perfect its own role in the communication. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop.
To keep the message secret, Alice had to convert her original plaintext message into complete gobbledygook, so that anyone who intercepted it (like Eve) wouldn’t be able to understand it. The gobbledygook – or “ciphertext” – had to be decipherable by Bob, but nobody else. Alice and Bob both started with a pre-agreed set of numbers called a key, which Eve didn’t have access to, and used it to encrypt and decrypt the message.
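A rough sense of this setup can be given in code. The snippet below is a minimal sketch in PyTorch, not the authors’ actual architecture: the layer sizes, the 16-bit key length and the names `make_net`, `alice`, `bob` and `eve` are assumptions chosen for clarity. It only defines the three networks and their inputs; Alice and Bob receive the shared key, while Eve sees the ciphertext alone.

```python
import torch
import torch.nn as nn

N_BITS = 16    # message length reported in the experiment
KEY_BITS = 16  # key length assumed for this sketch

def make_net(in_dim, out_dim):
    # Small fully connected network; tanh keeps outputs in [-1, 1],
    # matching the common convention of encoding bits as -1/+1.
    return nn.Sequential(
        nn.Linear(in_dim, 64), nn.ReLU(),
        nn.Linear(64, out_dim), nn.Tanh(),
    )

alice = make_net(N_BITS + KEY_BITS, N_BITS)  # plaintext + key -> ciphertext
bob = make_net(N_BITS + KEY_BITS, N_BITS)    # ciphertext + key -> recovered plaintext
eve = make_net(N_BITS, N_BITS)               # ciphertext only -> guessed plaintext
```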
Initially, the neural nets were fairly poor at sending secret messages. But as they got more practice, Alice slowly developed her own encryption strategy, and Bob worked out how to decrypt it.
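Continuing the sketch above, the loop below shows one way that adversarial practice could be run. It is an illustration rather than the paper’s exact training method: the loss terms, batch size and optimiser settings are assumptions. Alice and Bob are updated together so that Bob’s reconstruction becomes accurate while Eve is pushed towards chance-level error; Eve is then updated separately to get better at reading the ciphertext.

```python
import torch.optim as optim

opt_ab = optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = optim.Adam(eve.parameters(), lr=1e-3)

def random_bits(batch, n):
    # Random messages and keys encoded as -1/+1 bit vectors.
    return torch.randint(0, 2, (batch, n)).float() * 2 - 1

for step in range(15000):  # the article reports 15,000 rounds
    plaintext = random_bits(256, N_BITS)
    key = random_bits(256, KEY_BITS)

    # Update Alice and Bob: Bob should recover the plaintext, while Eve's
    # per-bit error is pushed towards 1.0, i.e. chance for -1/+1 bits.
    cipher = alice(torch.cat([plaintext, key], dim=1))
    bob_err = (bob(torch.cat([cipher, key], dim=1)) - plaintext).abs().mean()
    eve_err = (eve(cipher) - plaintext).abs().mean()
    loss_ab = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad()
    loss_ab.backward()
    opt_ab.step()

    # Update Eve on fresh ciphertexts (detached so Alice is not changed here).
    plaintext = random_bits(256, N_BITS)
    key = random_bits(256, KEY_BITS)
    cipher = alice(torch.cat([plaintext, key], dim=1)).detach()
    loss_eve = (eve(cipher) - plaintext).abs().mean()
    opt_eve.zero_grad()
    loss_eve.backward()
    opt_eve.step()
```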
After the scenario had been played out 15,000 times, Bob was able to convert Alice’s ciphertext back into plaintext, while Eve could correctly guess just 8 of the 16 bits forming the message. Since each bit is either a 1 or a 0, that is the success rate you would expect from pure chance. The research is published on arXiv.
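That chance baseline follows directly: with 16 independent bits and a 50 per cent chance of guessing each one, an uninformed eavesdropper recovers 16 × 0.5 = 8 bits on average. The hypothetical check below, assuming the networks and helpers sketched above, measures Eve the same way.

```python
# Hypothetical evaluation, assuming the sketch above.
with torch.no_grad():
    plaintext = random_bits(1000, N_BITS)
    key = random_bits(1000, KEY_BITS)
    cipher = alice(torch.cat([plaintext, key], dim=1))
    eve_guess = eve(cipher).sign()  # snap Eve's real-valued output to -1/+1
    bits_right = (eve_guess == plaintext).float().sum(dim=1).mean().item()
    print(f"Eve recovers about {bits_right:.1f} of {N_BITS} bits per message")
```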
We don’t know exactly how the encryption method works, as machine learning provides a solution but not an easy way to understand how it is reached. This also makes it hard to give any security guarantees for an encryption method created in this way, so the practical applications of the technique could be limited.
“Computing with neural nets on this scale has only become possible in the last few years, so we really are at the beginning of what’s possible,” says Joe Sturonas of encryption company PKWARE in Milwaukee, Wisconsin.
Computers have a very long way to go if they’re to get anywhere near the sophistication of human-made encryption methods. They are, however, only just starting to try.