Google DeepMind may have already conquered the world of Go, but its next accomplishment may be walking around in a game of Doom or GoldenEye 007. The artificial intelligence system successfully navigated a 3D maze without cheating: it didn't have access to the digital world's internal code.
Instead, it navigated around walls and into rooms by "sight," as New Scientist reports. In the maze, DeepMind is rewarded for finding apples and portals as it tries to rack up a high score in just one minute. It moves through the labyrinth using a reward-based method, asynchronous reinforcement learning, and a neural network that recognizes patterns in the digital space.
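To make that reward setup concrete, here is a minimal sketch of a timed maze episode that scores an agent for collecting apples and portals. The reward values, step rate, and toy environment are assumptions for illustration, not DeepMind's actual code.

```python
import random

# Hypothetical reward values and episode length -- illustrative only,
# not the actual scores used in DeepMind's maze.
APPLE_REWARD = 1
PORTAL_REWARD = 10
STEPS = 900  # roughly one minute of play at an assumed 15 steps per second

ACTIONS = ["forward", "back", "turn_left", "turn_right"]

def toy_maze_step(action):
    """Stand-in for the game engine: occasionally the agent finds an apple or a portal."""
    roll = random.random()
    if roll < 0.05:
        return "apple"
    if roll < 0.005:
        return "portal"
    return None

def run_episode(policy):
    """Accumulate reward over one fixed-length episode, as the maze task does."""
    total_reward = 0
    for _ in range(STEPS):
        event = toy_maze_step(policy())
        if event == "apple":
            total_reward += APPLE_REWARD
        elif event == "portal":
            total_reward += PORTAL_REWARD
    return total_reward

if __name__ == "__main__":
    # A random policy stands in for the learned neural network.
    print(run_episode(lambda: random.choice(ACTIONS)))
```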
Yep, DeepMind actually learns from its past experiences. However, the asynchronous method doesn't rely on replaying stored records of previous run-throughs, a process that takes a ton of computing power. Instead, asynchronous reinforcement learning runs several copies of the agent at once, letting the system explore multiple outcomes in parallel and settle on the most efficient path forward.
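Here is a minimal sketch of that asynchronous idea, under assumed simplifications: several worker threads explore their own simulated runs in parallel and push small updates into one shared parameter (a single number standing in for a full neural network), with no replay buffer of past experiences. This is not DeepMind's implementation, just the shape of the technique.

```python
import random
import threading

# Shared parameters updated by every worker; a lock keeps the updates safe.
shared_value = 0.0
lock = threading.Lock()

def worker(worker_id, episodes=100):
    """Each worker explores its own runs and nudges the shared parameters toward what it sees."""
    global shared_value
    for _ in range(episodes):
        # Simulated outcome of one run-through: a noisy reward signal.
        reward = random.gauss(1.0, 0.5)
        # Apply a small update immediately -- no stored history of previous runs is examined.
        with lock:
            shared_value += 0.01 * (reward - shared_value)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"shared estimate after asynchronous updates: {shared_value:.3f}")
```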
Google DeepMind used a similar reinforcement learning method in 2015 to play a handful of classic Atari games. The team has since improved and streamlined that program, allowing the AI to advance to a Doom-like maze.