Research aims to develop AI that can learn common sense from animals

AI researchers developing reinforcement learning agents could learn a lot from animals.

That’s according to a recent analysis of AI and non-human animal cognition by researchers from Google’s DeepMind, Imperial College London, and the University of Cambridge.

“This is especially true in a reinforcement learning context, where, thanks to progress in deep learning, it is now possible to bring the methods of comparative cognition directly to bear,” the researchers’ paper reads.

DeepMind introduced some of the first systems to combine deep learning and reinforcement learning, like the deep Q-network (DQN) algorithm, which learned to play numerous Atari games at superhuman levels.
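The core idea behind DQN can be sketched in a few dozen lines: a small neural network approximates a Q-value for each action and is trained against a bootstrapped target computed from a replay buffer of past transitions. The following is an illustrative PyTorch sketch with made-up dimensions, hyperparameters, and random data, not DeepMind's implementation.

```python
# Minimal DQN-style sketch (illustrative, not DeepMind's code).
# Assumes PyTorch; dimensions and hyperparameters are placeholders.
import random
from collections import deque

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps an observation vector to one Q-value per discrete action."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

obs_dim, n_actions, gamma = 4, 2, 0.99
q_net = QNet(obs_dim, n_actions)
target_net = QNet(obs_dim, n_actions)
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer

# Fill the buffer with random transitions so the sketch runs end to end.
for _ in range(1000):
    s = [random.random() for _ in range(obs_dim)]
    s2 = [random.random() for _ in range(obs_dim)]
    replay.append((s, random.randrange(n_actions), random.random(), s2, 0.0))

def dqn_update(batch_size=32):
    """One gradient step on the temporal-difference error."""
    batch = random.sample(replay, batch_size)
    obs, act, rew, next_obs, done = map(torch.tensor, zip(*batch))
    q = q_net(obs.float()).gather(1, act.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target: reward plus discounted best next Q-value,
        # computed with a slow-moving target network for stability.
        best_next = target_net(next_obs.float()).max(1).values
        target = rew.float() + gamma * best_next * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

dqn_update()
```

The replay buffer and the separate, slow-moving target network are the two stabilizing tricks that made this combination of deep learning and reinforcement learning work at scale.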

AlphaGo and AlphaZero also combined deep learning and reinforcement learning to beat a human Go champion and achieve other feats.

More recently, DeepMind produced AI that automatically generates reinforcement learning algorithms.

At a Stanford HAI conference earlier this month, DeepMind neuroscience research director Matthew Botvinick urged machine learning practitioners to engage in more interdisciplinary work with neuroscientists and psychologists.

Unlike other methods of training AI, deep reinforcement learning gives an agent an objective and a reward, an approach similar to training animals using food rewards.
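The analogy shows up directly in the agent-environment loop that underpins reinforcement learning: the agent acts, the environment returns a reward, and behavior that earns reward is reinforced. Below is a self-contained toy sketch of that loop; the environment and its reward scheme are invented purely for illustration.

```python
# Toy agent-environment loop: the agent is given an objective and is
# rewarded for reaching it (environment invented purely for illustration).
import random

class ToyEnv:
    """Agent starts at position 0 and is rewarded for reaching position 5."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.pos += action
        done = self.pos == 5
        reward = 1.0 if done else 0.0  # reward arrives only at the objective
        return self.pos, reward, done

env = ToyEnv()
obs = env.reset()
total_reward = 0.0
for t in range(500):  # cap the episode length
    action = random.choice([-1, 1])  # a trained agent would learn this choice
    obs, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
print("episode return:", total_reward)
```

Like a food pellet in an animal experiment, the reward here says nothing about how to reach the objective; the agent has to discover that through trial and error.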

Studies exploring animals’ cognitive abilities may also inspire AI researchers to look at problems differently, especially in the field of deep reinforcement learning.

As researchers draw parallels between animals in testing scenarios and reinforcement learning agents, the idea of testing AI systems’ cognitive abilities has evolved.

Published in the Cell Press journal Trends in Cognitive Sciences, the research team’s paper – “Artificial Intelligence and the Common Sense of Animals” – cites cognition experiments with birds and primates.

Training agents to acquire common sense is another hurdle, along with identifying the kinds of environments and challenges best suited to the task.

A prerequisite for training agents to use common sense will be 3D-simulated worlds with realistic physics.

The researchers argue that while common sense is not a uniquely human trait, it depends on some basic concepts, like understanding what an object is, how the object occupies space, and the relationship between cause and effect.

The challenge of endowing agents with such common sense principles can be cast as the problem of finding tasks and curricula that, given the right architecture, will result in trained agents that can pass suitably designed transfer tasks.

“Although contemporary deep RL agents can learn to solve multiple tasks very effectively, and some architectures show rudimentary forms of transfer, it is far from clear that any current RL architecture is capable of acquiring such an abstract concept. But suppose we had a candidate agent, how would we test whether it had acquired the concept of a container?”
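One way such a test could be operationalised is sketched below: measure an agent's success on container shapes it saw during training versus held-out shapes it has never encountered. Everything here, the shape names, the stubbed episode, and the success criterion, is a hypothetical stand-in for a real 3D evaluation, not the paper's protocol.

```python
# Hypothetical transfer-test harness for a "container" concept
# (shapes, agent, and success criterion are invented for illustration).
import random

TRAIN_SHAPES = ["cup", "box", "bowl"]   # containers seen during training
HELDOUT_SHAPES = ["jug", "basket"]      # never seen before the test

def run_episode(agent, shape):
    """Stub: would run one retrieve-from-container episode in a 3D world.
    Here we just simulate an outcome so the harness is runnable."""
    return random.random() < agent.get(shape, 0.1)

# Stand-in for a trained agent: per-shape success probabilities.
agent = {s: 0.9 for s in TRAIN_SHAPES}

def transfer_score(agent, shapes, episodes=100):
    """Fraction of successful episodes on the given container shapes."""
    wins = sum(run_episode(agent, random.choice(shapes)) for _ in range(episodes))
    return wins / episodes

print("train-shape success:", transfer_score(agent, TRAIN_SHAPES))
print("held-out success:  ", transfer_score(agent, HELDOUT_SHAPES))
# An agent that had truly acquired the container concept should score
# well above chance on held-out shapes, not just on the training shapes.
```

The gap between the two scores is the point of a transfer task: success only on familiar shapes suggests memorization, while success on novel ones suggests an abstract concept.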

Researchers believe training should rely on approaches that demand understanding after exposure to only a handful of examples (few-shot learning) or none at all (zero-shot learning).
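To make the few-shot idea concrete, here is a minimal sketch of a few-shot evaluation using a nearest-centroid classifier on synthetic data; the setup is an assumed illustration, not the researchers' proposed benchmark.

```python
# Minimal few-shot evaluation sketch: classify from only k labeled
# examples per class via nearest centroids (synthetic, illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sample_class(center, n):
    """Synthetic 2-D points clustered around a class center."""
    return center + rng.normal(scale=0.5, size=(n, 2))

k = 5  # "few-shot": only k labeled examples per class
centers = {"A": np.array([0.0, 0.0]), "B": np.array([3.0, 3.0])}
support = {label: sample_class(c, k) for label, c in centers.items()}
centroids = {label: pts.mean(axis=0) for label, pts in support.items()}

def classify(x):
    """Assign x to the class with the nearest support-set centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Evaluate on fresh queries never seen during the "few-shot" phase.
queries = [("A", q) for q in sample_class(centers["A"], 50)] + \
          [("B", q) for q in sample_class(centers["B"], 50)]
acc = sum(classify(q) == label for label, q in queries) / len(queries)
print(f"few-shot (k={k}) accuracy: {acc:.2f}")
```

Zero-shot learning pushes the same idea further: the system gets no labeled examples of the new class at all and must generalize from what it already knows.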

In other recent reinforcement learning developments, UC Berkeley professor Ion Stoica spoke at VentureBeat’s Transform conference about why supervised learning is far more commonly used than reinforcement learning.

Stanford University researchers also introduced LILAC to improve reinforcement learning in dynamic environments, and Georgia Tech researchers combined NLP and reinforcement learning to create AI that excels in text adventure games.