Ever wonder how your brain creates your thoughts, based on everything that’s happening around you (and within you), and where those thoughts are actually located in the brain? Computational neuroscientist Hava Siegelmann has, and she developed a geometry-based method for finding out.
Her team did a massive data analysis of 20 years of functional magnetic resonance imaging (fMRI) data from tens of thousands of brain imaging experiments. The goal was to understand how abstract thought arises from brain structure, which could lead to better ways to identify and treat brain disease and even to new deep-learning artificial intelligence (AI) systems.
Basically, fMRI detects changes in neural blood flow that correspond to specific brain activities (such as imagining what an object looks like, or talking). More blood flow means higher levels of neural activity in that brain region. While fMRI-based research has done an impressive job of linking specific brain areas to specific activities, surprisingly, “no one had ever tied together the tens of thousands of experiments performed over decades to show how the physical brain could give rise to abstract thought,” Siegelmann notes.
For this study, the researchers took a data-science approach. First, they defined a physiological directed network (a form of graph, with nodes and links) of the whole brain, starting at sensory input areas and labeling each brain area with its distance (or “depth”) from those inputs. For example, the visual cortex lies far from the eyes, while the auditory cortex sits relatively close to the ears (although routing via the thalamus makes this more complex).
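To make that depth labeling concrete, here is a minimal Python sketch of the idea: a breadth-first search that assigns each region the length of the shortest directed path from any sensory input. The region names and wiring below are invented toy examples, not the study’s actual network or method.

```python
# Minimal sketch of depth labeling on a directed brain-region graph.
# The regions and edges are illustrative only.
from collections import deque

def label_depths(edges, sensory_inputs):
    """Each region's depth = shortest directed path length from any sensory input."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)

    depth = {region: 0 for region in sensory_inputs}
    queue = deque(sensory_inputs)
    while queue:
        region = queue.popleft()
        for nxt in graph.get(region, []):
            if nxt not in depth:  # first visit is the shortest path
                depth[nxt] = depth[region] + 1
                queue.append(nxt)
    return depth

# Toy wiring: retina -> thalamus (LGN) -> V1 -> higher visual areas
edges = [("retina", "LGN"), ("LGN", "V1"), ("V1", "V2"), ("V2", "IT")]
print(label_depths(edges, ["retina"]))
# {'retina': 0, 'LGN': 1, 'V1': 2, 'V2': 3, 'IT': 4}
```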
OK, so what does that mean in terms of thinking? To find out, they processed a massive repository of fMRI data from about 17,000 experiments, representing about one fourth of the fMRI literature.
“The idea was to project the active regions for a cognitive behavior onto the network depth and describe that cognitive behavior in terms of its depth distribution,” says Siegelmann.
“We momentarily thought our research failed when we saw that each cognitive behavior showed activity through many network depths. Then we realized that cognition is far richer; it wasn’t the simple hierarchy that everyone was looking for. So, we developed our geometrical ‘slope’ algorithm.”
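As a rough illustration of the projection Siegelmann describes, the sketch below turns a set of active regions into a depth distribution: a normalized histogram of how a behavior’s activity spreads across network depths. The region names, depths, and activations are invented for demonstration.

```python
# Illustrative sketch: describe a cognitive behavior by its depth distribution.
from collections import Counter

def depth_distribution(active_regions, depth):
    """Fraction of a behavior's active regions found at each network depth."""
    counts = Counter(depth[r] for r in active_regions if r in depth)
    total = sum(counts.values())
    return {d: counts[d] / total for d in sorted(counts)}

depth = {"V1": 2, "V2": 3, "IT": 4, "PFC": 7}   # made-up depth labels
active = ["V1", "V2", "IT", "PFC"]              # regions active during some behavior
print(depth_distribution(active, depth))
# {2: 0.25, 3: 0.25, 4: 0.25, 7: 0.25}  -- activity spans many depths
```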
The researchers summed all neural activity for a given behavior over all related fMRI experiments, then analyzed it using the slope algorithm. “With a slope identifier, behaviors could now be ordered by their relative depth activity, with no human intervention or bias,” she adds. They ranked the slopes for all cognitive behaviors in the fMRI databases from negative to positive and found that the behaviors ordered from most tangible to most abstract. An independent test with an additional 500 study participants supported the result.
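To make the slope idea concrete, here is a hedged Python sketch: fit a line to summed activity as a function of depth, then rank behaviors by that slope. The numbers are invented, and the paper’s actual algorithm is more involved; this only shows why a negative slope suggests shallow (tangible) activity and a positive slope suggests deep (abstract) activity.

```python
# Sketch of slope-based ranking; data are illustrative, not from the study.
import numpy as np

def activity_slope(depths, activity):
    """Least-squares slope of summed activity vs. network depth."""
    slope, _intercept = np.polyfit(depths, activity, deg=1)
    return slope

depths = np.arange(1, 9)
behaviors = {
    "finger tapping": np.array([9, 8, 7, 5, 4, 3, 2, 1]),  # shallow-weighted
    "naming":         np.array([1, 2, 3, 4, 5, 7, 8, 9]),  # deep-weighted
}
ranked = sorted(behaviors, key=lambda b: activity_slope(depths, behaviors[b]))
print(ranked)  # negative slope (tangible) first, positive (abstract) last
```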
She and colleagues found that cognitive function and abstract thought exist as a combination of many cortical sources, ranging from regions close to the sensory cortices to regions far deeper along the brain connectome, or connection wiring diagram.
The authors say their work demonstrates that all cognitive behaviors exist on a hierarchy, starting with the most tangible (such as finger tapping or pain), moving through consciousness, and extending to the most abstract thoughts and activities, such as naming. This hierarchy of abstraction, they add, maps onto the connectome structure of the whole human brain: the pattern of connections between its different regions.
Creating a massively recurrent deep learning network
Siegelmann says this work will have great impact in computer science, especially in deep learning. “Deep learning is a computational system employing a multi-layered neural net, and is at the forefront of artificial intelligence (AI) learning algorithms,” she explains. “It bears similarity to the human brain in that higher layers are agglomerations of previous layers, and so provides more information in a single neuron.
“But the brain’s processing dynamic is far richer and less constrained because it has recurrent interconnection, sometimes called feedback loops. In current human-made deep learning networks that lack recurrent interconnections, a particular input cannot be related to other recent inputs, so they can’t be used for time-series prediction, control operations, or memory.”
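To illustrate the distinction she draws, here is a minimal NumPy sketch of a recurrent update: the hidden state mixes the current input with the previous state, so the network’s output reflects recent history rather than the last input alone. The shapes and weights are arbitrary, and this is a generic Elman-style cell, not the lab’s architecture.

```python
# Minimal sketch of what recurrence adds over a feedforward layer.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))   # input -> hidden
W_rec = rng.normal(size=(4, 4))  # hidden -> hidden (the feedback loop)

def step(h, x):
    """One recurrent update: the new state depends on the input AND the prior state."""
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(4)
for x in [np.ones(3), np.zeros(3), np.ones(3)]:  # a short time series
    h = step(h, x)
print(h)  # final state reflects the whole input history, not just the last x
```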
Her lab is now creating a “massively recurrent deep learning network,” she says, aiming for a more brain-like and more capable learning AI. Alongside it, the lab is building a new geometric data-science tool, which may find widespread use in other fields where massive data is difficult to view coherently because it overlaps.