When discussions about artificial intelligence (AI) turn toward its ability to simulate human thought, they are usually referring to a particular type of AI: the neural network.
Neural networks are indeed modeled loosely on the human brain, with thousands of algorithmic nodes processing data independently but in a coordinated fashion.
But just because this similarity exists doesn’t mean AI has developed human-like – let alone god-like – thought capabilities. There are many differences between natural and artificial brains, both in structure and in scope, which means we are a long, long way from seeing AI come even close to the power and complexity of the human mind.
Fast and Powerful Artificial Intelligence
Artificial neural networks (ANNs) are useful in a wide range of applications. Since they can break down complex data patterns and subject them to rapid analysis, they are more adept at fast-moving situations like autonomous vehicle operation and real-time dialogue than other types of AI.
Most neural network architectures consist of various layers, nodes, and functional elements that help them to account for bias, data loss, and updates, says Akash Takyar, CEO of digital solutions developer LeewayHertz.
In most cases, these designs are inspired by the neurons, synapses, and hierarchical structures of the human brain. Input data flows through each layer of the ANN, where it is processed and transformed into some form of output – usually a decision, recommendation, or prediction.
In this way, it is still a computer processing bits and bytes, but the pathways it uses to convert raw data to actionable intelligence are more complex.
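The layered data flow described above can be sketched in a few lines. This is a minimal illustration only, not any production architecture: the layer sizes, random weights, and ReLU activation are all assumptions chosen for demonstration.

```python
import numpy as np

def relu(x):
    # Non-linear activation: lets the network capture non-linear patterns
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Pass the input through each layer in turn, transforming it step by step."""
    activation = x
    for w, b in zip(weights, biases):
        activation = relu(activation @ w + b)
    return activation

rng = np.random.default_rng(0)
# An illustrative network: 4 inputs -> two hidden layers of 8 units -> 1 output
sizes = [4, 8, 8, 1]
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

output = forward(rng.normal(size=4), weights, biases)
print(output.shape)  # the raw input has been transformed into a single output value
```

In a trained network the weights would be learned from data rather than drawn at random, and the final output would be interpreted as a decision, recommendation, or prediction.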
Brain Training
While this may look like a simulated human brain, recent studies suggest this might not be the case. A team at MIT recently examined more than 11,000 neural networks and found that they developed the grid-cell-like activity patterns seen in biological brains only when they were specifically trained to do so.
As research associate Rylan Schaeffer explained:
“What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices.”
Without those constraints, few networks developed the cell-like activity that can be used to predict actual brain function, which in real brains evolves naturally, without such preconditions.
This study suggests that data scientists should probably dial back claims that neural networks closely mimic the human brain. Given the right parameters, they can produce results that resemble natural neural pathways, but absent those parameters, they can still deliver results without forming these brain-like structures.
Ila Fiete, senior author of the paper and a member of MIT’s McGovern Institute for Brain Research, said:
“When you use deep learning models, they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions or even shedding light on what it is that the brain is optimizing.”
Learning Differences
Another key difference between neural networks and living brains is the way they learn. According to Maxim Bazhenov, Ph.D., professor of medicine at the University of California San Diego School of Medicine, ANNs overwrite old data as new data arrives, while the brain engages in continuous learning, incorporating new data to reach greater levels of understanding.
This leads to a phenomenon in neural networks called “catastrophic forgetting” that causes them to suddenly fail at performing previously known tasks or to change once-accurate predictions.
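A toy model makes the effect easy to see. In this sketch (a deliberately simple one-parameter linear model with plain gradient descent, chosen only to illustrate the mechanism), fitting a second task overwrites what was learned on the first:

```python
import numpy as np

def train(w, xs, ys, lr=0.1, epochs=50):
    # Plain stochastic gradient descent on a one-parameter model y_hat = w * x
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w -= lr * (w * x - y) * x
    return w

rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, 20)
task_a = 2.0 * xs    # task A: learn y = 2x
task_b = -1.0 * xs   # task B: learn y = -x

w = train(0.0, xs, task_a)
err_a_before = np.mean((w * xs - task_a) ** 2)  # near zero: task A learned

w = train(w, xs, task_b)                        # now train only on task B
err_a_after = np.mean((w * xs - task_a) ** 2)   # large: task A "forgotten"

print(err_a_before, err_a_after)
```

Because the weight is simply pulled toward whatever data arrived most recently, competence on the earlier task is destroyed rather than retained, which is the essence of catastrophic forgetting.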
Oddly, one of the solutions to this problem is to incorporate a simple biological function into the artificial model: sleep.
By alternating the training routine between spikes of new data and offline periods, researchers see a drop in catastrophic forgetting: the model replays old memories without needing the old training data, emulating the kind of “synaptic plasticity” that occurs when we sleep.
Small Minds
Despite these similarities, the fact remains that the human brain is far more powerful than even the most advanced neural network.
When researchers at Hebrew University in Jerusalem set out to determine how complex a neural net would have to be to reach the computational power of a single human neuron, they were shocked by the results. While some neurons are equivalent to “shallow” neural networks, meaning they don’t have highly layered architectures, those in the cerebral cortex required deep, seven-layered networks, with each layer holding up to 128 computational units.
And this is for just one neuron. There are more than 10 billion neurons in the average brain, each one requiring deep networks of five to eight layers. In this light, computer science has quite a way to go before it can create an artificial equivalent to the human brain.
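A back-of-the-envelope count conveys the scale. This sketch assumes fully connected layers, seven hidden layers of 128 units, and an illustrative 128-dimensional input (the study's actual architecture differs; the point is only orders of magnitude):

```python
# Parameters for a deep-network surrogate of ONE cortical neuron,
# using the figures quoted above: 7 layers, up to 128 units per layer.
layers = [128] * 7 + [1]        # seven hidden layers plus a single output
prev = 128                      # assumed input width (illustrative, not from the study)

params_per_neuron = 0
for width in layers:
    params_per_neuron += prev * width + width   # fully connected weights + biases
    prev = width

neurons_in_brain = 10_000_000_000   # "more than 10 billion" per the text above
total = params_per_neuron * neurons_in_brain

print(f"{params_per_neuron:,} parameters per neuron surrogate")
print(f"{total:,} parameters to model every neuron at that rate")
```

Even under these rough assumptions, the surrogate for a single neuron needs on the order of a hundred thousand parameters, and a whole-brain equivalent lands in the quadrillions.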
The Bottom Line
None of this means AI is a false promise, nor that its development and deployment don’t warrant caution. Even an artificial reptilian brain can do significant harm if left unchecked, just as a crocodile can.
What it does mean is that the artificial intelligence we have today, even the type that is modeled on the human brain, is still in its infancy and has nowhere near the intuitive, intellectual acumen of our minds.
Far from being a threat, AI stands to vastly enhance our own innate cognitive powers – and yes, just like a natural brain, those powers can still be used for good or ill.