If you follow the technology space, you have probably heard some version of the concerns Elon Musk, Bill Gates and others have raised about superintelligent AI. And although recent reports show Gates has cooled off a bit on the Cassandra stuff, there is still abundant concern, and reasoning behind it.
Questions abound: Will robots become smarter than humans? Will AI take over our jobs and our lives? Will technology start to control humans, and will problems with misused AI lead to violence and destruction?
For many experts, the answer is a resounding “no” based on the actual ways that we are developing today’s technologies. Most would agree that we need ethical, explainable frameworks to direct AI and ML technologies – but they don’t agree that robot overlords are a given outcome.
Let’s look at some of the debate around superintelligence and see why many technologists are confident that humans, not machines, will still hold the reins a couple of hundred years from now.
Humans Take the Lead
When you look at reporting around AI concerns, one name that comes up quite a lot is Grady Booch. Booch co-developed the Unified Modeling Language (UML) and worked on key technologies at IBM early in the millennium.
A TED talk by Booch illustrates some of his optimism about the types of AI that we used to think of as science fiction.
First, he argues, human trainers will instill their own ethics and norms into the functioning of AI systems.
“If I want to create an artificially intelligent legal assistant, I will teach it some corpus of law but at the same time I am fusing with it the sense of mercy and justice that is part of that law,” Booch says. “In scientific terms, this is what we call ground truth, and here’s the important point: In producing these machines, we are therefore teaching them a sense of our values. To that end, I trust an artificial intelligence the same, if not more, as a human who is well-trained.”
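The “ground truth” Booch refers to is, in machine-learning terms, the set of human-labeled examples a system is trained on: the model’s only notion of “correct” is whatever the labelers decided. A minimal sketch (a toy 1-nearest-neighbor “legal assistant” with entirely hypothetical cases and labels) makes the point:

```python
# Toy supervised-learning sketch: a 1-nearest-neighbor classifier whose
# only sense of right and wrong comes from human-provided labels.
# All feature vectors and labels below are hypothetical illustrations.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(train, query):
    """Return the label of the closest training example (1-NN)."""
    return min(train, key=lambda case: distance(case[0], query))[1]

# Ground truth: (features, human judgment). The people who labeled these
# cases baked their sense of mercy and justice into the data itself.
training_cases = [
    ((0.9, 0.1), "lenient"),  # e.g., first offense, minor harm
    ((0.2, 0.8), "strict"),   # e.g., repeat offense, major harm
]

print(predict(training_cases, (0.8, 0.2)))  # -> "lenient"
```

Change the human labels and the model’s “values” change with them, which is exactly Booch’s point: the machine inherits whatever sense of justice its trainers encode.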
Later in the talk, Booch brings up another very different argument for why we need not fear a takeover by technologies.
“(An existential threat to humanity from technology) would have to be with a superintelligence,” Booch says. “It would have to have dominion over all of our world. This is the stuff of Skynet from the movie ‘The Terminator,’ in which we had a superintelligence that commanded human will, that directed every device that was in every corner of the world. Practically speaking, it ain’t gonna happen. We are not building AIs that control the weather, that direct the tides, that command us capricious, chaotic humans. And furthermore, if such an artificial intelligence existed, it would have to compete with human economies, and thereby compete for resources with us … in the end (don’t tell Siri this) we can always unplug them.”
Our Brains, Our Bodies
Another major argument for the supremacy of human cognition over technology comes from the study of the human brain itself.
If you go to YouTube and listen to the late Marvin Minsky, a founding figure of AI research and an inspiration to Ray Kurzweil and other AI gurus of today, you can hear him talking about the human brain. Minsky stresses that real human intelligence is not one powerful supercomputer, but hundreds of different computers interlinked in complex ways. AI, he explains, can replicate some of those machines, but is nowhere close to replicating all of them.
To a lot of technology experts, AI will never be able to truly mimic the complexity of the human brain, and therefore will always be innately less powerful.
“AIs are usually not designed to survive but instead to solve very specific and person-centred problems like playing chess,” wrote Luc Claustres, Ph.D., late last year. “As such they can’t even adapt to slight changes in their environment without reprogramming them, while humans do manage imprecision or changing rules easily on themselves.”
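Claustres’s point about reprogramming shows up even in toy examples. The sketch below (a hypothetical stand-in for a game engine, not any real system) hard-codes a win-checker for 3×3 tic-tac-toe; the moment the board grows to 4×4, the program silently misses a win that any human would absorb instantly:

```python
# A rules-based "game AI" hard-coded for 3x3 tic-tac-toe. It cannot adapt
# to a 4x4 board without being reprogrammed -- a toy illustration of
# Claustres's brittleness point.

def has_won_3x3(board, player):
    """True if `player` completes a row, column, or diagonal -- but only
    on the fixed 3x3 grid this function was written for."""
    lines = [[(r, c) for c in range(3)] for r in range(3)]                 # rows
    lines += [[(r, c) for r in range(3)] for c in range(3)]                # columns
    lines += [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]  # diagonals
    return any(all(board[r][c] == player for r, c in line) for line in lines)

board_3x3 = [list("XXX"), list("O.O"), list("..O")]
print(has_won_3x3(board_3x3, "X"))  # True: the top row is complete

# On a 4x4 board the same code only ever inspects the top-left 3x3 corner,
# so it never sees X's winning fourth column. The rules changed slightly;
# the program didn't.
board_4x4 = [list("...X"), list("O..X"), list(".O.X"), list("...X")]
print(has_won_3x3(board_4x4, "X"))  # False, even though X has won
```

A human told “now we play on a 4×4 board” adjusts in a second; the program needs a developer.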
AI and Intuition
There’s also a corollary argument, what you might call the “crossing guard problem,” about the limits of what artificial intelligence can do. AI and ML are great at pulling insights from a diverse pool of data, but they’re not good at intuition, something humans are known for. In other words, if you hired a computer as a crossing guard, you might get some functionality, but you would also have some dangerous gaps, ones you wouldn’t trust your kids with.
As such, computer programs can’t understand our human quirks and idiosyncrasies in the ways we communicate and the ways that we live – so that’s another key limitation.
For more on why superintelligence concerns may be overblown, a Wired article by Kevin Kelly from last year goes over some assumptions that would all need to hold true for AI to take over in any practical way. These include the following:
- That artificial intelligence is already outpacing human cognition
- That intelligence can be expanded without limit
- That superintelligence can solve most of the problems that humans face
Reading through the article, you see each of these assumptions challenged and taken apart, showing, again, why human cognition is so special.
It’s not that technology won’t become powerful; it will. The question is how many different dimensions would have to be mastered to make AI more powerful than humans. Humans evolved over millions and millions of years; artificial intelligence has been around for mere decades, and although it has made enormous advances, humans still have the upper hand, and probably always will.
If you read back through some of these links and look at what people are saying, what we really should be more concerned about is ourselves. There is abundant potential for humans to misuse technology; in fact, many of us would say we already misuse plenty of the technologies we have. That may be a better place to direct one’s anxiety, and one’s action, when it comes to creating ethical AI.