One of the chief contributors to misunderstanding and conflict is improperly defined terminology. Words matter; when the wrong word is used to describe an object or event, that misconception breeds confusion and inflated or deflated expectations.
A case in point these days is the term “Artificial Intelligence.” Its origin has been traced to the mid-1950s, when computer scientist John McCarthy, then at Dartmouth College and later of Stanford, coined the term and organized the first academic conference on the subject.
Since then, it has become a marketer’s dream because it invokes an entirely new level of computing technology above simple processing and yet is still vague enough to avoid a firm definition.
Is AI Simply an Illusion of Intelligence?
Now that AI is finally making its way from the test bed into everyday life and seems capable of comprehending the world and expressing itself, the question is more relevant than ever: is AI actually intelligent, or are we simply engaging in a digital form of anthropomorphism?
According to Bradley Efron and Trevor Hastie, a pair of statisticians also from Stanford, there is a big difference between algorithms and inference: algorithms are what statisticians do, while inference is the reason they do it.
When you get right down to it, everything an intelligent algorithm does – from regression analysis to neural networks – is based on mathematical formulas that use one set of variables to predict the behavior of another.
If this is intelligence, they argue, it is intelligence without understanding, which is a contradiction in terms. If a mind cannot understand what it is doing and why, it simply does not meet the intelligence threshold.
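The Efron–Hastie point can be made concrete with a minimal sketch (the data and function here are hypothetical, chosen purely for illustration): even a “learning” algorithm is, underneath, a formula fitted to data, producing predictions without any comprehension of what the numbers mean.

```python
def fit_line(xs, ys):
    """Ordinary least squares: find a, b minimizing error for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution: slope = cov(x, y) / var(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Train" on toy data that happens to follow y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
a, b = fit_line(xs, ys)

# "Predict" an unseen point: pure arithmetic, no understanding involved.
prediction = a * 10.0 + b
```

A neural network differs from this only in scale and in the shape of the formula being fitted; the fit-then-predict loop is the same.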
AI: An Intentional Misnomer?
This is part of why AI has provoked such fear among the public, and why it could prove such a letdown when people finally begin to engage with it meaningfully.
Bloomberg Opinion columnist Parmy Olson argues that the term “AI” is a mirage, along with “metaverse” and “Web3” – designed more to generate revenue than to promote a better understanding of the technology. And terms like “neural networking” and “deep learning” aren’t helping either.
The problem is more than just academic, Olson says. It allows companies to shift the blame, and perhaps the liability, for bias and other flaws in their models away from themselves and onto these supposedly independent-thinking creations.
At the same time, it fuels both the fear of AI annihilation and the expectation of AI utopia – neither of which is likely to materialize.
AI Can Offer Misplaced Trust
To be sure, what AI does is impressive. It can quickly ingest vast amounts of data, far more than even the most intelligent human mind, and then produce results in plain, accurate, and insightful language. But this is just mimicry, says author Peter Cawdron, and mimicry is not intelligence. Parrots can speak too; that doesn’t make them intelligent.
The worst thing that could happen with AI is if humans started surrendering their own judgment to these algorithms. This tendency is already beginning to surface at home and in the office as people simply accept the results of any AI-driven process without acknowledging that it can get things wrong just as easily as a human can, or more so.
Are We Intelligent?
But if AI is not really intelligent, can we so easily declare that humans are? Sure, we can speak and opine and pontificate about all kinds of things, but do we really understand them?
To quote the Scarecrow in The Wizard of Oz: “Some people without brains do an awful lot of talking, don’t you think?” Is it possible that we merely have an arbitrary definition of intelligence because this is how our minds work?
It gets even more complicated when we consider that psychologists say there are between eight and 12 kinds of intelligence. Which one is the real thing? Or are they merely different aspects of overall intelligence?
These questions are probably best left to artists and philosophers. For the moment, “artificial intelligence” is the term we’re stuck with, for good or for ill.
If it turns out that AI is just another kind of intelligence, one geared more toward machines than life forms, then human development, not computing technology, is truly at an inflection point: for the first time, we will be sharing our planet with another intelligence.
And that makes even some very smart people rather uncomfortable.