Artificial intelligence (AI) is far older than most people imagine. Fantasies about artificial, human-like beings go back to antiquity, and electronic computers appeared shortly after World War II. The idea that AI is brand new is simply a popular misconception.
Here’s the real story.
The Dawn of AI
Humans have long fantasized about artificial beings. In the “Iliad,” Homer wrote of mechanical tripods that served the gods. Most famously, Mary Shelley’s “Frankenstein” tells of an artificial being that destroys its creator. Years later, Jules Verne and Isaac Asimov wrote about robots, as did L. Frank Baum, author of “The Wizard of Oz.”
Philosophers also theorized about artificially created life forms. Leibniz and Pascal built the first calculating machines, while the Abbe de Condillac imagined a humanoid with no memory or consciousness, into which sensations were introduced one by one. In 1920, Czech playwright Karel Capek coined the term “robot” in his play “R.U.R.,” in which factories cranked out artificial people that eventually extinguished the human race.
While these may not exactly qualify as “artificial intelligence” as we think of it today, they all demonstrate humanity’s long history of fantasizing about creating autonomous beings, whether as servants or as threats.
Old-Fashioned AI
Six months after WWII ended, the University of Pennsylvania unveiled ENIAC (Electronic Numerical Integrator and Computer), the world’s first general-purpose electronic computer. The so-called “giant brain” was huge: it filled a 50-foot-long basement room and weighed more than 60 grizzly bears. The machine calculated around 1,000 times faster than the electromechanical devices of the day and excited the press with its roughly 5,000 additions per second. (For more on ENIAC, check out The Women of ENIAC: Programming Pioneers.)
In those heady days of early computing, researchers wrote scores of papers on “intelligent machinery.” Of them all, John von Neumann, Alan Turing and Claude Shannon stood out for their philosophical and technical contributions, and each went on to play a key role in the development of AI. Shannon became known as the “father of information theory.”
Gradually, artificial intelligence grew to the point where it subdivided into different fields. And it was at AI’s first official event — the Dartmouth Conference in 1956 — that researcher John McCarthy labeled this new field “artificial intelligence.”
DARPA’s Involvement
It was the generosity of the US Defense Department’s Advanced Research Projects Agency (now DARPA) that really got the technology rolling. In fact, one of the companies I write for, SYSTRAN, dates its first machine translation system to that era.
In June 1963, DARPA awarded a $2.2 million grant for research on “machine-aided cognition.” In 1968, SYSTRAN produced its first Russian-English machine translation for the US Air Force. Other AI innovations of that period included a program that used a robotic hand to arrange colored, differently shaped blocks. Programs like STUDENT, SAINT and ANALOGY tackled algebra word problems, symbolic integration and geometric analogies. And at the New York World’s Fair in 1964, an English-Russian translation machine intended to promote world peace amazed visitors.
Famously, there was also SIR (Semantic Information Retrieval), a program that seemed to understand basic English sentences.
Here’s an example of how SIR worked:
[User] Every boy is a person.
[Machine] I UNDERSTAND.
[User] A finger is part of a hand.
[Machine] I UNDERSTAND.
[User] How many fingers does John have?
[Machine] THE ABOVE SENTENCE IS AMBIGUOUS. BUT I ASSUME (HAS) MEANS (HAS AS PARTS). I DON’T KNOW WHETHER FINGERS IS PART OF JOHN.

In other words, the computer was programmed to mimic the reasoning of the human brain. Scientists believed that humans perceive and reason through internal symbolic representations, that is, mental stand-ins for things that are not present at the time. If the brain is like a computer, they reasoned, then a computer could be taught to think like the brain.
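To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of symbolic, relational reasoning SIR performed: facts are stored as “is-a” and “part-of” relations, and questions are answered by chaining through them. The class and method names are invented for illustration; this is not Raphael’s original code.

```python
# A toy illustration of SIR-style symbolic reasoning (not Raphael's original code).
# Facts are stored as "is-a" and "part-of" relations; questions are answered
# by chaining through those relations.

class ToySIR:
    def __init__(self):
        self.is_a = {}      # e.g. {"boy": "person"}
        self.part_of = {}   # e.g. {"finger": "hand"}

    def tell_is_a(self, child, parent):
        self.is_a[child] = parent
        return "I UNDERSTAND."

    def tell_part_of(self, part, whole):
        self.part_of[part] = whole
        return "I UNDERSTAND."

    def _kinds(self, thing):
        # Everything a thing "is": itself plus all of its is-a ancestors.
        kinds = [thing]
        while thing in self.is_a:
            thing = self.is_a[thing]
            kinds.append(thing)
        return kinds

    def has_as_parts(self, owner, part):
        # Walk up the part-of chain (finger -> hand -> person) and check
        # whether any link in the chain is something the owner "is".
        kinds = self._kinds(owner)
        whole = self.part_of.get(part)
        while whole is not None:
            if whole in kinds:
                return f"YES, {part.upper()} IS PART OF {owner.upper()}."
            whole = self.part_of.get(whole)
        return f"I DON'T KNOW WHETHER {part.upper()} IS PART OF {owner.upper()}."


sir = ToySIR()
print(sir.tell_is_a("boy", "person"))       # I UNDERSTAND.
print(sir.tell_part_of("finger", "hand"))   # I UNDERSTAND.
print(sir.tell_is_a("john", "boy"))         # I UNDERSTAND.
print(sir.has_as_parts("john", "finger"))   # I DON'T KNOW WHETHER FINGER IS PART OF JOHN.
print(sir.tell_part_of("hand", "person"))   # I UNDERSTAND.
print(sir.has_as_parts("john", "finger"))   # YES, FINGER IS PART OF JOHN.
```

Notice that the toy program, like SIR, declines to answer until it is told that a hand is part of a person.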
This was the era of good old-fashioned AI.
AI’s First Winter
In 1965, Herbert Simon proclaimed that in just twenty years’ time, machines would be capable “of doing any work a man can do.” Marvin Minsky added that “within a generation, the problem of creating Artificial Intelligence will substantially be solved.” However, by the early 1970s the money had dried up, as Nixon abolished the Office of Science and Technology and slashed budgets across applied research.
Thankfully, big business pumped AI back to life with expert systems: programs that worked side by side with people and helped companies improve their work. Examples include Dendral, MYCIN and XCON, which used chains of if-then rules, often weighted with rough certainty factors, to help chemists, physicians and computer engineers rule out implausible possibilities and develop hypotheses from the data that remained.
Dipmeter Advisor, another rule-based inference engine, helped oil companies interpret well-log data during exploration, while Grain Marketing Advisor guided farmers on selling their crops. Venture capitalists, recruiters and the media flocked to the buzz.
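To give a feel for how these rule-based systems worked, here is a small, hypothetical sketch in Python of a MYCIN-flavored diagnostic engine. The findings, rules and certainty factors below are invented for illustration; they are not taken from MYCIN or any other real expert system.

```python
# A toy, MYCIN-flavored rule engine. The rules and certainty factors (CFs)
# below are invented for illustration, not taken from any real expert system.

# Each rule: if all of its premises are among the observed findings,
# conclude a hypothesis with a certainty factor between 0 and 1.
RULES = [
    ({"gram_negative", "rod_shaped", "anaerobic"}, ("bacteroides", 0.7)),
    ({"gram_positive", "grows_in_chains"},         ("streptococcus", 0.6)),
    ({"gram_negative", "rod_shaped"},              ("e_coli", 0.4)),
]

def diagnose(findings):
    """Forward-chain over the rules, keeping the strongest CF per hypothesis."""
    hypotheses = {}
    for premises, (conclusion, cf) in RULES:
        if premises <= findings:  # every premise was observed
            hypotheses[conclusion] = max(cf, hypotheses.get(conclusion, 0.0))
    # Hypotheses with no matching rule are implicitly ruled out;
    # the rest are ranked by certainty.
    return sorted(hypotheses.items(), key=lambda item: -item[1])

if __name__ == "__main__":
    observed = {"gram_negative", "rod_shaped", "anaerobic"}
    for hypothesis, cf in diagnose(observed):
        print(f"{hypothesis}: certainty {cf:.1f}")
    # Output:
    # bacteroides: certainty 0.7
    # e_coli: certainty 0.4
```

Real systems chained hundreds of such rules and combined their certainty factors in more sophisticated ways, but the basic shape, rules plus ranked hypotheses, is the same.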
The expert system boom made some of its practitioners wealthy, much as the PC boom had enriched upstart entrepreneurs like Bill Gates and Steve Jobs.
And then, for the second time, the money dried up.
The Rise of Modern AI
AI’s second winter was worse than its first. Two of AI’s leading expert system companies — Teknowledge and Intellicorp — lost millions of dollars in 1987. Other AI companies filed for bankruptcy. Daedalus, the official journal of the American Academy of Arts and Sciences, canceled its first issue on AI. At its nadir in 1996, the Association for the Advancement of Artificial Intelligence (AAAI) had only 4,000 members worldwide!
And then two Stanford University graduate students, Sergey Brin and Larry Page, began building a web search engine, releasing the first version of the project on the Stanford website in August 1996. Before long they were working out of a garage in Menlo Park, California, one so small that they left its door open for ventilation. They called the project BackRub.
We call it Google.
Over the next dozen years, Google launched numerous products and experimented widely with AI, making many of its projects freely available so that anyone can contribute to, or benefit from, its latest AI innovations.
By 2019, AI had reached what The New York Times called “a frenzy.” Built on sophisticated machine learning algorithms, AI has produced remarkable inventions, including wearable health devices, autonomous vehicles, smarter computers, home automation and helpful customer support technologies.
In fact, AI grew to the point where researchers began to partition it into specialties such as informatics, knowledge-based systems, cognitive systems and computational intelligence. Problems tackled in these areas include data mining, logistics, industrial robotics, banking software, medical diagnosis and speech recognition.
More recently, AI researchers differentiate between “weak AI” and “strong AI.” The first is AI as we know it — a program like Siri that operates on a narrowly defined problem. The second is the not-yet-existent category that tends to scare some people. It is where the machine can perform general intelligent actions — and, to an extreme, experience consciousness. Just like the fantasies of the past. (There’s no need to worry about strong AI taking over — at least not yet! Learn more in Why Superintelligent AIs Won’t Destroy Humans Anytime Soon.)
Summary
Far from modern, AI is actually older than the invention of pizza. It’s the stuff of bedtime stories. More intriguingly, AI pioneer Allen Newell observed that AI’s history mirrors the mores of its times. There was the age of Cartesian Mechanism, when people saw the world as machines; the age of Behaviorism; the age of Engineering; and the age of Reason vs. Emotion and Feeling. In the 1980s, you had Toys (i.e., Greed). Then came Performance, Neuroscience and Problem-solving, among other themes. In each age, AI reflected the dominant ideologies. So AI, one could say, has always been around, but it has shifted as it developed.
It’s tantalizing to ponder what our AI of the future will be.