Vinge, a professor of mathematics and computer science as well as a respected science fiction writer, coined the term in a 1993 lecture given at the VISION-21 Symposium. His key conclusion was that there will be a merger of human and machine intelligences into a new entity. This, according to Vinge, is The Singularity and because machines will be so much more intelligent than we are, there's no way for us lowly humans to predict what comes after it.
From Robots to Machine Intelligence

While Vinge brought together the concept of a merger of human and machine intelligence, the idea of autonomous, intelligent artificial beings has been with us for centuries: Leonardo da Vinci sketched plans for a mechanical knight around 1495. Czech playwright Karel Capek gave us the word "robot" in his 1920 play R.U.R. ("Rossum's Universal Robots"), and the word has been in use ever since.
The advent of the fictional robot led to both a plethora of fiction about such creatures and the beginnings of scientific and mechanical work to create them. Almost immediately, the questions began in the general public. Could these machines be given real intelligence? Could this intelligence surpass human intelligence? And, perhaps most of all, could these intelligent robots become a real threat to human beings? (Read about more futuristic ideas in "Astounding Sci-Fi Ideas That Came True (and Some That Didn't).")
The prolific science and science fiction author Isaac Asimov coined the term "robotics" for the scientific study of robots. In his short stories and novels, he also created and used the "Three Laws of Robotics," which have continued to guide both fiction writers and robotics researchers from their 1942 introduction in the short story "Runaround" right to the present:
- A robot may not harm a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second law.
Building a Better Human

While these writers and scientists were busying themselves with robot development, others were looking at the other half of the equation by seeking ways to improve the human body. Computer scientist, mathematician, philosopher and science fiction author Rudy Rucker coined the term "wetware" in his 1988 novel of the same name. While the human mind contains the "software" that governs our actions, the material that surrounds it - skin, blood, bone and organs - provides a home for the brain. That's wetware. Although Rucker's novels do not deal with humans benefiting from new devices that correct or enhance their wetware, such technologies - artificial limbs, artificial hearts, pacemakers and hearing implants - were all becoming commonplace during that time.
In fact, University of Edinburgh philosophy professor Andy Clark, in his 2003 book "Natural-Born Cyborgs: Minds, Technologies and the Future of Human Intelligence," argues that humans are the only species with the capacity to fully incorporate technology and tools into their existence. We make our cell phones, our tablets, our Google searches and so on part of us, part of our mental lives, and our minds expand to use these tools. Clark points out how the measurement of time changed the landscape of human experience and how today's tools do the same. He also surveys the other technologies we have taken in and adapted to, and sees the same future for neural implants and devices that improve cognition.
The person who ties all of these threads together is Ray Kurzweil, inventor, futurist, writer, artificial intelligence guru and, most recently, Google’s director of engineering. If Vinge is The Singularity’s father, Kurzweil is its superhero. His books, particularly "The Age of Spiritual Machines: When Computers Exceed Human Intelligence" and the massive "The Singularity Is Near: When Humans Transcend Biology," as well as his television, TED and other media appearances, have brought the concept of The Singularity to the attention of the general public and the technological community.
While "The Age of Spiritual Machines" was published back in 1999, it is still worth a read, if only for the great timeline that appears in the back of the book. In the timeline, Kurzweil traces actual scientific and technological developments from the Big Bang up to 1999 and then extends it to 2030, filling the later years with his projections.
"The Age of Spiritual Machines" proved to be only a warm-up for "The Singularity Is Near," which was published in 2005 and laid out all the factors that Kurzweil sees coming into play to bring The Singularity into actuality in 2045. Kurzweil arrives at that date by first explaining that the continuing impact of Moore's Law will lead to a personal computer with the processing capability of a human being by 2020. Then, every further doubling will bring us closer to reverse engineering the functions of the human brain, which Kurzweil predicts will happen by 2025.
Following this scenario, we could have "the requisite hardware and software to emulate human intelligence" and will thus "have effective software models of human intelligence by the mid 2020s." This will allow us to marry together the incredible ability of the human brain to recognize patterns with the computer’s ability to "remember billions of facts precisely and recall them instantly." He even sees millions of computers tied together through the Internet forming one "super brain" with the ability to then disengage to perform separate functions - all by 2045.
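Kurzweil's date arithmetic rests on simple repeated doubling. A minimal sketch of how such a Moore's-Law-style extrapolation compounds (the baseline figures here are hypothetical placeholders, not Kurzweil's actual numbers):

```python
def projected_capability(base_ops_per_sec, base_year, target_year,
                         doubling_period_years=2.0):
    """Extrapolate computing capability, assuming it doubles every
    doubling_period_years starting from a known baseline."""
    doublings = (target_year - base_year) / doubling_period_years
    return base_ops_per_sec * 2 ** doublings

# Hypothetical baseline: 10^13 operations/sec in 2005, doubling every 2 years.
# Over 20 years that is 10 doublings, i.e. a 1,024-fold increase.
print(projected_capability(1e13, 2005, 2025))  # about 1.024e16
```

The point critics like Allen raise, discussed below, is precisely that this kind of curve-fitting assumes the doubling period stays constant indefinitely.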
Rather heady stuff! To move this development forward, Kurzweil and others established Singularity University to provide graduate, postgraduate and corporate executive courses and training. The first courses began in 2009.
The Post-Human Brain Pundits

While Kurzweil certainly presents a compelling case for The Singularity, many other reputable pundits strongly disagree with his conclusions. In October 2011, in an MIT Technology Review piece called "The Singularity Isn't Near," Microsoft co-founder Paul Allen, writing with Mark Greaves, took issue with many of Kurzweil's points, saying:
Kurzweil’s reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these "laws" will work until they don’t. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer’s hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.
Kurzweil responded to Allen’s piece with "Don’t Underestimate The Singularity" the following week.
In the same publication, in a February 2013 article by Antonio Regalado titled "The Brain Is Not Computable," Miguel Nicolelis, a top neuroscientist at Duke University, is quoted as saying that computers will never replicate the human brain and that the technological Singularity is "a bunch of hot air ... The brain is not computable and no engineering can reproduce it."
While only time will tell how accurate (or inaccurate) Kurzweil’s view of the immediate future is, I think Singularity supporters are right about one thing. They say that if the Singularity occurs, the future beyond that point won't be predictable. When it comes to what we can expect from future technology, that, at least, seems like a likely scenario.