Question

Why are some people worried about artificial intelligence?

Answer

Change is not easy, and many people have difficulty adapting to it. This is particularly true of technological change. The noted Harvard professor Calestous Juma wrote a book on the subject called “Innovation and Its Enemies: Why People Resist New Technologies,” which outlines the tension between innovation and the social order, drawing on 600 years of history. That tension is especially sharp when the technology in question is artificial intelligence.

Not everyone is averse to these advances. Ray Kurzweil, author of “The Singularity Is Near” and “The Age of Spiritual Machines,” is nothing less than an evangelist for the benefits of genetics, nanotechnology, and robotics (the term Kurzweil uses for non-biological intelligence). But these are the same issues, and Kurzweil the same person, that caused great concern in 2000 for Bill Joy, then Chief Scientist at Sun Microsystems.

Joy wrote about his worries in a now-famous Wired magazine article called “Why the Future Doesn't Need Us.” He believed that the very future of the human race was at stake. Referring to the competition of evolutionary history, Joy wrote, “Biological species almost never survive encounters with superior competitors.” Scientific breakthroughs that result in machines surpassing humans come with significant risks, and visions of a dystopian future come to mind.

Suppose scientists are able to develop machines with artificial superintelligence (ASI), a level of intelligence higher than our own. Joy argued that one of two things might occur: either the machines will be allowed to make their own decisions, or humans will maintain control over them. What happens if we hand that power over to the machines?

Bill Joy is not the only one to voice concerns. Tech guru Elon Musk said, “With artificial intelligence we are summoning the demon.” He called it “our biggest existential threat.” Physicist Stephen Hawking told attendees at a technology conference that “we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.” And he told Wired magazine, “I fear that AI may replace humans altogether.”

Are these fears justified? Sci-fi movies like “Transcendence,” where Johnny Depp’s character bonds with artificial intelligence and wreaks havoc, are reminiscent of Kurzweil’s predictions of how humans could meld with machines. Imaginations run wild about all the ways that artificial intelligence could go wrong. What happens when the machines take control?

Two real-life examples illustrate why worry about artificial intelligence is understandable. In 2007, a robot cannon killed nine people and wounded fourteen. Some advanced military weapons automatically pick their targets but wait for a human to pull the trigger. Who was making the decisions here? In 2016, a 300-pound security robot knocked down and ran over a sixteen-month-old toddler. Who was in control in this instance: man or machine?

The reason that some people are worried about artificial intelligence is that the risks are real. The next question: How will we manage those risks?


David Scott Brown
Contributor

Throughout his career, David has worn many hats. He has been a writer, a network engineer, a world traveler, and a musician. As a networking professional, David has had a varied career. He started out troubleshooting frame relay and X.25 with Sprint, and soon moved to Global One, the international alliance with Deutsche Telekom and France Telecom. Since then, he has worked for many national and multinational network providers and equipment vendors, including Sprint, Deutsche Telekom, British Telecom, Equant (Global One), Telekom Austria, Vodafone, o2/Telefonica, ePlus, Nortel, Ericsson, Hutchison 3G, ZTE, and Huawei. As a writer, David's portfolio includes technical articles, short stories,…