New AI tech is constantly being introduced. And just as constantly, warnings and frantic concerns are raised. Sometimes the concerns may seem overwrought, or an offshoot of conspiracy theories. Other times, they may be warranted.
It reminds us of an old joke on Tumblr:
- Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is Bluetooth-enabled and I can give it voice commands via Alexa! I love the future!
- Programmers / Engineers: The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.
- Security technicians: *takes a deep swig of whiskey* I wish I had been born in the neolithic.
As technologies are accepted and become commonplace, the concerns dissipate and are replaced by new things that at first seem weird or spooky and torn straight from the pages of a sci-fi novel.
Think of how self-driving vehicles keep getting pushed as the way of the future, while some close to the industry suggest that the technology is not even remotely ready for prime time.
That gap between hype and readiness is the main point of concern for people who understand the obvious safety risks involved!
Here are six of the AI technologies that seem most ominous to people surveyed by researchers over the past few months – self-driving vehicles included.
1) The Smart Pillow
New types of bedding and high-tech pillows are offering to do more for us at our most vulnerable – while we’re asleep. Nothing scary about that, right?
In some ways, it’s intuitive to use new AI technology to improve on things like CPAP or BiPAP machines. A lot of people suffer from sleep apnea or other sleep conditions, so why not apply AI to the medical science of treating them?
Well, to some people, including quite a few fans of dark humor, the idea of machines watching you sleep is just outright creepy. Take smart pillows, for example: they gently nudge your head in different directions and can be connected to your smartphone.
As long as their gentle ministrations are doing you good, everything is cool. But what if the pillow starts doing things that you wouldn’t sign off on if you were awake?
Apply that concern to any technology that we use to monitor or assist us in our sleep!
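If you want a picture of what “signing off while awake” could look like in software, here’s a hypothetical sketch of a consent-gated controller. The SmartPillow class, action names, and app integration are all invented for illustration, not any real product’s API:

```python
# Hypothetical sketch: a smart-pillow controller that only performs
# adjustments its owner explicitly approved while awake. Every name here
# (SmartPillow, APPROVED_ACTIONS, etc.) is illustrative, not a real API.

APPROVED_ACTIONS = {"tilt_left", "tilt_right", "raise_head"}  # set by the user in the app

class SmartPillow:
    def __init__(self, approved_actions):
        self.approved_actions = set(approved_actions)
        self.audit_log = []  # every decision is recorded for morning review

    def request_action(self, action, reason):
        """Perform an adjustment only if the sleeping owner pre-approved it."""
        if action not in self.approved_actions:
            self.audit_log.append(("REFUSED", action, reason))
            return False  # the pillow never does anything un-consented
        self.audit_log.append(("DONE", action, reason))
        self._actuate(action)
        return True

    def _actuate(self, action):
        print(f"Pillow performing: {action}")

pillow = SmartPillow(APPROVED_ACTIONS)
pillow.request_action("tilt_left", "snoring detected")     # allowed
pillow.request_action("inflate_fully", "unknown trigger")  # refused and logged
```

The whitelist plus audit log is the point: whatever the device’s AI decides overnight, it can’t exceed what you agreed to while conscious, and you can check its work in the morning.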
2) AI and Simulated Pain
Much has been made of the application of AI to pain management, but what about the opposite – using AI to simulate pain through a person’s central nervous system?
If you’re wondering where this applies to commerce and industry, look to the gaming market. We’re getting closer to true virtual reality gaming, where people run around in immersive virtual environments. So some companies are starting to pioneer things like direct heat application and impact feedback that cause a physical response when the player gets shot or stabbed during gameplay, events that happen constantly in modern shoot-’em-up games.
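To picture how this works under the hood, here’s a hypothetical sketch of an event-to-haptics mapping with a hard safety cap. The HapticVest class, event names, and intensity values are invented, not any vendor’s real SDK:

```python
# Illustrative only: mapping game events to haptic feedback, with a
# hard safety ceiling the game logic cannot override.

MAX_INTENSITY = 0.3  # hard cap, as a fraction of the hardware's maximum output

EVENT_FEEDBACK = {
    "shot":    {"zone": "chest", "intensity": 0.25, "duration_ms": 120},
    "stabbed": {"zone": "torso", "intensity": 0.30, "duration_ms": 200},
    "graze":   {"zone": "arm",   "intensity": 0.10, "duration_ms": 80},
}

class HapticVest:
    def pulse(self, zone, intensity, duration_ms):
        # Clamp at the safety ceiling no matter what the game requests.
        intensity = min(intensity, MAX_INTENSITY)
        print(f"pulse {zone}: {intensity:.2f} for {duration_ms} ms")

def on_game_event(vest, event):
    fb = EVENT_FEEDBACK.get(event)
    if fb is None:
        return  # unknown events produce no physical response
    vest.pulse(**fb)

vest = HapticVest()
on_game_event(vest, "shot")           # produces a capped chest pulse
on_game_event(vest, "self_destruct")  # ignored: not in the approved table
```

Note the clamp in pulse(): whatever the game logic requests, the hardware never exceeds its ceiling, which is exactly the safeguard you’d want before letting software cause real physical sensations.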
If you’re the speculative type, you can probably see where this is going. There are lots of ways that these technologies could go overboard and lead to some pretty scary and nefarious AI applications. (Read also: 5 Ways Virtual Reality Will Augment Web3)
3) Self-Driving Vehicles
Here’s where we get back to that overriding concern about having a computer drive your car.
Driving a car is not a simple job. We talk about the ability of self-driving vehicles to navigate the streets, but we tend to gloss over a lot of the intuitive and instinctive parts of the human task of driving.
Watch this video of human drivers using full Tesla Autopilot on Boston streets (not without human intervention!) and you’ll see why many of these self-driving technologies are sadly behind the game when it comes to actually providing safe passage through traffic.
It only takes one sensor failure or another glitch to cause a fatality, and that’s one reason that we won’t be using full self-driving vehicles anytime soon, especially not on roads where you would normally encounter pedestrians. Some experts suggest that highway cargo delivery will come first, but even that assumes a level of safety that we may not yet have in today’s AI. (Read: Hacking Autonomous Vehicles: Is This Why We Don’t Have Self-Driving Cars Yet?)
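To see why a single bad reading is so dangerous, here’s a minimal sketch of the kind of sensor cross-checking a safety-critical system needs before trusting any one input. The sensor names, thresholds, and fallback behavior are invented for illustration; real autonomous-driving stacks are vastly more complex:

```python
# Minimal sketch: redundant sensor reads with a fail-safe fallback.
# All values and sensor names are invented for illustration.

def read_obstacle_distance(sensors):
    """Cross-check independent sensors; if they disagree, degrade safely."""
    readings = [s() for s in sensors]
    valid = [r for r in readings if r is not None]  # None models a dead sensor

    if len(valid) < 2:
        return None  # not enough agreement to trust a single reading

    spread = max(valid) - min(valid)
    if spread > 2.0:  # meters; sensors disagree badly
        return None

    return sum(valid) / len(valid)

def control_step(distance_m):
    if distance_m is None:
        return "SAFE_STOP"  # fail safe rather than guess
    return "BRAKE" if distance_m < 10.0 else "CRUISE"

lidar  = lambda: 12.4
radar  = lambda: 12.1
camera = lambda: None  # simulated glitch: camera pipeline dropped out

print(control_step(read_obstacle_distance([lidar, radar, camera])))   # CRUISE (2 of 3 agree)
print(control_step(read_obstacle_distance([camera, camera, camera]))) # SAFE_STOP
```

The design choice is the one regulators keep pushing for: when the system can’t corroborate what it sees, it stops instead of guessing.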
4) Computer Chip Implants
Concerns about internal microchips are as old as computer technology itself. Many of them are based on something even older: the “mark of the beast” from the biblical Book of Revelation, forcibly implanted under your skin.
Aside from that, though, people have other, more prosaic fears about having chip implants in their bodies, especially for cognitive purposes. A Pew Research Center study showed that, when respondents were presented with a range of emerging technologies, internal computer chips were far and away their biggest concern, rated the “most scary” of the bunch.
5) Weapons Technology
Here’s one that’s a little different, where AI simply gives humans the ability to do bad things.
A Verge story recently profiled a case in which an AI program was able to suggest no fewer than 40,000 different chemical weapon candidates within six hours.
The issue here isn’t that AI would do something threatening or dangerous to humanity. It’s that it gives human bad actors the keys to do those bad things themselves.
Applying AI to weapons, as a rule, makes those weapons more powerful, and weapons, to anybody with a lick of common sense, are pretty scary in general!
So these kinds of applications are squarely on the radar for people who believe that AI needs to be harnessed for good rather than for dangerous ends. (Read also: Is Blockchain the Solution to Gun Control?)
6) Big Equipment
Some people are scared of self-driving tractors, and others would give a wide berth to a trash compactor that seems to be doing its work without any human management or intervention.
Big equipment, people feel, should be controlled by humans and not some computer algorithm.
In this and many other ways, concerns about AI have to do with the combination of non-human cognitive systems and big physical pieces of hardware.
The Bottom Line
As long as AI’s work stays in cyberspace, we feel the technology is more contained. Is that a false sense of security? In some cases yes, and in other cases no. These examples are just the tip of the iceberg when it comes to “scary” AI. Other reports include more intangible terrors, like machines that can read your mind!
The drive toward explainable and transparent AI is part of the response to these and other scary scenarios. By implementing human-in-the-loop systems and promoting trusted AI that doesn’t rely on black-box algorithms, we’re trying to make sure we stay confident about where new technology is going. And that’s going to make all the difference in how we experience technology in the future!
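To make that last idea concrete, here’s a minimal sketch of the human-in-the-loop pattern, assuming a hypothetical classifier that returns a label plus a confidence score. The model, threshold, and function names are placeholders, not any real product’s API:

```python
# Minimal human-in-the-loop sketch: the AI acts alone only when it is
# confident; otherwise a person gets the final say. All names invented.

CONFIDENCE_THRESHOLD = 0.90

def fake_model(item):
    # Stand-in for any classifier; returns (label, confidence).
    return ("approve", 0.72) if "edge-case" in item else ("approve", 0.97)

def ask_human(item, label, confidence):
    print(f"Review needed: {item!r} -> {label} ({confidence:.0%} confident)")
    return input("Accept the AI's call? [y/n] ").strip().lower() == "y"

def decide(item):
    label, confidence = fake_model(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # the AI acts alone only when it's sure
    # Below the threshold, the decision is escalated to a person.
    return label if ask_human(item, label, confidence) else "escalate"

print(decide("routine request"))    # handled automatically
print(decide("edge-case request"))  # deferred to a human reviewer
```

The details vary from system to system, but the principle is the same: when the machine isn’t sure, a person decides.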