6 (Scary) Things AI Is Getting Better at Doing
AI is doing a lot of good - but some of its applications are ... troubling.
We’ve heard a lot about artificial intelligence over the past year – some good, and some bad. A lot of people have only a vague knowledge of how artificial intelligence is progressing and what methods and techniques are behind it.
However, some of us have an impending sense of doom. Let’s break that down a little bit – here are some areas of “progress” where AI is advancing very quickly in a way that can seem a little disturbing to us humans!
Your Smart House
What about the house that can not only tell what you’re saying, but also discern your emotions?
New speech recognition technologies are going beyond the phoneme to focus on inflection and nuance… and in some ways, they’ll get so sophisticated, they can take very subtle cues and respond accordingly.
“The device can not only [interpret] what a user is saying, but it will also break it down into the expression and context to perceive variations,” says Ash Turner, CEO of BankMyCell, an electronics trade-in site. “In the years to come, this voice recognition will integrate heavily with electrical objects around us and within our smart homes, turning on lights or TVs, closing electric blinds and so on.”
That’s all well and good unless the algorithm suddenly goes haywire in some 21st century version of a horror movie.
You’ll be tiptoeing around, careful not to wake your house up!
The Robot Coworker
Then there’s the tragic case of Wanda Holbrook, who was killed when an industrial robot crushed her skull while she was working in a nearby area of the plant.
Experts concluded that the robot should not have gone into Holbrook’s sector, but it did, and now the fear of robots haunts many workers who are participating in enterprise automation.
Although the idea of runaway robots can sound whimsical, it’s really nothing to joke about. There have to be stringent standards in place to make sure that robotics systems work the way they’re supposed to, in ways that don’t harm humans. Take self-driving cars – problems with autonomous vehicles can have severe and tragic consequences.
The robot as a machine might scare some people, but when you think about it, responsibility for tragic mistakes like this one increasingly rests with the AI itself. As we program robots in more sophisticated ways, we become less able to control them through traditional means. That sounds backward – but if a robot (or a self-driving car) does something it isn’t supposed to, it probably wasn’t explicitly directed to by a programmer. In other words, it’s the freedom given to the robot’s codebase and functionality that makes it physically dangerous.
Isaac Asimov’s famous First Law of Robotics – that a robot may not harm a human being – does not automatically permeate the enterprise uses of robots that raise serious liability concerns. That means we have to be really careful about how we deploy robotic brute force. (To learn more about robots, check out 5 Defining Qualities of Robots.)
Machines at War
A list of scary AI wouldn’t be complete without discussion of the ways that artificial intelligence is evolving military systems.
“I think that the ability of artificial intelligence when it comes to warfare is a field that is really scary,” says Stephen Hart at Cardswitcher.
“In a few short years, we’ve seen the development, growth and refinement of relatively autonomous types of technology that are designed to take human lives. It’s widely acknowledged that fully autonomous types of drones, missiles and robot soldiers are only a few years away from us. … I think giving machines the ability to choose who to kill is a very dangerous development. By what criteria will a programmer choose who is a ‘hostile’ and who is a ‘friend’ when developing artificial technology to be used in a warfare setting?” he continues.
Hart also points to a 1983 incident – often compared to the film “WarGames,” released that same year – in which a computerized early-warning system falsely signaled a nuclear attack, and it took human judgment to unravel the conflict and get the machines to stand down.
“Some experts have expressed concern about AI-led weapons triggering global conflicts, citing the infamous 1983 incident when a malfunction in a Soviet computer system accidentally put out a warning that U.S. missiles were heading towards the USSR and it was only through human intervention that a nuclear war was avoided,” Hart says.
All of this is enormously troubling to people who think about how artificial intelligence is being used in the defense industry. It’s bad enough that we have silos full of nuclear weapons just sitting around – we don’t want IT acting as a poor intermediary.
AI Medical Malpractice?
Medical malpractice is already a field that is fraught with problematic complexity.
Now, AI may be about to exacerbate some of those challenges.
David Haas is a health investigator with the Mesothelioma Cancer Alliance.
“As futuristic as it may seem, AI is beginning to aid doctors in making medical diagnoses,” Haas says. “Through break-neck data collection and processing speed, AI-equipped supercomputers are able to provide suggestions to doctors that a human may not have thought about. This is streamlining the ability for a faster diagnosis, which could be a life-or-death situation for some patients. However, these machines are still in their infancy and have been noted to make common errors which could seriously impact a patient’s wellbeing.” (For more on AI in medicine, see The 5 Most Amazing AI Advances in Health Care.)
Business as Usual
In some ways, some of the scariest advances are the ones that happen most quietly, without any big warning signs.
We’re already starting to re-evaluate how we use smartphones, with some experts tying heavy smartphone use to poorer mental health outcomes – and now connected technology is proliferating all around us.
“AI will also further integrate with AR in the future to personalize the AR experiences as people experience the world around them,” says Alen Paul Silverstein, CEO of Imagination Park, in a review of what’s already happening in AI. “AR will be transitioning from mobile devices to wearables (e.g., headsets) in the next 5 years, and as people walk through retail and city environments, personalized advertisements and promotions will be delivered directly to their lenses, powered through the mobile device Bluetooth connection. That is bringing us to the environment shown in the movie ‘Minority Report’ which starred Tom Cruise.”
Nor is Silverstein the only one using the “Minority Report” film as a warning – others worry about the uses of AI in law enforcement, such as the predictive policing depicted in that movie. The futuristic film is full of “signals” showing how technologies we are already building – pervasive tracking, targeted advertising, algorithmic prediction – could become a force to be reckoned with.
Fear What You See
Here’s one that’s a little less commonly understood – the ability of new AI systems to generate disturbing images.
“Automated image recognition – getting a computer to provide a text description of what’s in a digital image – is one of the areas where AI has made dramatic advances in recent years,” says Kentaro Toyama. Toyama is a W. K. Kellogg Professor of Community Information at the University of Michigan School of Information and the author of “Geek Heresy: Rescuing Social Change from the Cult of Technology.”
“In Deep Dream, the creators apply image recognition a bit in reverse,” he explains. “They start with an innocent underlying image, and an AI model that knows how to, say, recognize dogs in an image. But instead of using the AI to recognize what’s in the image, they use it to modify the image so that it becomes more dog-like. Doing this results in the images that are spectacularly like those from human dreams – some of the resulting images are hauntingly beautiful; others are deeply frightening.”
More from AltaML
- What are some of the foundational ways that career pros stand out in machine learning?
- What are some of the dangers of using machine learning impulsively without a business plan?
- Why is so much of machine learning behind the scenes - out of sight of the common user?
- What is 'precision and recall' in machine learning?
- Why are GPUs important for deep learning?
- What’s a simple way to describe bias and variance in machine learning?
- What are some of the main benefits of ensemble learning?
- Will machine learning make doctors obsolete?
- What's the difference between machine learning and data mining?
- How does a weighted or probabilistic approach help AI to move beyond a purely rules-based or deterministic approach?
- Why is data visualization useful for machine learning algorithms?
- How is AI technology going to affect the workplace in the near future?
- Why are some companies contemplating adding 'human feedback controls' to modern AI systems?
- What does 'connectionism' mean for business AI?