One of the hottest new artificial intelligence technologies is a life-sized robot made to look and act like a woman.
Her name is Sophia, and she's produced by Hanson Robotics, a Hong Kong-based company. So why is she being called Saudi Arabia's robot? Because that Gulf state has granted Sophia a key human right: citizenship.
This is making lots of headlines and triggering all sorts of debates about how fast artificial intelligence is advancing, and why we should care. One of the big issues is cybersecurity: how will the field adapt to these new technologies?
Sophia and similar technologies raise key cybersecurity problems that we haven't previously addressed. Here are some of the things that professionals and experts should be thinking about as they usher in robots that look, speak and act like us.
In general, a lifelike robotic interface is far more sophisticated than the interfaces we've been used to in the past, and that sophistication brings a range of new cybersecurity issues. In the technology world, people talk about keeping a “thin attack surface” – for example, in a locked-down hypervisor setup or in server-side security. A walking, talking robot, by contrast, presents a very thick attack surface: because its interfaces are so numerous and sophisticated, hackers and other bad actors have many more avenues for exploiting vulnerabilities.
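To make that contrast concrete, here's a minimal sketch in Python that treats an attack surface as nothing more than the set of externally reachable interfaces. The interface names are hypothetical, chosen only to illustrate how a humanoid robot multiplies entry points compared with a hardened server:

```python
# A minimal sketch (hypothetical interface names, not a real audit tool)
# modeling "attack surface" as the set of externally reachable interfaces.

HARDENED_HYPERVISOR = {
    "management_api",          # one tightly controlled entry point
}

HUMANOID_ROBOT = {
    "microphone",              # voice commands can be spoofed or hidden in audio
    "camera",                  # vision pipeline can be fooled by crafted images
    "wifi", "bluetooth",       # wireless links invite remote attacks
    "usb_service_port",        # physical access for firmware tampering
    "cloud_nlp_api",           # conversation engine depends on remote services
    "teleoperation_channel",   # remote-control path, high value to hijack
    "ota_firmware_update",     # the update mechanism itself must be secured
}

def surface_report(name, interfaces):
    """Print a rough, count-based view of how 'thick' a surface is."""
    print(f"{name}: {len(interfaces)} exposed interface(s)")
    for iface in sorted(interfaces):
        print(f"  - {iface}")

surface_report("Hardened hypervisor", HARDENED_HYPERVISOR)
surface_report("Humanoid robot", HUMANOID_ROBOT)
```

Counting interfaces is a crude measure, of course, but it captures the point: every additional channel a robot listens on is another channel a defender has to secure.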
One very specific type of cybersecurity problem is tangled up with a variety of social issues – you could call it the “impostor problem.” (It's distinct from “impostor syndrome,” the popular term for capable professionals who doubt their own legitimacy.)
Whatever you call it, the problem is this: as artificial intelligence imitates particular humans with ever greater success, it will get harder to be sure we aren't being subjected to elaborate deceptions that make us question the truth. You can already see people using brand-new technologies to mimic famous politicians, as in this video of Barack Obama featuring comedian Jordan Peele. The impostor problem will only grow as artificial intelligence gives us new windows into reverse-engineering human appearance and behavior.
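One plausible line of defense against impostor media is cryptographic provenance: whoever publishes a recording signs it, and viewers verify the tag before trusting it. Below is a minimal sketch using Python's standard hmac module; the key and media bytes are placeholders, and a real provenance system would use public-key signatures so verifiers never need the signing secret:

```python
import hmac
import hashlib

# A minimal provenance sketch: the publisher tags media with an HMAC so a
# verifier holding the same key can detect tampering or impostor content.
PUBLISHER_KEY = b"demo-secret-key"  # hypothetical; never hardcode real keys

def sign_media(media_bytes, key=PUBLISHER_KEY):
    """Return a hex tag binding the key holder to these exact bytes."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes, tag, key=PUBLISHER_KEY):
    """True only if the media is byte-for-byte what the key holder signed."""
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"official video frames..."
tag = sign_media(original)

print(verify_media(original, tag))                   # True: authentic
print(verify_media(b"deepfaked video frames", tag))  # False: impostor content
```

Signatures can't tell you whether a video is true, only whether it came unaltered from a particular source – but in a world of convincing fakes, verified provenance may be the best anchor we have.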
These new interfaces and capabilities will also escalate the ongoing arms race between security professionals and hackers. James Maude writes about this in an article on Xconomy, calling AI a “double-edged sword” for cybersecurity: attacking is generally less costly than defending, and both privacy and security are at stake. Extrapolate those arguments to the AI robot, and you can see how strength and capability bring danger and a need for discipline.
One other big new issue with Sophia and robots like her is that they're mobile.
We've gotten used to technologies like IBM's Watson that do extremely high-level cognitive work while sitting in a data center or some other stationary hardware. From the earliest mainframes to today's laptops, we've always used stationary hardware; even mobile phones are essentially pocket computers. Autonomous mobile robots are astoundingly different: they are moving machines that malicious parties could weaponize. A Reuters article on the slow pace of addressing robot cybersecurity flaws shows how, for example, compromised robots could be made to “lurch” or move suddenly and inappropriately, potentially causing harm.
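One mitigation, at least conceptually, is a low-level motion governor that sits between any controller – legitimate or hijacked – and the actuators, clamping commands to safe bounds. The sketch below is a hypothetical illustration, not a real robot API; the speed and acceleration limits are assumed values:

```python
# A minimal safety-limiter sketch (hypothetical limits, not a real robot API):
# even if an attacker injects motion commands, a low-level governor clamps
# them to safe bounds and rejects sudden jumps, blunting a forced "lurch."

from dataclasses import dataclass

MAX_SPEED = 0.5         # m/s   -- assumed safe walking speed
MAX_ACCELERATION = 0.3  # m/s^2 -- assumed safe change between commands

@dataclass
class MotionGovernor:
    last_speed: float = 0.0

    def clamp(self, requested_speed, dt):
        """Return a speed obeying both absolute and rate-of-change limits."""
        # Absolute limit: never exceed the safe speed in either direction.
        speed = max(-MAX_SPEED, min(MAX_SPEED, requested_speed))
        # Rate limit: refuse sudden jumps that would read as a "lurch."
        max_delta = MAX_ACCELERATION * dt
        low, high = self.last_speed - max_delta, self.last_speed + max_delta
        speed = max(low, min(high, speed))
        self.last_speed = speed
        return speed

governor = MotionGovernor()
# A hijacked controller demands full speed instantly; the governor ramps instead.
for _ in range(3):
    print(governor.clamp(requested_speed=5.0, dt=0.1))  # 0.03, 0.06, 0.09
```

The design point is that safety limits live below the software an attacker is most likely to reach, so even a fully compromised high-level controller can't command physically dangerous motion.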
In the end, Sophia and robots like her raise a slew of cybersecurity issues and other concerns. How will we distinguish legitimate activity from deceptive and illegitimate activity when the interface isn't a digitally connected network, but a mobile piece of hardware that can trick us into thinking it's acting in human ways? Crossing this bridge will require enormous amounts of technological and ethical work to make sure that humans keep the reins and that we use these very powerful technologies for the civic good.