Machine learning and artificial intelligence are rapidly changing many industries and reshaping how we think about technological progress. But they also raise some interesting contradictions in how we relate to our computers, our smartphones and whatever new interfaces come along next.
One of the big questions about artificial intelligence is how it will affect “authenticity” – how people verify and confirm what is real, whether in “meatspace” or in the digital world. Dig into how this works and you find an inherent tension between the limitations of our technology and the trust we place in the technology at our disposal.
One of the best examples comes from a recent Wired article showing how people with artificial intelligence and machine learning resources can take an image of a moving horse and superimpose zebra stripes, a process the author calls “zebrafication.”
It's neat and new, but it can also present a problem. When you see a zebra on a digital screen, how do you know it's a zebra, and not a horse with zebra stripes cleverly added by some tech-savvy person?
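One partial answer is provenance. If a trusted copy of the original image exists, even a single-pixel alteration changes the file's cryptographic fingerprint, so a doctored copy can be detected by comparison. Below is a minimal sketch using Python's standard-library `hashlib`; the byte strings standing in for image files are placeholders, not real data:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 fingerprint of a file; any alteration changes it completely."""
    return hashlib.sha256(image_bytes).hexdigest()

# Placeholder byte strings standing in for the original and published image files.
original = b"...raw bytes of the original horse photo..."
published = b"...raw bytes of the 'zebrafied' version..."

# If the published file has been altered in any way, the fingerprints will differ.
is_authentic = fingerprint(original) == fingerprint(published)
```

This only works when a trusted original is available to compare against – which is exactly the kind of infrastructure the authenticity problem demands.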
It might seem like a theoretical question, but the same questions will soon apply to the news we get in digital form. From politics to economics to religion, all of it relies on our ability to sift through information, to fact-check, and to distinguish truth from fiction, myth from reality. As new artificial intelligence tools offer more ways to manipulate images and video, that task is going to get much more difficult.
Another excellent example is new voice technology. In an article a few years ago, we covered a budding IT project that took the voices of famous people and built voice models that could make those people “say” anything, even from beyond the grave.
Again, this is neat and interesting technology, and it seems like a fun use of speech processing. But it presents a real problem once we make the jump from analog and undoctored digital voice recordings to synthetic, fabricated voices. How will you know who's speaking to you – on the telephone, on the TV, or right in your ear?
Specifically, the ability to alter audio, images and video in sophisticated ways can upend some of our most valued ideas as a society. How will people trust what they hear and see in the political world? What about the law – will those accused of crimes gain new grounds for appeal based on the potential alteration of evidence?
Another way to understand these problems is to look at science fiction. From Ray Bradbury's “Fahrenheit 451” to George Orwell's “1984” and beyond, storytellers have repeatedly warned us that technology can be put to both useful and harmful ends. One reason so many experts and heads of IT companies are calling for “explainable artificial intelligence” and ethics panels is that they understand the stakes: if we can't scrutinize and control these technologies, we won't be able to trust them. Rather than helping us achieve our goals, they could end up hurting us, partly by causing the kind of social chaos that arises when we can't get a handle on truth and reality.

Part of the good news, though, is that technologies like blockchain, which provide transactional authentication, may help when applied to digital records.
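That last point can be made concrete. The core idea behind blockchain-style authentication of digital records is a hash chain: each record commits to a fingerprint of its content and to the previous record, so altering any record breaks every link after it. Here is a minimal sketch using SHA-256; the record format and function names are illustrative, not drawn from any particular blockchain:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_record(content: bytes, prev_hash: str) -> dict:
    """Create a record committing to its content and to the previous record's hash."""
    record = {
        "content_hash": sha256_hex(content),
        "prev_hash": prev_hash,
    }
    # Hash the record itself so later records can commit to it in turn.
    record["record_hash"] = sha256_hex(json.dumps(record, sort_keys=True).encode())
    return record

def verify_chain(records: list, contents: list) -> bool:
    """Check every record against its content and against the link to its predecessor."""
    prev = "0" * 64  # conventional all-zero hash for the first record
    for record, content in zip(records, contents):
        if record["content_hash"] != sha256_hex(content):
            return False  # the content was altered after it was recorded
        if record["prev_hash"] != prev:
            return False  # the chain itself was tampered with
        body = {"content_hash": record["content_hash"], "prev_hash": record["prev_hash"]}
        if record["record_hash"] != sha256_hex(json.dumps(body, sort_keys=True).encode()):
            return False  # the record's own hash doesn't match its fields
        prev = record["record_hash"]
    return True
```

In practice a real ledger adds distributed consensus, timestamps and signatures on top of this structure, but the tamper-evidence comes from the chaining itself: swap in an altered image or article and verification fails from that point forward.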