The dark side of technology was among the topics explored at NYC Media Lab’s second Machines + Media conference, sponsored and hosted by Bloomberg at its global headquarters in New York on May 15. Though some sessions focused on surveying the technology currently available to media, even those raised the specter of manipulation and misinformation.

A Good Story Is More Compelling than the Whole Truth

In the session entitled “State of the Art,” Gilad Lotan, VP and head of data science at BuzzFeed, pointed out that the problem of fake or misleading news cannot be blamed solely on media companies or new technology. “People aren’t necessarily searching for facts,” he said. “Readers want stories.”

He added that feeding that desire for stories and favored narratives can result in fake news that is “more manipulation than false information.” In fact, the story delivered may not include anything false at all. What makes it fake is that the reported facts are cherry-picked and removed from the larger “context” in order to lead “readers toward a conclusion.”

Along the same lines, in the “Platform + Media” session, Kelly Born of the William and Flora Hewlett Foundation’s U.S. Democracy Initiative said, “People like news that reinforces their beliefs, and especially if it triggers outrage.” She warned that media’s capitalization on that tendency can create a “perfect storm” that brings out the worst in us. This is exacerbated by the possibility of completely fabricated presentations.

Is Seeing Still Believing?

Technology plays a role in telling the story through images and video. We usually believe what we see unless there are obvious signs of manipulation. However, as AI improves the realism of fake videos, it is now possible to literally put words in the mouths of public figures. (AI isn't only being used for nefarious purposes. Check out some constructive ones in 5 Ways Companies May Want to Consider Using AI.)

The danger of AI’s production of highly believable videos was demonstrated in the session entitled “Solutions for the Dark Side of Machine Driven Media.” It began with a video featuring Barack Obama saying things the former president never actually said. The mouth movements were slightly out of sync with the words, giving the video some telltale marks of a fake, and most of the audience said they could recognize it as one.

That Obama video manipulation didn’t achieve the realism of resurrecting Peter Cushing as Grand Moff Tarkin for “Rogue One.” As a New York Times article explained, the effect was achieved by having an actor wear “motion-capture materials on his head” while in front of the camera “so that his face could be replaced with a digital re-creation of Cushing’s.” John Knoll, the film’s chief creative officer and senior visual effects supervisor, called it “a super high-tech and labor-intensive version of doing makeup.”

Undetectable Fakes

Now, thanks to AI advances, specifically generative adversarial networks (GANs), it’s easier than ever to achieve such movie-magic effects. In a GAN, two AI systems are pitted against each other: a generator refines fake footage until it is good enough to be accepted as real by a discriminator trained to detect fakes.
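To make that adversarial dynamic concrete, here is a minimal, hypothetical sketch of a GAN training loop in Python (assuming PyTorch). The toy networks and one-dimensional data are stand-ins for the far larger video-synthesis models discussed above; none of this code comes from the systems mentioned at the conference.

```python
# Minimal GAN sketch (illustrative only, not a video-synthesis system):
# a generator learns to produce samples the discriminator accepts as real.
import torch
import torch.nn as nn

# Generator: maps random noise to a fake one-dimensional "sample."
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    # "Real" data: samples from the target distribution (mean 2.0 here).
    real = torch.randn(64, 1) * 0.5 + 2.0
    fake = G(torch.randn(64, 8))

    # Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

As training alternates, the generator’s output drifts toward the real distribution. The same arms-race principle, scaled up to convolutional networks and video frames, is what makes GAN-refined fakes so hard to detect.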

In this video, researchers demonstrate how an actor can direct the movements of a public figure’s face to produce any expression and pronouncement with a level of fidelity not usually found in doctored videos.

While not every person who plays around with videos has such a system available at present, the software will be within reach of more people in the very near future.

Possible Consequences

Dhruv Ghulati, founder and CEO of Factmata, said he is “very worried about” how this ability could be applied to placing not just world leaders’ but also CEOs’ and celebrities’ faces into videos as purported proof of their saying or doing things they have not. Such a video’s destructive power would be intensified by botnets driving it viral.

It means we “can no longer trust our lying eyes,” declared Mor Naaman, associate professor of information science at Cornell Tech. He was concerned that this would lead to a broader loss of trust.

Others are concerned about fanning the flames of violence. Already, “just slightly modified videos” are being “used to cause violence in the developing world,” observed Aviv Ovadya, chief technologist, Center for Social Media Responsibility. (Social media can be an effective way of spreading false information. Check out Top 4 Most Devastating Twitter Feed Hacks.)

Sarah Hudson, partner, Investing in US, expressed concern that manipulated images “at scale” pose “an incredible threat to public safety.”

Proactive Planning

It is imperative to “prevent as much of that abuse as possible,” Ovadya said. That’s why it is important to “think ahead of time while developing technology” about “how to mitigate the bad effects” and to “devote resources to addressing those unintended consequences.”

It’s impossible to turn back the clock on progress, but we must ensure that the direction we take is thought out and planned. That means not just working on technology to see what can be done, but also thinking about what could be done with it that should be averted.

The way forward for technological innovation is to think through the possible effects and plan safeguards before the dangers materialize. Doing so will require cooperation among researchers, technology companies, and government agencies to establish standards and best practices that people will adhere to. That should be a priority for all concerned.