Can AI Detect Fake News?

KEY TAKEAWAYS

Researchers are turning to AI to combat fake news. But can it really help, or will it just make things worse?

Fake news is expected to be a major thorn in the side of the upcoming presidential election, not to mention a corrosive influence on public discourse in general. In today’s connected society, discerning fact from fiction has become increasingly difficult, which is why some researchers are starting to focus on the power of artificial intelligence to address the problem.

The hope, of course, is that machines, or more accurately algorithms, will be better than humans at spotting lies. But is this a realistic expectation, or just another case of throwing technology at a seemingly intractable problem?

To Catch a Thief...

One way data scientists plan to sharpen AI’s acumen in this area is by having it generate fake news. Researchers at the University of Washington and the Allen Institute for AI have developed and publicly released Grover, a natural language processing engine designed to create false stories on a wide range of topics. While this may seem counterproductive at first, it is in fact a fairly common AI training tactic in which one machine analyzes the output of another. In this way, the detection side can be brought up to speed much more quickly than by relying on actual fake news found in the wild. The institute claims that Grover can already operate at 92% accuracy, but it is important to note that it is only adept at distinguishing AI-generated content from human-generated content, meaning that a clever person could still sneak a false story past it. (To learn more, check out The Technologies Around Fighting Fake News.)
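To make the tactic concrete, here is a minimal sketch of the generate-then-detect idea: train a simple classifier to separate human-written text from machine-generated text. The toy corpora, TF-IDF features and logistic regression model are illustrative assumptions for this example; Grover itself pairs a large neural generator with a matching neural discriminator.

```python
# Minimal sketch of the "train a detector on a generator's output" tactic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder corpora -- a real experiment would load thousands of
# human-written articles and machine-generated stand-ins (e.g. Grover output).
human_articles = [
    "The city council voted 7-2 on Tuesday to approve next year's budget.",
    "Officials confirmed the bridge will close for repairs starting in May.",
    "The company reported quarterly earnings slightly below expectations.",
    "Local schools will reopen Monday after the storm damage is cleared.",
]
generated_articles = [
    "Sources say the election result was decided before any votes were cast.",
    "A new study proves that coffee cures all known diseases instantly.",
    "The senator secretly owns every newspaper in the hemisphere.",
    "Scientists announced a miracle fuel that costs nothing to produce.",
]

texts = human_articles + generated_articles
labels = [0] * len(human_articles) + [1] * len(generated_articles)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=0)

# Surface word statistics often differ between human and machine text,
# which is why even simple features carry some detection signal.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print("detection accuracy:", accuracy_score(y_test, preds))
```

Even this crude setup captures the core loop: the better the generator gets, the harder, and the more instructive, the discrimination task becomes.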

In the right hands, of course, Grover can quickly advance our understanding of how fake news is created and how it spreads, and this can theoretically be used to thwart it in the real world. But as Futurism.com noted recently, some experts who have taken the system for a test run are alarmed at how effective it is at creating believable lies, and even mimicking the writing styles of legitimate news outlets like the Wall Street Journal and the New York Times.

But since lying is an inherently intuitive and emotion-driven act, is it possible that even the smartest machines, which are still driven by cold, hard logic, can ever achieve the level of contextual understanding necessary to spot a lie? Unbabel’s Maria Almeida noted recently that while some iterations may get pretty good at this, no algorithm can hope to achieve full human understanding. This means AI might be able to make dramatic improvements in fact-checking and comparative analysis, but the final call is best left to trained experts.

Ironically, however, this capability may prove most useful in detecting the deepfake videos that are starting to make the rounds on social media. Because AI can analyze visual data right down to the individual pixel, it will be much more adept at spotting altered images than altered words and concepts.
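As a rough illustration of what pixel-level analysis means in practice, the sketch below (using OpenCV) scores each video frame’s high-frequency detail and flags sudden drops, a crude stand-in for the blending artifacts some face swaps leave behind. The file name, threshold and heuristic are assumptions for the example; real deepfake detectors rely on neural networks trained on face crops.

```python
import cv2
import numpy as np

def anomalous_frames(path, drop_ratio=0.5):
    """Flag frames whose high-frequency detail drops sharply relative to
    the clip's median -- a crude proxy for smoothing/blending artifacts."""
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Variance of the Laplacian measures high-frequency energy per frame.
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    cap.release()
    if not scores:
        return []
    baseline = np.median(scores)
    return [i for i, s in enumerate(scores) if s < drop_ratio * baseline]

# "clip.mp4" is a hypothetical input file for this example.
print("suspect frames:", anomalous_frames("clip.mp4"))
```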


Still, argues Forbes’ Charles Towers-Clark, the central problem with fake news is not that a few people are creating it, but that so many people are influenced by it. People tend to believe what they want to believe, not what the facts lead them to believe. So even if a highly developed AI engine declares that their belief is wrong, people will be more apt to doubt the machine than themselves.

“Implementing machine learning to combat the spread of fake news is admirable,” he says, “and there is a need to address this problem as the trustworthiness of major media news outlets is called into question. But with the spread of misinformation compounded by social media, can detecting and revealing the sources of fake news overcome the human instinct to believe what we are told?”

The real challenge, then, is not to identify and debunk fake news but to understand why it tends to spread across social media so much faster than real news. In part, this is due to the nature of fake news itself, which tends to be exciting and salacious compared with the relative tedium of reality. In the end, is it realistic to expect technology to correct what is essentially a non-technical problem? (For more on how AI is changing media, see 5 AI Advances in Publishing and Media.)

Stopping the Spread

This is why it’s important to focus AI on the technical aspects of fake news, not the human ones, says ZDNet’s Robin Harris. And indeed, most researchers are training AI to key in on signals like the difference between natural and artificial propagation patterns across social networks. Metrics such as conversion tree rates, retweet timing and overall response data can be used to identify and neutralize disinformation campaigns even when their sources are hidden under layers of digital subterfuge. At the same time, AI can be used to manage other technologies, like blockchain, to maintain traceable, verifiable information channels.
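To make those timing signals concrete, here is a minimal sketch of the kind of features such a system might compute from retweet timestamps. The feature set, thresholds and sample data are illustrative assumptions; a production system would feed features like these into a trained classifier over far richer data.

```python
import numpy as np

def propagation_features(retweet_times):
    """retweet_times: sorted UNIX timestamps of retweets of one story."""
    gaps = np.diff(np.asarray(retweet_times, dtype=float))
    return {
        "mean_gap_s": gaps.mean(),
        # Organic sharing is bursty; near-constant gaps (a low coefficient
        # of variation) suggest scheduled, bot-driven amplification.
        "gap_cv": gaps.std() / gaps.mean(),
    }

def looks_coordinated(feats, max_mean_gap=5.0, max_cv=0.3):
    # Illustrative thresholds: very fast AND very regular retweeting.
    return feats["mean_gap_s"] < max_mean_gap and feats["gap_cv"] < max_cv

organic = propagation_features([0, 40, 55, 300, 320, 900])
botnet = propagation_features([0, 2, 4, 6, 8, 10])
print(looks_coordinated(organic), looks_coordinated(botnet))  # False True
```

The design point is that timing patterns are hard for a botnet to fake without slowing itself down, which is exactly the trade-off a defender wants to force.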

The fact is that fake news is not a new phenomenon. From the yellow journalism of the early 20th century all the way back to the propaganda of the earliest civilizations, hoodwinking the public is a time-honored tradition for sitting governments and revolutionaries alike. The difference today is that digital technology has democratized this capability to the point that nearly anybody can post a lie and watch it spread across the globe in a matter of hours.

Technologies like AI can certainly help bring some clarity to this confusion, but only people can fully understand, and judge, the truth.


Arthur Cole
Technology Writer

Arthur Cole is a freelance technology journalist who has been covering IT and enterprise developments for more than 20 years. He contributes to a wide variety of leading technology websites, including IT Business Edge, Enterprise Networking Planet, Point B and Beyond and multiple vendor services.