Can AI help in the battle against fake news, or is it just making things worse?
Artificial intelligence (AI) and fake news seem inescapably linked. On one hand, critics of the newest technologies claim that AI and automation have been instrumental in unleashing an apocalypse of blatantly false stories upon a helpless public. On the other hand, some of the best scientific minds on the planet, in their relentless quest for truth, are already developing new AI-powered solutions that can detect deceitful stories. Will they be up to the challenge?
Truth be told, it is still too early to give a definitive answer, since these technologies are still under development. What is clear, however, is the scale of investment they are attracting from some of the largest social media powerhouses and content publishers. Google itself recently announced that the Google News platform will deploy potent machine learning software to weed out misleading material.
One of the basic reasons fake news so quickly turned into an epidemic is that it is presented in a way that is more appealing or engaging than legitimate reporting. Some AI systems are built on this assumption, and their machine learning algorithms have already been trained for years in the fight against spam and phishing emails.
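To illustrate the spam-filtering heritage mentioned above, here is a minimal naive Bayes text classifier. This is only a toy sketch: the training examples, labels and word counts are invented for illustration, and production spam filters use far larger corpora and richer features.

```python
from collections import Counter
import math

# Toy training set: these examples and labels are invented for illustration.
TRAIN = [
    ("you won a free prize click now", "spam"),
    ("claim your free money today", "spam"),
    ("meeting moved to three pm", "ham"),
    ("quarterly report attached for review", "ham"),
]

def train(examples):
    """Count word and document frequencies per class."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the class with the higher log posterior (add-one smoothing)."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.split():
            score += math.log((counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

counts, totals = train(TRAIN)
print(classify("free prize money now", counts, totals))  # prints "spam"
```

The same machinery, retrained on examples of deceptive versus legitimate stories, is the starting point for the fake-news detectors discussed below.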
This method is currently being tested by a collective of experts known as the Fake News Challenge, who have volunteered for the crusade against fake news. Their AI operates through stance detection: estimating the relative perspective (or stance) of an article's body text compared with its headline. Thanks to these text-analysis capabilities, the AI can evaluate the likelihood that a story was written by a real human rather than a spambot by comparing the actual content against the headline. It's good AI vs. evil AI, and if that sounds like Autobots vs. Decepticons – well, that's exactly what it is.
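Real stance detectors, such as Fake News Challenge entries, are trained classifiers that label a headline–body pair as agree, disagree, discuss or unrelated. The toy function below is a much-simplified sketch of the underlying idea: it merely checks whether a body text shares enough vocabulary with its headline. The threshold and the two labels used here are assumptions made for illustration, not the challenge's actual method.

```python
from collections import Counter
import math

def bow_cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def stance(headline, body, threshold=0.15):
    """Crude stance proxy: does the body even discuss the headline's topic?

    The threshold and labels are illustrative assumptions; real stance
    detectors are trained on labeled headline-body pairs.
    """
    return "discusses" if bow_cosine(headline, body) >= threshold else "unrelated"
```

A body that shares almost no vocabulary with its headline comes back "unrelated" – a cheap signal that the headline may be clickbait bolted onto unrelated content.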
Another method involves a rapid, automated comparison of similar news stories posted across multiple outlets, to check how much the reported facts differ. Ideally, if a specific website is found to be spreading fake news, it could be flagged as an unreliable source and excluded from news feeds. Google News will probably use this method, since the company announced that the platform will draw content from some yet-to-be-defined "trusted news sources." This way, people will be steered away from extreme content – as happened on YouTube with flat-Earthers – and directed toward properly vetted "authoritative sources."
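A crude sketch of this cross-source comparison: given the same story as reported by several outlets (the data structure, outlet names and threshold below are all invented for illustration), flag any source whose version shares almost no wording with every other report. A real system would compare extracted claims rather than raw word overlap.

```python
def jaccard(a, b):
    """Jaccard similarity of two texts' word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def flag_outliers(reports, threshold=0.15):
    """Flag sources whose version of a story diverges from every other report.

    `reports` maps source name -> article text; structure and threshold
    are illustrative assumptions.
    """
    flagged = []
    for src, text in reports.items():
        others = [jaccard(text, other) for s, other in reports.items() if s != src]
        if others and max(others) < threshold:
            flagged.append(src)
    return flagged

# Hypothetical versions of the "same" story from four outlets.
reports = {
    "outlet_a": "storm hits coast causing major flooding in town",
    "outlet_b": "major storm causes flooding along the coast",
    "outlet_c": "coastal town hit by storm and flooding",
    "outlet_d": "celebrity launches new perfume line",
}
print(flag_outliers(reports))  # prints "['outlet_d']"
```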
Lastly, other, simpler algorithms could analyze a text to scour for blatant grammar, punctuation and spelling errors; spot phony or fabricated pictures; and cross-check the deconstructed semantic components of an article against reputable sources.
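The simplest of these checks, spelling-error density, can be sketched in a few lines. The tiny word list below is invented for illustration; a real checker would load a full dictionary and handle grammar and punctuation as well.

```python
import re

# Tiny illustrative word list; a real checker would load a full dictionary.
KNOWN_WORDS = {"the", "a", "report", "was", "published", "by", "officials",
               "yesterday", "new", "study", "shows", "that", "in", "town"}

def misspelling_rate(text):
    """Fraction of words not found in the word list: a crude spelling signal."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

print(misspelling_rate("teh reprot waz publihsed yesterdy"))  # prints "1.0"
```

An article whose error rate sits far above the baseline for professional outlets would be one weak signal, among the others described above, of a hastily fabricated piece.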