Disinformation, fake news, and deepfakes spread misleading and often malicious narratives. They have become a global epidemic, exposing people around the world to cyberattacks, political and social manipulation, and damage to the geopolitical landscape.
In the U.S., fake local news sites now outnumber legitimate newspapers. Supercharged by artificial intelligence, disinformation is not slowing down; it is scaling up and becoming more effective.
Most concerning of all is the normalization of fake news, as it becomes harder every day to tell facts from lies.
Key Takeaways
- Fake news and deepfakes are widespread, outnumbering legitimate news sources in some sectors.
- We speak to experts about how people are becoming accustomed to disinformation, which leads to “disinformation fatigue” and hinders the ability to discern truth.
- False narratives can incite protests and social unrest, threatening public safety and democracy.
- Individuals, social media platforms, governments, and educational institutions all need to take steps to combat disinformation.
Fake News Sites Now Outnumber Real News Media
A recent NewsGuard study found a staggering 1,265 fake local news websites in the U.S., 4% more than the number of real newspaper websites still operating in the country.
To make matters worse, driven by competition, shifting media consumption habits, and falling revenue, countless American newspapers have faced, or are facing, their final days. 2023 was a bad year for journalism in the U.S.: Northwestern University predicted in 2022 that the country will have lost one-third of all its newspapers by 2025.
The trend affects countries around the world as new generations choose to get their news online. Traditional journalism, built on principles such as investigation and fact-checking, is being left behind by the masses. The NewsGuard report explains the impact of the rise of fake news.
“The odds are now better than 50-50 that if you see a news website purporting to cover local news, it’s fake.”
NewsGuard explained that fake news is often funded domestically and can be as harmful to voters’ faith in media and democracy as the foreign disinformation efforts seen in the 2016 and 2020 presidential elections.
“With traditional newspapers disappearing… pink slime sites [disinformation and fake news sites] are rushing in to fill the void,” NewsGuard’s report said. “Consequently, millions of Americans are left without legitimate local coverage.”
Russia: The Master of Disinformation
Russia tops the list of countries engaged in disinformation, fake news, and online manipulation. This year alone, bad actors have launched disinformation campaigns related to the war in Ukraine, the Paris 2024 Olympics, the Israel-Hamas conflict, and the European Parliament elections.
For example, in a recent campaign, Russian bots affiliated with a Kremlin disinformation network published 120,000 fake anti-Ukraine quotes falsely attributed to celebrities, including Jennifer Aniston and Scarlett Johansson.
The quotes appeared over celebrity photos, displaying messages calling for an end to aid for Ukraine and describing a European collapse. The images were published on the platform X (formerly Twitter) and then retweeted by bots more than 120,000 times.
A previous NewsGuard report found in 2023 that 74% of wartime misinformation linked to the Israel-Hamas conflict spread on X, the social media platform owned by Elon Musk.
The Dangers of Normalizing Disinformation
Techopedia reached out to experts to talk about the risks of normalizing malicious fake news and disinformation campaigns.
Nick Hyatt, Director of Threat Intelligence at Blackpoint Cyber, a provider of advanced MDR+R technology and a human-powered security operations center (SOC), told Techopedia that the biggest risk of disinformation is already upon us.
“The biggest risk of normalizing these types of cyber incidents has already happened — just look at the number of news stories that run every day discussing not only politically motivated disinformation campaigns, but pig butchering schemes, financial fraud schemes, and others.
“The combination of social media, GenAI progress, and [the fact that] normal users do not understand the interrelation of all of them has made it difficult for the everyday person to discern fact from fiction.”
Hyatt explained that the average person cannot trust what they see and read and does not always have the skills or ability to discern fact from fiction.
Disinformation Fatigue Sets In
Andrew Brown, CEO of Propel Tech, a UK-based software development company, also spoke to Techopedia about the issue.
“The normalization of disinformation is hugely concerning and has real-world impacts.”
Brown explained that we are already seeing a type of disinformation fatigue where the public assumes that they cannot trust anything they see online, regardless of whether it is a legitimate news source such as the BBC or an anonymous Facebook post.
“People are becoming desensitized to the dangers of this, and search engines such as Google and platforms such as TikTok are struggling to keep up and remove dangerous inaccurate information.”
Brown added that widespread disinformation campaigns from state actors often precede serious cyberattacks, muddying the waters with confusion. This challenges security experts analyzing the attacks as they try to understand who is behind them, what systems were affected, and how severe the incident was.
Meanwhile, efforts by companies and governments rushing to patch vulnerabilities and shore up defenses are further complicated by large-scale AI-driven fake news attacks.
Brown explained that the risks of misinformation go well beyond clouding facts.
“Personal risks that result from cyberattacks usually center around identity fraud and theft of money or cryptocurrency from online accounts.”
“Public risks can create national security threats by leaking vital security information or by undermining democracy and impacting elections.”
Additionally, public figures defamed by online disinformation also feel a personal impact.
“The wider implications are seen in distrust in government, politicians, and public institutions, which could lead to social unrest and civil disobedience, as we have already seen with the January 6 insurrection in Washington,” Brown said.
“Realistic and sophisticated deepfakes are not a problem for the future; they are here now and being used to trick people.”
How Disinformation Spreads Into Violence
Protests around the world are on the rise, with thousands breaking out since the Israel-Hamas conflict began. Between 2006 and 2020, the number of yearly global protests tripled.
Numerous studies, such as reports from the Nonviolence Project at the University of Wisconsin-Madison, link the increase in protests to social media.
Morgan Wright, Chief Security Advisor at SentinelOne, an American cybersecurity company listed on the NYSE, told Techopedia that when false information becomes commonplace, it erodes the public’s sense of reality and trust.
“Disinformation is created by adversaries with an agenda, but when a human, perhaps inadvertently, sees it, accepts it as fact, and shares it, it becomes misinformation.
“That’s how and why – without even realizing it – so many people contribute to spreading false narratives designed to manipulate behavior and control outcomes.”
Ultimately, the normalization of disinformation threatens the integrity of informed public discourse, which is essential for a functioning and healthy democracy. Techopedia asked Wright from SentinelOne what other security risks this trend represents for users.
“Violence — pure and simple.”
Wright said his company is witnessing massive amounts of disinformation related to the wars between Israel and Hamas and between Russia and Ukraine. “Concerning Israel, disinformation can and probably has resulted in violent demonstrations and false and inflammatory narratives designed to stoke tensions and cause protests,” Wright explained.
So, What Should Be Done? Experts Weigh In
While the findings of recent reports on fake news and disinformation may not come as a surprise, the risks of living in a world where people have a hard time telling facts from fiction are too severe to ignore. Techopedia asked experts what should be done to reverse this situation.
Wright from SentinelOne spoke about the importance of a more proactive approach.
“People need to understand that a lot of the content they consume online is designed to reach them and influence them in some way. Whether it’s disinformation from online bots or misinformation from someone you trust, it’s important to verify the information itself before you act on it or spread it.”
Hyatt from Blackpoint Cyber also discussed the role of people in this crisis and took aim at social media companies.
“I think people should realize that social media companies and other places where disinformation campaigns occur do not have their interests in mind.”
Hyatt said that since these services are free, the users are the product: it is more important to keep people coming back (engagement) than to be an honest source of information.
“This is not necessarily their fault [the people], as social media companies and generative AI companies have greatly contributed to the success of these types of campaigns,” Hyatt said.
“We have created a monster of our own making.”
Brown from Propel Tech said that businesses, governments, and educational institutions will need an ‘always on’ learning culture regarding deepfakes as the technology progresses.
“As technology changes, so will the ways we have to protect ourselves against deepfakes and other disinformation.”
The Bottom Line
Disinformation and fake news have become a global epidemic, eroding trust in institutions and fueling violence. AI-powered deepfakes further blur the line between reality and fiction, making it increasingly difficult to discern truth from lies.
The normalization of disinformation is a major threat to democracy and public safety. Experts urge individuals to be critical consumers of information, verifying content before sharing it. Social media platforms need to take a more proactive role in curbing disinformation campaigns, while governments and educational institutions must adapt to combat this evolving threat.