Question

Does AI Help or Hurt the Fight Against Propaganda?

Answer

Artificial intelligence (AI) has evolved to the point where it can generate life-like text and speech, making it a valuable tool for content development and personal assistance.

But can it also be used to create propaganda, producing false or misleading statements to sway public opinion and even fuel hostility between people?

Can AI Be Used to Generate Propaganda?

There are few hard restrictions on what people can create with generative AI (GAI), the technology behind ChatGPT and other large language models (LLMs), or with image generators such as Midjourney and DALL-E. These tools are designed to produce well-crafted, convincing output, whether text or imagery, on any subject the creator desires.

Ideally, these algorithms draw from vetted sources to prevent misinformation or disinformation from influencing their results. ChatGPT, for example, launched with a training cutoff roughly two years in the past, in part because more recent information is often still subject to reinterpretation.

However, this is not a perfect safeguard. Given the right prompts, GAI can produce false information, or even fabricate claims outright, if that is what the user appears to want.

Is There Any Evidence That AI is Being Used for Propaganda Today?

Media watchdogs have already flagged multiple examples of AI-generated deepfake videos in the past few years. One fake purporting to show President Biden making transphobic remarks went viral before it was exposed as a hoax, while a doctored photo of Donald Trump hugging Dr. Anthony Fauci, former head of the U.S. National Institute of Allergy and Infectious Diseases and a perennial target of U.S. conservatives, was circulated by the campaign of one of Trump's political rivals.

When atrocities arise in the world, as in the Israeli-Palestinian conflict in late 2023, it is very simple for anyone to create distressing, emotive images and post them on social media.

Even once they are fact-checked, the work of emotionally manipulating an audience has already been done (warning: potentially distressing, although false, image at the link).

Going forward, there is every reason to expect AI-generated text, speech, images, and video to surge on social media and even traditional media whenever rival organizations vie to control the narrative surrounding tumultuous global events.

Are There Ways to Tell When False Information Is Being Created or Spread Using AI?

When it comes to photos and videos, alterations can often be detected. Both JPEG, the most popular image format, and the MPEG family of video standards carry a wealth of metadata, such as timestamps, encoder profiles, and editing-software tags, that can be checked for inconsistencies revealing that a file has been changed.
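
As a simple illustration of the kind of check this involves, the sketch below uses Python's Pillow library to pull a few EXIF fields from a JPEG and flag common signs of editing. The specific fields are illustrative; real forensic tools examine far more signals than this.

```python
# A minimal sketch of a JPEG metadata consistency check using Pillow.
# The handful of fields inspected here is illustrative only.
from PIL import Image
from PIL.ExifTags import TAGS

def metadata_red_flags(path):
    """Return human-readable warnings gleaned from a JPEG's EXIF data."""
    exif = Image.open(path).getexif()
    if not exif:
        # Many editors and social platforms strip EXIF entirely.
        return ["No EXIF metadata (possibly stripped during editing)"]

    base = {TAGS.get(tag, tag): value for tag, value in exif.items()}
    # Capture-time fields live in the Exif sub-IFD (tag 0x8769).
    sub = {TAGS.get(tag, tag): value
           for tag, value in exif.get_ifd(0x8769).items()}

    flags = []
    if "Software" in base:
        # Editing programs routinely stamp themselves into this field.
        flags.append(f"Edited with: {base['Software']}")
    modified = base.get("DateTime")
    captured = sub.get("DateTimeOriginal")
    if modified and captured and modified != captured:
        # A file re-saved after capture carries diverging timestamps.
        flags.append(f"Modified {modified}, originally captured {captured}")
    return flags
```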

Minute differences in pixel characteristics, such as compression artifacts that vary from one region of an image to another, can also reveal whether and how an image has been altered.
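
Error level analysis (ELA) is one widely used pixel-level technique: re-save a JPEG at a known quality and diff the result against the original, since regions spliced in from another source often recompress differently from their surroundings. A bare-bones sketch with Pillow, offered as an illustration rather than a production forensic tool:

```python
# A bare-bones error level analysis (ELA) sketch using Pillow.
from PIL import Image, ImageChops
import io

def error_level_image(path, quality=90):
    """Diff an image against a freshly recompressed copy of itself."""
    original = Image.open(path).convert("RGB")

    # Recompress in memory at a fixed JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Bright regions in the result recompressed very differently,
    # which can indicate spliced or retouched content.
    return ImageChops.difference(original, resaved)

# Usage: error_level_image("suspect.jpg").show()
```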

Ironically, AI can be a valuable tool for spotting these kinds of fakes. Enhanced image analysis can automate what would otherwise be a time-consuming process, and AI can quickly check one image against others in the digital universe to see if it's a match.

The problem is that any determination by one group that an image is real or fake is not the final say on the matter. When rival groups proclaim their own AI analysis to be the correct one, the public is left to decide on its own.
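
The matching step mentioned above can be sketched with perceptual hashing, here using Pillow and the third-party imagehash library. The local list of known images is a hypothetical stand-in for the indexes covering billions of pictures that real reverse-image-search systems maintain.

```python
# A minimal sketch of matching a suspect image against known originals
# via perceptual hashing, using Pillow and the imagehash library.
from PIL import Image
import imagehash

def near_matches(suspect_path, known_paths, max_distance=8):
    """Return (path, distance) pairs for visually similar images.

    A small Hamming distance between perceptual hashes suggests the
    same underlying picture, possibly cropped, resized, or retouched.
    """
    suspect = imagehash.phash(Image.open(suspect_path))
    hits = []
    for path in known_paths:
        distance = suspect - imagehash.phash(Image.open(path))
        if distance <= max_distance:
            hits.append((path, distance))
    return sorted(hits, key=lambda hit: hit[1])
```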

Can AI Be Used to Identify False Information?

Images are one thing; ideas are quite another. While fact-checking can lay bare outright lies, the most effective propaganda mixes lies with truth, aiming not necessarily to convince the public that a particular claim is true but merely to plant the suspicion that there is more to the story than is commonly understood.

This doubt leads to fear and suspicion, which can influence public policy, elections, and relationships between societal groups.

Separating truth from half-truth is effectively a judgment call, and AI does not, and probably never will, have the capacity to definitively state what is a worthy idea and what is not. Unfortunate as this is, we are likely stuck in a world in which falsehoods are accepted as truth despite evidence or rationality.

Oppressed groups become enemies through this process, drivers of violence become protectors or even peacemakers, and history becomes skewed in favor of the victors.

Can We Regulate the Use of AI to Prevent Its Misuse for Purposes of Propaganda?

Rules governing the development and use of AI are sprouting up all over the globe, but there isn't likely to be any way of preventing the technology from being used to create and perpetuate propaganda.

And while many companies speak of ethical AI, recent developments suggest that ethics teams are among the first on the chopping block when layoffs arrive.

In fact, rules imposed by the more autocratic governments around the world might entrench this capacity to manipulate public opinion as a core element of AI.

The Bottom Line

Propaganda is not a new development in human society by any means. Scrolls and tablets of ancient societies tended to play up military victories and downplay defeats, exalt one person over another, and tout the benefits of rote obedience for the collective good.

Technologies like the printing press made it easier to disseminate particular points of view, helping to whip up campaigns to rid the land of traitors, witches, and other perceived disruptors of civil society. Mass media pushed this to an entirely new level, and today's digitally interconnected world pushes it further still.

But it all comes down to the same challenge: determining what is real amid a flood of conflicting opinions. And there is no technology that can do that.

Arthur Cole
Technology Writer

Arthur Cole is a freelance technology journalist who has been covering IT and enterprise developments for more than 20 years. He contributes to a wide variety of leading technology websites, including IT Business Edge, Enterprise Networking Planet, Point B and Beyond, and multiple vendor services.