OpenAI Disrupts Iranian Campaign Using ChatGPT to Influence US Election

Key Takeaways

  • OpenAI banned a cluster of accounts involved in an Iranian influence operation using ChatGPT.
  • Storm-2035 aimed to influence the upcoming US election through articles and social media posts.
  • OpenAI and Midjourney take further steps to prevent the spread of misinformation through AI-generated content.

OpenAI has banned a cluster of accounts involved in an Iranian influence operation using ChatGPT to generate content focused on the US presidential election.

In an August 16 blog post, OpenAI said it had recently identified and banned the accounts, which it linked to a covert Iranian influence operation.

According to the company, the operation used ChatGPT to generate content on a range of topics, including the US presidential election. Despite these efforts, OpenAI said there is no evidence the content reached a significant audience.

The operation, named Storm-2035, was flagged as part of OpenAI’s broader initiative to prevent the misuse of AI for influence operations, particularly in the context of the 2024 US elections.

The accounts reportedly generated long-form articles and shorter social media comments on topics including US politics and the conflict in Gaza. These were disseminated through social media platforms and websites posing as news outlets, targeting audiences in English and Spanish.

Despite the scale of the operation, OpenAI claims its impact was minimal, with little to no engagement from real users on social media platforms.

Fears of AI Misuse in US Election Campaign

The rise of generative AI technologies, such as deepfakes and AI-generated imagery, has significantly impacted the political landscape, particularly as major elections approach.

OpenAI, in collaboration with the US government and industry partners, continues to monitor and counteract such activities. The company emphasized the importance of transparency and security when using AI technology.

Recently, a deepfake video of US Vice President Kamala Harris, shared by Elon Musk, sparked widespread online controversy. This incident highlights the growing concern over the use of AI in spreading misinformation. It is just one example of how AI-generated content can be weaponized to manipulate public opinion and potentially influence political outcomes.

Moreover, Midjourney has blocked the generation of images featuring key political figures such as Joe Biden and Donald Trump ahead of the upcoming US elections. This move reflects the growing responsibility tech companies are taking to curb the potential misuse of AI in politics.