China and Russia have used OpenAI technology as part of campaigns to manipulate the public and the political landscape, the company revealed on Thursday.
OpenAI said it terminated accounts linked to “covert influence operations” that used generative AI in hopes of skewing opinions.
Two of the initiatives came from Russia, OpenAI said. Bad Grammar used the company’s models to write political commentary posted to Telegram, targeting audiences in the US and Eastern Europe, including Ukraine. Doppelganger, meanwhile, produced multi-language commentary on X and 9GAG, translated associated articles, and turned news stories into Facebook posts.
A Chinese network, Spamouflage, used OpenAI models to both research social network activity and generate posts for platforms like X, Medium, and Blogspot. The group also used the technology behind the scenes for managing data and websites.
Other influence campaigns included the Iran-based International Union of Virtual Media, which used AI to generate and translate articles for an associated website. The Israeli company Stoic was also said to have used OpenAI platforms to produce articles and comments.
Politics played a major role in these campaigns. OpenAI said the generated content touched on hot-button issues like Russia’s invasion of Ukraine, criticism of China’s government, the war in Gaza, and politics in both Europe and the US.
The influence campaigns don’t seem to have “meaningfully” impacted public discourse, according to OpenAI. While some campaigns were active on more than one platform, they reportedly didn’t infiltrate genuine communities.
The findings nonetheless underscore one of the major concerns this year: that China, Russia, and other countries might use generative AI from OpenAI and others to skew elections. They might spread false statements, or create fake pictures that misrepresent candidates. In the US, Congress and the Federal Communications Commission (FCC) have called for the disclosure of AI use.
Major tech companies like Google, Meta, and Microsoft have stepped up their efforts to catch these manipulators. OpenAI also maintained that it “proactively” intervened against abuses. However, the firm acknowledged that it couldn’t detect every campaign because it didn’t always know how AI-generated material reached the public.