In a move that has surprised some in Silicon Valley, Meta has revealed that it will tackle the growing grip of artificial intelligence on three of the company’s platforms: Instagram, Facebook, and Threads.
Rather than risk accusations of unnecessarily restricting freedom of speech by removing content, Meta will begin labeling a wider range of video, audio, and image content as ‘Made with AI’ when it “detects industry-standard AI image indicators or when people disclose that they’re uploading AI-generated content”.
Monika Bickert, Vice President of Content Policy at Meta, confirmed that the labels will be applied whenever the company detects those indicators or receives such a disclosure.
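For context, the “industry-standard AI image indicators” Meta refers to include provenance metadata, such as the IPTC DigitalSourceType value trainedAlgorithmicMedia that some image generators embed in the files they produce. The snippet below is a deliberately naive sketch of that idea, assuming the metadata is still present; Meta’s actual pipeline, which also looks for invisible watermarks and C2PA signatures, is not public.

```python
# Naive illustration: scan an image file's raw bytes for the IPTC
# DigitalSourceType marker ("trainedAlgorithmicMedia") that some AI
# generators embed in XMP metadata. This is NOT Meta's actual detector;
# stripping metadata or re-encoding the image defeats this check.
import sys

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC code for generative-AI media

def has_ai_metadata(path: str) -> bool:
    """Return True if the file carries the AI-generated metadata marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for image in sys.argv[1:]:
        verdict = "Made with AI?" if has_ai_metadata(image) else "no AI marker found"
        print(f"{image}: {verdict}")
```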
Monika Bickert’s comments were later reinforced by Meta’s President of Global Affairs, former UK Deputy Prime Minister Sir Nick Clegg.
Speaking at an AI event in Meta’s London offices last week, Sir Nick Clegg described Meta’s use of artificial intelligence as both a “sword and a shield” against misinformation posted on its platforms, before pointing to a decrease of between 50% and 60% in “bad content over the last two years.”
So what has sparked Meta’s change in stance on AI and manipulated media content, and, given the number of elections scheduled for 2024, how might this decision affect the spread of politically driven fake news?
Key Takeaways
- Meta will begin labeling AI-generated and manipulated content on its key Facebook, Instagram, and Threads platforms to counter the spread of misinformation.
- The policy change is a response to damning feedback from the Oversight Board on Meta’s handling of a controversial AI-doctored video of President Biden.
- The rise in politically driven AI-generated incidents highlights the need for Meta to update its 2020 Manipulated Media policy.
- The labeling of AI-manufactured content reflects the firm’s commitment to maintaining freedom of speech in a year when 2 billion people globally head to the voting booths.
Overview of Meta’s Changes & Implementation Timeline
At first glance, this proactive upgrade of its terms by flagging AI content seems highly logical. However, the policy shift is actually a direct response to a scathing review by the Oversight Board of Meta’s handling of the controversial AI-altered video featuring President Biden.
Posted on Meta’s Newsroom website, Monika Bickert conceded that the new AI labeling protocols are a result of “feedback” from the Oversight Board, whose criticism deemed the company’s existing Manipulated Media policy ‘lacking in persuasive justification’ and ‘incoherent and confusing to users.’
Constructive criticism at its finest.
In fairness, Meta’s Manipulated Media policy was written in 2020, before the rapid evolution and widespread sharing of deepfake videos and other realistic AI-generated content.
While Meta will still “remove manipulated media” that violates its Community Standards, you’ll start seeing the new labeling of AI-generated content on its platforms from May 2024.
As a result, the company will stop removing content based solely on its existing manipulated video policy from July, a timeline designed to give platform users time to adjust to the labeling notifications.
Political Propaganda Deepfake Videos
The timing of Meta’s upgrade to its manipulated media terms and conditions is welcome news for some, most notably fact-checkers, who monitor domestic and foreign entities that use AI-generated content to spread political and social misinformation and sway public opinion.
The threat is particularly acute this year, with approximately 2 billion people (nearly 25% of the global population) heading to the ballot boxes in more than 60 national elections worldwide. Or, as Statista is calling it, a Super Election Year.
Of course, political propaganda is not a new concept, but manufactured AI content and deepfake videos can be compellingly believable — particularly when they go viral.
Just last year, Venezuelan social media accounts and state broadcasting channels were used to spread AI-generated videos of fictitious international news channels.
Here, footage of English-speaking newsreaders, overlaid with Spanish subtitles, was used to counter nationwide anti-government protests demanding improved living conditions, claiming the media were exaggerating the country’s predicament.
Externally, the messages were also intended to convince the international community that Venezuela was indeed thriving.
How did they do it?
Well, it’s easier than you might think. The Venezuelan videos were made using avatars from AI firm Synthesia. With over sixty synthetic video templates featuring more than 160 lifelike characters, customers simply write a script.
From this, Synthesia’s platform produces — to the naked eye — flawless-looking, high-quality clips in minutes.
The firm has terms of service agreements to prevent the production of malicious content and will ban users who violate these. However, once a clip goes viral, it’s almost impossible to advise viewers of its AI origins after the fact.
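To illustrate how low the barrier is, the sketch below mimics the script-to-video workflow described above against a placeholder web API. The endpoint, payload fields, and avatar name are invented for this example and are not Synthesia’s actual API.

```python
# Hypothetical sketch of a script-to-video request. The URL, fields, and
# avatar ID are placeholders invented for illustration, not a real API.
import requests

payload = {
    "script": "Good evening. Tonight, the truth about our thriving economy...",
    "avatar": "news_anchor_01",   # one of the platform's synthetic presenters
    "subtitles": "es",            # e.g. Spanish subtitles over English speech
}

response = requests.post(
    "https://api.example-avatar-video.com/v1/videos",  # placeholder endpoint
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)

# A service like this typically returns a job ID to poll while the clip renders.
print("Render job submitted:", response.json().get("id"))
```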
And Synthesia isn’t alone in this field, as more and more AI video generation platforms are springing up, including Adobe Animate and, more recently, Mango AI by Mango Animate.
Other AI-Generated Mediums Designed to Confuse Voters
Politically fueled manipulated media isn’t confined to digitally manufactured deepfake videos.
For example, right before Slovakia’s elections last year, two key public figures allegedly had their voices cloned in a fake recording of them discussing how to rig the election. Designed to sway last-minute voters, the audio was posted on Facebook and escaped removal at the time, as Meta’s manipulated media policy only targeted fake videos.
In reality, no form of content is safe — if AI can manipulate it, it can be used for political gain.
Another prime example is the recent explosion of Donald Trump supporters circulating AI-generated fake images of the former president socializing with young African Americans, images that piggyback on the manipulated photo of Trump posing with Dr Martin Luther King Jr that went viral last summer.
It’s a divisive tactic that uses AI-designed content as social proof to exaggerate Trump’s popularity among African American voters. While skeptics will dismiss it as political propaganda, it’s become an additional tool in a broader disinformation trend ahead of the US presidential elections.
The Bottom Line
Ultimately, all media forms are susceptible to AI manipulation, and it’s worth noting that it is often external entities and political fanatics, rather than the politicians themselves, who use AI-generated content to deceive unsuspecting voters.
With the changes to Meta’s manipulated content guidelines, the labeling of deepfakes and other AI-generated or AI-influenced content may restrict their impact on platforms such as Facebook. However, this could spawn even more spurious websites where AI-manipulated media can still be spread.
Nowhere is this clearer than at NewsGuard’s AI Tracking Center, which, prior to Meta’s announcement, had already identified 794 ‘Unreliable AI-Generated News’ websites. That number is almost certain to multiply as a result of Meta’s new AI labeling policy.
So, while purveyors of politically driven, AI-generated misinformation will still be able to distribute their material, Meta’s contextualization and labeling of manipulated media posted on Facebook, Instagram, and Threads is very much a step in the right direction.