Meta’s Oversight Board has announced two separate investigations after sexually explicit AI-generated images appeared on its platforms.
The probes will focus on the inadequacies of Meta’s systems in detecting and responding to the imagery, which surfaced in incidents on Facebook in the US and on Instagram in India, respectively.
The images have since been removed from Meta’s platforms. Still, a report indicated the social media giant would not name the individuals targeted in the pictures “to avoid gender-based harassment.”
The Oversight Board is a semi-independent policy body that conducts its business at arm’s length from Meta. As a result of these investigations, the board could recommend new rules around this type of content, sometimes referred to as ‘deepfake porn’.
If a user of Meta’s platforms takes issue with a moderation decision, they must first appeal through the company’s own process before raising a case with the Oversight Board.
The board provided further background on the individual cases. The Indian incident involved an Instagram account that exclusively posts AI-generated images of prominent Indian women; most interactions with the account are said to come from users based in India.
Meta did not remove the picture after an initial report, and the ticket was closed automatically after 48 hours without any review taking place. The complainant appealed, but the appeal was again closed automatically without further input from Meta, repeating the same failure.
In the US, a public figure was depicted in a Facebook group dedicated to AI images.
The picture had been posted before, and when another user uploaded it again, Meta removed it, having added it to a Media Matching Service bank in the category ‘derogatory sexualized photoshop or drawings’.
Helle Thorning-Schmidt, co-chair of the Oversight Board, said in a statement that the body chose to feature incidents from two different countries to check for discrepancies in how Meta enforces its policies across regions.
“We know that Meta is quicker and more effective at moderating content in some markets and languages than others,” she said.
“By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way.”
The board is inviting contributions from the public over the next couple of weeks, before publishing its findings and any potential recommendations for Meta in the weeks that follow.