Meta is testing facial recognition to combat celeb-bait scams and speed up account recovery, three years after shutting down its previous facial recognition system over privacy concerns.
The company announced on October 21 that it is trialing the technology again to combat “celeb-bait” scams.
“We’re testing new ways to make it harder for scammers to bait people and easier for people to regain access to their accounts using facial recognition technology.” https://t.co/eLii9xqDFg
— Meta Newsroom (@MetaNewsroom) October 21, 2024
Scammers use images of celebrities in “celeb-bait” ads to lure users to scam websites that harvest personal details or money, and these ads are often hard to distinguish from legitimate ones. If Meta’s systems suspect an ad is a scam, they will use facial recognition to compare the faces in the ad against the public figure’s profile pictures. If the comparison confirms the ad is a scam, Meta will block it.
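For readers curious what such a check looks like in practice, here is a minimal sketch built on the open-source face_recognition library rather than anything Meta has published: it encodes the faces found in a suspected scam ad and in the public figure’s profile photos, then flags a match within a distance tolerance. The file names, the 0.6 tolerance, and the block_ad() step are hypothetical.

```python
# Illustrative sketch only, not Meta's implementation. Uses the open-source
# `face_recognition` library; paths, tolerance, and the ad pipeline are assumptions.
import face_recognition

def ad_matches_public_figure(ad_image_path, profile_image_paths, tolerance=0.6):
    """Return True if any face in the ad matches the public figure's profile photos."""
    # Encode the reference faces from the public figure's profile pictures.
    known_encodings = []
    for path in profile_image_paths:
        image = face_recognition.load_image_file(path)
        known_encodings.extend(face_recognition.face_encodings(image))

    # Encode every face detected in the suspected scam ad.
    ad_image = face_recognition.load_image_file(ad_image_path)
    ad_encodings = face_recognition.face_encodings(ad_image)

    # Flag a match if any ad face falls within the distance tolerance of any reference face.
    for encoding in ad_encodings:
        if any(face_recognition.compare_faces(known_encodings, encoding, tolerance=tolerance)):
            return True
    return False

# Hypothetical usage: block the ad if a suspected scam uses the celebrity's face.
# if ad_matches_public_figure("suspect_ad.jpg", ["profile_1.jpg", "profile_2.jpg"]):
#     block_ad()
```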
Meta is also exploring facial recognition to identify accounts impersonating celebrities by matching profile pictures on suspicious accounts with those of public figures on Facebook and Instagram. Currently, the company relies on detection systems and user reports to spot these impersonators.
According to Monika Bickert, Meta’s vice president of content policy, the tool went through Meta’s “robust privacy and risk review process” and was discussed with regulators and privacy experts before testing began.
The social media giant announced it will enroll about 50,000 public figures in a trial of facial recognition technology on an opt-out basis starting in December. This trial will launch globally, except for the UK, EU, South Korea, and the US states of Texas and Illinois, where regulatory approval is pending.
Additionally, Meta is testing facial recognition for identity verification. If users lose access to their Facebook or Instagram accounts, they can upload a video selfie, which Meta compares with their profile pictures. The company believes this method will be harder for hackers to exploit than traditional document-based verification. The video-selfie recovery option will roll out to more users soon.
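The recovery check can be pictured as a similar comparison between an embedding of a frame from the uploaded video selfie and embeddings of the account’s existing profile pictures. The sketch below is purely illustrative: the embeddings are assumed to come from some face-embedding model, the 0.8 cosine-similarity threshold is arbitrary, and the clean-up step only mirrors the data-deletion practice Meta describes for ad comparisons.

```python
# Hypothetical sketch of selfie-based account recovery: compare an embedding of a
# selfie frame with embeddings of the account's profile pictures, then discard the
# selfie data. The embeddings and threshold are assumptions, not Meta's pipeline.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_selfie(selfie_embedding: np.ndarray,
                  profile_embeddings: list[np.ndarray],
                  threshold: float = 0.8) -> bool:
    """Return True if the selfie embedding matches any profile-photo embedding."""
    try:
        return any(cosine_similarity(selfie_embedding, ref) >= threshold
                   for ref in profile_embeddings)
    finally:
        # Discard the selfie data once the comparison is done (assumed to mirror
        # the data-minimization approach Meta describes for ad comparisons).
        selfie_embedding.fill(0.0)
```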
Meta’s Facial Recognition Trials Amid Legal Challenges
When Meta shut down its facial recognition system in 2021, deleting facial recognition data for more than a billion users, it attributed the decision to “growing societal concerns.” This August, the company agreed to pay Texas $1.4 billion to settle a lawsuit accusing it of illegally collecting biometric data.
At the same time, Meta faces pressure from politicians and regulators for failing to stop celeb-bait scams that use AI-generated images of famous people to lure users into investing in non-existent schemes. Mining magnate Andrew Forrest has sued the company for not acting against scams that use his image, and the Australian Competition and Consumer Commission has filed a lawsuit over similar scam ads.
As a result, the company must now balance deploying a potentially invasive technology to fight escalating scams against the risk of fresh complaints about how it handles user data, a long-standing issue for social media platforms. Meta says that during the new trial it will immediately delete any facial data generated by comparisons with suspected scam ads, regardless of whether a scam is detected, and that this data will not be used for any other purpose.
The tests also appear to fit a broader public-relations strategy in Europe aimed at pressuring lawmakers to weaken privacy protections: Meta now frames unrestricted data processing for AI as essential to the fight against scammers.