How to Spot Deepfakes: Trends, Regulations, Best Practices & Tips


Deepfakes are disrupting our shared sense of what is real, blurring the line between fact and fiction, with politicians and celebrities becoming easy targets for manipulated content.

From deepfake audio of London Mayor Sadiq Khan appearing to make inflammatory remarks just before the nation commemorated Armistice Day, to Taylor Swift becoming the victim of NSFW AI-generated images, the power to manipulate media is rapidly being democratized, within reach of anyone, anywhere.

And as OpenAI releases Sora, undoubtedly a technical marvel in creating advanced video simulations of the real world, distinguishing between real and fake media is set to become one of the hard problems of 2024.

Given the high volume of deepfakes circulating, it’s clear that vendors, regulators, and users need to get better at identifying and limiting their spread, and that consumers need as much help as possible in separating real content from fake.

What is the artificial intelligence (AI) industry doing to help, and how can we help ourselves? Read on to find out.

Key Takeaways

  • Recent deepfake incidents, such as the Sadiq Khan audio and Taylor Swift images, highlight the growing threat of manipulated media.
  • OpenAI’s release of Sora underscores the challenge of distinguishing between real and fake content.
  • Vendors like Microsoft have implemented restrictions on creating deepfakes of celebrities, but users can still find ways to circumvent these policies.
  • Regulatory efforts in the U.S. and EU are emerging to address deepfake risks, including legislation targeting deepfake pornography and election misinformation.
  • Users can look for signs of deepfakes, such as unusual features or inconsistencies, and utilize deepfake detection tools, though these methods may not always be foolproof.
  • Ultimately, users must remain skeptical of digital content, as deepfakes are likely to persist until mitigation strategies are implemented by vendors and regulators.

The Deepfake Crisis & How AI Vendors Are Responding to It

Shortly after the fallout surrounding the Taylor Swift deepfakes, Microsoft banned the creation of images of celebrities altogether. Prior to this, the company had restricted the creation of images of public figures and those depicting nudity.


Many other vendors, such as OpenAI and Google, maintain similar content moderation policies that ban tools like DALL-E 3 and ImageFX from generating images of public figures.

But with some creative prompts and jailbreaks, users can trick AI image and voice generation tools into generating content that violates their content moderation policies.

For example, in the Telegram channels where Swift deepfakes were circulating, some users deliberately misspelled the names of celebrities and used other words to imply nudity to trick image generators into creating fake images.

The reality is that every time AI vendors create new moderation policies, bad actors will attempt to find a workaround. While vendors like OpenAI and Google are attempting to reinforce these policies by digitally watermarking AI-generated images, there is still a long way to go.
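
One practical consequence for readers: some provenance signals are already embedded in image files and can be checked directly. The sketch below is a minimal example, assuming the Pillow library and a hypothetical file name, that prints embedded metadata such as a C2PA “Content Credentials” segment or a generator’s name in the EXIF Software tag. A clean result proves nothing, since pixel-level watermarks like Google’s SynthID are not readable this way.

```python
# A minimal sketch, assuming the Pillow library (pip install Pillow) and a
# hypothetical file name. It only surfaces readable provenance metadata;
# pixel-level watermarks such as Google's SynthID cannot be read this way.
from PIL import Image

def inspect_provenance(path: str) -> None:
    img = Image.open(path)
    # Format-level metadata (PNG text chunks, JPEG segments) lives in img.info.
    for key, value in img.info.items():
        print(f"{key}: {str(value)[:60]}")
    # C2PA "Content Credentials" travel in a JPEG APP11 (JUMBF) segment.
    for marker, data in getattr(img, "applist", []):
        if marker == "APP11":
            print(f"APP11 segment found ({len(data)} bytes): possible C2PA manifest")
    # Some generators stamp their name in the EXIF "Software" tag (0x0131).
    software = img.getexif().get(0x0131)
    if software:
        print("EXIF Software tag:", software)

inspect_provenance("photo.jpg")  # hypothetical path
```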

Beena Ammanath, trustworthy AI and technology trust ethics leader at Deloitte, told Techopedia:

“As AI grows more sophisticated, it becomes easier to create and spread this malicious content — potentially doing significant reputational damage.

“Detecting and limiting the spread of deepfakes and other false content is essential for keeping misinformation at bay, instilling trust in AI systems, and preventing public harm.”

A Look at the Current Regulatory Landscape

As of today, the legal and regulatory landscape surrounding the creation of deepfakes is in its infancy, with regulators in the U.S. and across the EU looking to curb the development of deepfake pornography and election content.

At least 14 states across the U.S. have introduced legislation to combat the risk of deepfakes spreading misinformation around elections.

These range from disclosure requirements, like bills in Alaska, Florida, and Colorado that would require labels on AI-created media issued to influence an election, to outright bans, like Nebraska’s, which would prevent the dissemination of deepfakes before an election.

In the European Union, the European Council and Parliament have agreed on a proposal to criminalize the non-consensual sharing of intimate images, including AI-generated deepfakes. Likewise, the UK plans to criminalize the sharing of deepfake intimate images under its online safety legislation.

While these steps are small, they show that regulators are taking a closer look at the risks around AI-generated content, though for now, it’s ultimately up to users to recognize deepfakes when they see them.

How to Detect Deepfakes

One way that users can protect themselves against deepfakes is to be aware of some of the hallmarks of deepfake images, videos, and audio.

Some of the telltale signs of a deepfake image include the following (a simple automated check is sketched after the list):

  • Unusual depictions of hands
  • Rough edges around the face
  • Inconsistent skin texture
  • Blurred sections
  • Unusual lighting or distortion
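
One way to hunt for artifacts like blurred sections and inconsistent textures programmatically is error level analysis (ELA), a long-standing image-forensics heuristic rather than a deepfake-specific test. The sketch below, assuming the Pillow library and a hypothetical suspect.jpg, amplifies compression differences so that edited regions stand out.

```python
# A minimal error level analysis (ELA) sketch, assuming the Pillow library
# (pip install Pillow) and a hypothetical "suspect.jpg". Regions that were
# edited or pasted in often recompress differently from the rest of a JPEG,
# so they stand out in the amplified difference image. Heuristic only.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # re-save at a fixed quality
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Differences are usually faint, so stretch them to the visible range.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * (255 // max_diff)))

error_level_analysis("suspect.jpg").save("suspect_ela.png")
```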

Some of the telltale signs of a deepfake video include:

  • Unnatural eye and hand/body movements
  • Lip movements that are out of sync with the audio
  • Unusual lighting/shadows

In one example created with Sora, you can see unnatural motion in the legs of the woman in red as she walks; one rough programmatic check for such motion artifacts is sketched below.
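
The sketch below, assuming OpenCV and a hypothetical suspect.mp4, computes dense optical flow between consecutive frames and flags statistical outliers in overall motion. It is a crude heuristic for jerky or implausible movement, not a verdict on authenticity.

```python
# A rough heuristic sketch, assuming OpenCV (pip install opencv-python) and a
# hypothetical "suspect.mp4". It measures frame-to-frame motion with dense
# optical flow and flags statistical outliers; a spike can indicate jerky or
# physically implausible movement, but it is not proof of a deepfake.
import cv2
import numpy as np

def motion_spikes(path: str, z_thresh: float = 3.0) -> list[int]:
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        raise ValueError(f"could not read video: {path}")
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(np.linalg.norm(flow, axis=2).mean())
        prev_gray = gray
    cap.release()
    mags = np.array(magnitudes)
    z = (mags - mags.mean()) / (mags.std() + 1e-9)
    return [i for i, score in enumerate(z) if abs(score) > z_thresh]

print("frames with unusual motion:", motion_spikes("suspect.mp4"))
```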

That being said, detecting deepfakes is often easier said than done.

Ammanath said: “Although humans can sometimes detect deepfakes, the task is getting harder as the technologies used to generate fake content become more capable.”

“Advanced AI/ML algorithms—particularly neural networks—can be trained to detect deepfakes and other fake content in real time, thereby limiting their spread. Neural networks that have been trained to detect deepfakes can recognize telltale patterns and subtle inconsistencies within doctored media files.”

To highlight how AI can be used, Ammanath explained that AI-based detection algorithms can pick up subtle fading or grayscale pixels around a person’s face in altered photographs.

For this reason, using deepfake detectors that feature machine learning algorithms and neural networks trained on large datasets of legitimate and manipulated videos and images is a more reliable way to identify fake content.
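
To make the approach Ammanath describes concrete, here is a minimal sketch of a convolutional network that classifies face crops as real or fake. It assumes PyTorch, and its architecture, input size, and dummy data are illustrative choices, not a production detector.

```python
# An illustrative sketch, assuming PyTorch (pip install torch). The layer
# sizes, 128x128 input, and random dummy data are assumptions for brevity;
# real detectors are trained on large labeled datasets of genuine and
# manipulated faces (e.g., FaceForensics++).
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # one logit: sigmoid gives P(fake)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = DeepfakeDetector()
faces = torch.randn(8, 3, 128, 128)           # dummy batch of face crops
labels = torch.randint(0, 2, (8, 1)).float()  # 0 = real, 1 = fake
loss = nn.BCEWithLogitsLoss()(model(faces), labels)
loss.backward()  # gradients for one illustrative training step
print("loss:", loss.item())
```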

The Heightened Stakes in Election Year

With the U.S. election taking place throughout 2024, consumers and enterprises need to be prepared for an uptick in AI-generated content as dishonest actors attempt to exploit this technology to advance their political arguments or positions.

One of the simplest ways this can happen is through deepfakes of political figures in which they appear to take or endorse political positions, in an attempt to influence how voters cast their ballots.

We’ve already seen this happen with the Biden robocall, where a New Orleans magician claimed a consultant for Dean Phillips’ campaign paid him to create fake audio of President Biden discouraging voters from voting in the New Hampshire primary on January 23, 2024.

The key to confronting these threats is to brush up on the telltale signs of deepfakes, to use deepfake detection tools where possible, and to double-check the source of suspicious content. Doing so will help mitigate the potential fallout of AI misuse during election season.

The Bottom Line

Deepfakes are here to stay, and until AI vendors and regulators find ways to reduce their spread, it’s up to users to learn how to detect them. There is no silver bullet: the authenticity of digital content can no longer be taken on trust.

While vigilance won’t undo the distress or damage done to the victims of deepfakes, it will help prevent the spread of misinformation.

Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, he wrote for VentureBeat, Forbes Advisor, and other notable technology platforms, covering the latest trends and innovations in technology.