Deepfake


What is a Deepfake?

A deepfake refers to computer-generated videos, audio recordings, and images that are used to portray individuals saying or doing things they never actually did or said. Essentially, deepfake technology uses artificial intelligence (AI) and machine learning (ML) to generate synthetic digital content that looks and sounds as if it is authentic.


While deepfakes can be created for benign purposes, such as in filmmaking or satire, they have gotten a negative connotation because the most publicized applications of deepfake technology have been fraud-related. They have gained attention for their ability to mislead viewers into believing that falsely depicted events or statements are real.

The technology’s ability to be misused has raised ethical, legal, and social issues, particularly regarding misinformation, privacy violations, and the manipulation of public opinion.

Techopedia Explains the Deepfake Meaning


The word “deepfake” is a portmanteau of “deep learning” and “fake.” In this context, deep learning is a type of machine learning, and “fake” refers to fabricated or synthetic data.

It’s important to remember that while all deepfakes use synthetic data, not all synthetic digital content qualifies as deepfakes. The key difference lies in the intention behind the creation of the content and its potential for deception.

How are Deepfakes Created?

Deepfake models can be created with generative adversarial networks (GANs), autoencoders, or variational autoencoders (VAEs). Once a model is sufficiently trained, it can be used to create deepfakes by feeding it new input data or prompts.

Generative Adversarial Networks (GANs)

This process involves two main components: a generator and a discriminator. The generator creates image, audio, or video content that mimics real content; the discriminator is then shown both real samples and the generator’s output.

The discriminator classifies each sample as real or fake, and the generator uses that feedback to make its next output more realistic. The process continues until the discriminator can no longer reliably distinguish real content from generated content.
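The push-and-pull between the two components can be sketched in a few lines of Python. This is a toy one-dimensional illustration using numpy, not a media-generating model; every name, size, and learning rate below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: "real" data are samples from a normal distribution N(3, 0.5).
def real_samples(n):
    return rng.normal(3.0, 0.5, size=(n, 1))

# Generator: maps random noise z to a sample via a learnable affine map.
gen_w, gen_b = 1.0, 0.0

def generate(n):
    z = rng.normal(size=(n, 1))
    return gen_w * z + gen_b

# Discriminator: logistic regression scoring how "real" a sample looks (0..1).
disc_w, disc_b = 0.1, 0.0

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(disc_w * x + disc_b)))

lr = 0.05
for step in range(200):
    real = real_samples(32)
    fake = generate(32)

    # Discriminator update: push scores for real toward 1, fake toward 0.
    grad_real = discriminate(real) - 1.0
    grad_fake = discriminate(fake)
    disc_w -= lr * float(np.mean(grad_real * real + grad_fake * fake))
    disc_b -= lr * float(np.mean(grad_real + grad_fake))

    # Generator update: nudge its output so the discriminator scores it higher.
    z = rng.normal(size=(32, 1))
    fake = gen_w * z + gen_b
    g = (discriminate(fake) - 1.0) * disc_w  # gradient of loss w.r.t. the fake
    gen_w -= lr * float(np.mean(g * z))
    gen_b -= lr * float(np.mean(g))
```

In a real deepfake GAN, both components are deep neural networks trained on images or audio rather than one-dimensional numbers, but the alternating update loop works the same way.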

Autoencoders

This process is often used for face swapping. Essentially, Person A’s encoded facial features are decoded using a decoder that was trained on Person B’s data. This allows the deepfake AI generator to superimpose the facial expressions of Person A onto the face of Person B.

Variational Autoencoders

This process can be used to generate realistic faces or facial expressions that are not direct copies of those in the training data. For example, by training a VAE on images of Person A, you can generate new expressions or movements that Person A never actually made.
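A hedged sketch of that sampling step follows. The decoder weights and latent statistics below are placeholders rather than a trained model, but they show how a VAE draws new variations around an encoded image instead of copying it:

```python
import numpy as np

rng = np.random.default_rng(2)

LATENT, FEATURES = 4, 16

# A "trained" VAE decoder (random weights standing in for a real model).
dec = rng.normal(size=(FEATURES, LATENT)) * 0.1

# For one image of Person A, the encoder outputs a mean and a spread per
# latent dimension -- a distribution, not a single point. That distribution
# is what makes a VAE generative. (Values here are invented.)
mu = np.array([0.5, -0.2, 0.1, 0.8])
log_var = np.array([-1.0, -1.2, -0.8, -1.1])

def sample_new_face():
    # Reparameterization trick: sample around mu, then decode the result
    # into a "new" expression Person A never actually made.
    eps = rng.normal(size=LATENT)
    z = mu + np.exp(0.5 * log_var) * eps
    return dec @ z

faces = [sample_new_face() for _ in range(3)]
```

Each call draws a different latent point near the encoded face, so every decoded output is a slightly different, plausible variation.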

What is Deepfake-as-a-Service?

Ten years ago, if someone wanted to create convincing deepfake content, they needed to have a strong background in mathematics, data science, and computer programming.

Today, people can use free or low-cost apps and cloud services that can create convincing deepfakes from just a few reference images or videos. For example, Tencent has a commercial deepfake service that can create high-definition, realistic deepfake humans using just three minutes of live-action video and 100 spoken sentences as source material.

Unfortunately, this has made it easier than ever for threat actors to create deepfakes too. In the last five years, there have been notable instances where deepfakes have been used to spread misinformation, commit financial fraud, create non-consensual adult content, and unduly influence political campaigns.

The quality of deepfakes created with inexpensive software is inconsistent, however. Deepfakes with noticeable flaws or inconsistencies are often referred to as cheapfakes or shallowfakes.

Use Cases of Deepfakes

Deepfakes have been used in various contexts, ranging from benign and entertaining to controversial and malicious. Here are some notable examples (both good and bad).

Common categories include entertainment and media, political and social commentary, misinformation and propaganda, art and culture, and education and training. For example:
  • Artists have utilized deepfake technology to explore themes of identity, privacy, and the nature of reality. These projects often aim to provoke thought and discussion on the impact of digital manipulation and AI on society.
  • South Korean company DeepBrain AI is offering a deepfake service that takes images, audio, and video of a deceased person and creates an avatar that allows the bereaved to chat with them as if they were still alive.

Who is Making Deepfakes?

Some of the initial work on deepfake technology was conducted in academic and research settings to explore the potential for using AI in film. Practical applications of early deepfake technology included matching an actor’s lip movements to audio recorded in another language, de-aging actors, or replacing one actor’s face with another’s in a specific scene without having to film the scene again.

Today, a significant number of deepfakes are created by hobbyists and technology enthusiasts. The accessibility of deepfake technology has increased with the widespread availability of user-friendly software and cloud services, and people with varying levels of technical skill can now create realistic deepfakes.

How to Spot a Deepfake

Although it’s getting harder to identify deepfake images, audio, and video as the technology improves, there are still some telltale signs and techniques you can use to help identify deepfakes.

A chart showing how to spot deepfake content

Unnatural Facial Expressions or Movements

Look for facial features that appear to be stiff, exaggerated, or out of sync with the speech or emotions being portrayed.

Inconsistent Lighting or Shadows

Analyze the lighting and shadows in the video. Look for inconsistencies in how light falls on a face or background.

Poor Lip Syncing

Check whether the lip movements match the speech. Lip-sync inaccuracies are common in content created with low-code/no-code (LCNC) cheapfake apps and services.

Unnatural Blinking and Eye Movements

Less sophisticated deepfakes (cheapfakes) often struggle to accurately reproduce natural blinking and eye movements.

Unusual Skin Texture or Coloration

Look for irregularities in skin texture. Telltale signs of a deepfake include skin that is too smooth and skin that lacks pores or moles.

Artifacting and Distortion

Digital video artifacts, such as blurring, flickering, or distortion, especially around the edges of the face or where the face meets the neck and hair, can indicate manipulation.
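One simple automated cue for this kind of artifacting is frequency analysis: manipulated or heavily processed regions often carry a different spectral signature than natural imagery. The numpy sketch below compares the high-frequency energy of a smooth synthetic image against a noisy one; it is only a toy illustration of the idea, not a production detector:

```python
import numpy as np

rng = np.random.default_rng(3)

def high_freq_ratio(img):
    """Fraction of spectral energy outside a low-frequency center band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    ch, cw = h // 2, w // 2
    band = h // 8  # half-width of the "low frequency" square around DC
    low = spectrum[ch - band:ch + band, cw - band:cw + band].sum()
    return 1.0 - low / spectrum.sum()

# A smooth gradient (stand-in for natural shading) versus the same image
# with added noise (stand-in for processing artifacts).
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = smooth + 0.3 * rng.normal(size=(64, 64))
```

The noisy image scores a markedly higher high-frequency ratio than the smooth one; real detection services combine many such statistical cues with learned models rather than relying on any single measurement.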

Inconsistent Audio Quality

Listen for discrepancies in the audio quality, such as the voice changing mid-sentence or the inclusion of background noise that doesn’t match the visual setting.

Contextual Clues

Sometimes the content of the video itself can be a giveaway. If the person is depicted saying or doing something highly out of character or unbelievable, cross-reference the source’s authenticity.

Deepfake Detection Tools

Today, there are a number of technical tools and services that people can use to detect inconsistencies and artifacts introduced during the deepfake creation process.

  • Sentinel: According to its website, Sentinel works with governments, media, and defense agencies to help protect democracies from disinformation campaigns, synthetic media, and information operations.
  • Deepfake Detector: The company’s AI Voice Detector can help users detect if an audio or video clip is a deepfake.
  • Sensity: Sensity’s proprietary application programming interface (API) can accurately identify AI-altered visuals 98.8% of the time.
  • Intel FakeCatcher: According to the Intel website, FakeCatcher analyzes blood flow in video pixels to determine a video’s authenticity.
  • Resemble AI: Resemble AI services include a cutting-edge AI Voice Generator and robust deepfake audio detection.

It’s important to remember that while deepfake detection tools and services are continually improving, the technology behind deepfakes is also advancing. This is why deepfake detection is often described as a cat-and-mouse game.

The Impact of Deepfakes on Society

While deepfakes can serve as powerful tools for entertainment, education, and social commentary, the technology’s potential for misuse in phishing scams, identity theft, and financial fraud has made it a significant security concern.

Deepfake technology has the power to erode trust in media, facilitate misinformation campaigns, fuel political polarization, and pose a serious threat to individuals’ reputations and emotional well-being.

In society, the creation and dissemination of deepfakes are raising questions about consent and privacy, as well as questions about the potential for technology to impact humanity in a harmful manner.

Examples of Deepfake Misuse

Here are some real-world examples of deepfake technology misuse:

Ethical Implications of Deepfakes

As deepfake technology becomes more sophisticated and accessible, it is raising questions about how to verify the authenticity of source content.

Other questions concerning the ethical use of deepfakes include:

Legal Implications of Deepfake Technology

Deepfake technology is creating challenges in courts of law where video and audio evidence were once considered reliable.

In many countries, existing laws and regulations are not addressing the nuances of deepfake technology adequately, and this has led to calls for new regulations and legal frameworks.

Several countries and jurisdictions have begun to introduce laws and regulations specifically designed to hold deepfake creators and distributors accountable for harmful impacts.

  • In the United States, at least ten states have passed laws that criminalize the creation and distribution of deepfake pornography without consent and deepfake videos that aim to interfere with elections.
  • China has strict laws that specifically prohibit the production of deepfakes without user consent and require content generated with artificial intelligence to be clearly labeled.
  • The EU’s Artificial Intelligence Act includes provisions that require anyone who creates or disseminates a deepfake to disclose the content’s artificial origin and provide information about how the content was created.

The Bottom Line

Deepfake technology itself is not dangerous. It can be used to engage learners, lower production costs in film, and streamline content adaptation for audiences who speak different languages.

The ease with which threat actors can use the technology to create convincing fake video and audio clips, however, is helping to undermine people’s trust in digital media.

FAQs

What is a deepfake in simple terms?

Is a deepfake illegal?

What is a deepfake example?

Can deepfake video and audio be detected?





Margaret Rouse
Technology Specialist

Margaret is an award-winning writer and educator known for her ability to explain complex technical topics to a non-technical business audience. Over the past twenty years, her IT definitions have been published by Que in an encyclopedia of technology terms and cited in articles in the New York Times, Time Magazine, USA Today, ZDNet, PC Magazine, and Discovery Magazine. She joined Techopedia in 2011. Margaret’s idea of a fun day is helping IT and business professionals learn to speak each other’s highly specialized languages.