5 Deepfake Scams That Threaten Companies — and Ways to Mitigate Them


Deepfake technology has raised significant concerns due to its potential for misuse.

Until recently, enterprises had little to worry about from deepfakes, but with attackers leveraging artificial intelligence, particularly generative AI, their quality keeps improving and their numbers are getting out of control.

Research shows that about 96,000 deepfake videos were circulating online in 2023, a 550% increase from 14,678 in 2019. Beyond their proliferation, a report by Integrity360 indicates that 68% of security professionals surveyed on Twitter are concerned about cybercriminals using deepfake scams to target their organizations.

Given this growing concern, research group Forrester published a study alerting enterprises to five major deepfake schemes to look out for, including stock-price manipulation, fraud, and damage to reputation and brand.

Five Deepfake Scams That Threaten Organizations

5. Stock Price Manipulation

Stock price manipulation is nothing new. However, with deepfakes, it becomes easier to deploy manipulation tactics that are easy to fall for.

According to Forrester’s report, bad actors can create a deepfake video announcing the departure of a well-regarded senior executive from a publicly traded company and disseminate it across the internet. The resulting panic and anxiety can send the company’s stock price on a bearish run.


Jeff Pollard, vice president and principal analyst at Forrester, warns that “while this seems minor if timed correctly, it could impact employee compensation and the company’s financing efforts”.

An example of this manipulation occurred when a deepfake image depicting an alleged explosion near the Pentagon went viral and was retweeted by outlets like Russia Today (RT), leading to fluctuations in the U.S. stock market.

Given the growing concerns around using deepfake scams to manipulate the stock market, some senators in the United States are already pushing for new legislation to fine businesses that use deepfakes or other artificial intelligence tools to manipulate markets or to engage in securities fraud.

4. Fraud

Businesses commonly use biometric authentication like face or voice recognition for employee verification. Pollard warns enterprises to watch out as deepfake technology can clone faces and voices, posing a risk in employee verification processes.

Speaking with Techopedia, Arti Raman, founder and CEO of Portal26, said deepfakes pose a grave risk, especially as we are all becoming comfortable with using biometrics for logins.

She said:

“As generative AI has taken off, the quality and speed of deepfakes have increased dramatically. This means that more individuals are targeted and the likelihood of an employee or customer falling for a deepfake is very high.”

Pollard also highlights that fraud could come in the form of fraudulent transactions, such as an attacker impersonating a C-level executive to initiate unauthorized payments or wire transfers, or to alter financial details.

Pollard may well be right, as Reuters recently reported a deepfake-driven financial scam where an attacker convinced someone to transfer money under the guise of a friend — one of the most prevalent deepfake types due to its quick path to monetization.

3. Reputation and Brand

Deepfakes significantly threaten a company’s reputation or brand name, as attackers can generate offensive content that falsely appears to originate from the brand. For instance, deepfakes can be used in a video manipulation scheme to make a senior-level executive appear to use offensive language to insult customers, blame business partners and employees, or even spread fake news about the company’s products and more.

A recent video on the internet showing the chairman of a renowned energy firm criticizing climate change measures a week before the start of the United Nations Climate Change Conference (COP28) is a case in point. The video went viral and was later debunked to be entirely fabricated by AI.

This type of incident could result in a critical PR crisis that can ripple through an enterprise like a shockwave, shattering trust, tarnishing reputation, and leaving long-lasting and irreparable consequences in its wake.

2. Employee Experience and HR

Sextortion scams and the creation of nonconsensual pornographic materials are some of the main ways bad actors use deepfakes. Research put together by Home Security Heroes found that deepfake pornography made up 98% of all deepfake videos in 2023.


Based on Forrester’s report, this may soon become a problem for enterprises, as disgruntled employees or competitors can create this kind of deepfake content using the likeness of employees and circulate it.

“The motivation behind these scams could be out of revenge or any other malicious intentions,” Pollard noted.

1. Amplification

Deepfake technology can be used not just to create fake content but also to spread other deepfake content.

According to Forrester, this can be likened to “bots spreading content, but instead of giving those bots usernames and post histories, we give them faces and emotions.”

These deepfakes could also be employed for reacting to and disseminating additional deepfake content, conveying opinions, sharing emotions, and more. This poses a threat to the company’s reputation and magnifies the dissemination of already fabricated news and content to a broader audience.

What Can Organizations Do to Protect Themselves?

Forrester recommended that organizations invest more in academic and corporate research on deepfake detection, and evaluate commercial, open-source, and non-technical solutions for AI detection.

Academic and Corporate Research

According to Pollard, plenty of academic and corporate research is dedicated to deepfakes. However, he cautioned that detection has its limits: “…there is no foolproof strategy here, and almost no sure-shot way to prevent deepfakes,” he noted.

He stressed that because deepfakes cannot be shut out of our space entirely, enterprises need to stay on their toes and constantly monitor what is happening with their brand.

Commercial and Open-Source Solutions

Many commercial and open-source tools are available for detecting deepfakes and deepfake scams today. Some prioritize fraud prevention, while others focus on brand and reputation.

Based on the Forrester report, organizations can employ tools like Blackbird.AI, Sentinel, Reality Defender, and many more commercial solutions to detect deepfakes.

Non-Technical Solutions

Enterprises can incorporate various non-technical solutions into their current security processes.

Practices like rotating code words or passphrases for phone or text-based account transfers are suitable non-technical measures. This approach adds an extra layer of authentication, making it harder for malicious actors to manipulate communication channels in your organization.

Pollard notes that changing these codes regularly enhances security and helps organizations thwart potential threats related to deepfakes and social engineering.
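To make the rotating code-word idea concrete, here is a minimal sketch of how two parties could derive the same, automatically changing code word from a shared secret, in the spirit of TOTP. All names and the word list are hypothetical, and the interval and derivation scheme are assumptions, not a prescribed standard:

```python
import hashlib
import hmac
import struct
import time

def rotating_code_word(shared_secret: bytes, word_list: list,
                       interval_seconds: int = 86400, now: float = None) -> str:
    """Derive the current code word from a shared secret, TOTP-style.

    Any party holding the secret computes the same word for the current
    time interval (here: daily), so the word rotates automatically
    without anyone needing to distribute a new one.
    """
    if now is None:
        now = time.time()
    counter = int(now // interval_seconds)       # which interval we are in
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    index = int.from_bytes(digest[:4], "big") % len(word_list)
    return word_list[index]

# Hypothetical example: finance and IT agree on a secret and word list
# out of band, then each computes today's word independently.
words = ["bluebird", "granite", "harbor", "copper", "willow", "sequoia"]
secret = b"out-of-band-shared-secret"
print(rotating_code_word(secret, words))
```

A caller who asks for a wire transfer would be challenged for today’s word; an attacker armed only with a cloned voice would not know it.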

In addition to the above, Raman recommended using a combination of facial/voice recognition and older password/passphrase technology. Since faces and voices are popular targets for deepfakes, this will reduce the associated risks.
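Raman’s layered approach can be sketched as requiring both factors to pass before granting access. This is an illustrative sketch, not Portal26’s implementation: the `biometric_score` is assumed to come from whatever face/voice recognition engine is in use, and the threshold is an arbitrary placeholder:

```python
import hashlib
import hmac
import secrets

def enroll_passphrase(passphrase: str) -> tuple:
    """Store a salted PBKDF2 hash of the passphrase at enrollment."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify_login(biometric_score: float, passphrase: str,
                 salt: bytes, stored_digest: bytes,
                 biometric_threshold: float = 0.9) -> bool:
    """Require BOTH factors: a face/voice match score above the threshold
    AND a correct passphrase. A cloned face or voice alone is not enough."""
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    passphrase_ok = hmac.compare_digest(candidate, stored_digest)
    return biometric_score >= biometric_threshold and passphrase_ok

# Usage: a deepfaked voice may score high on biometrics, but without the
# passphrase the login still fails.
salt, digest = enroll_passphrase("correct horse battery staple")
print(verify_login(0.97, "correct horse battery staple", salt, digest))  # True
print(verify_login(0.97, "wrong guess", salt, digest))                   # False
```

The key design point is the logical AND: degrading either factor alone should never be sufficient for access.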

The Bottom Line

As with all things related to cybersecurity, vigilance is the watchword and still the first line of defense against deepfake scams.

Even though there has been progress in developing new technologies to detect deepfakes, the fraudsters aren’t idle either; they are constantly forging ahead to circumvent these detection measures.

So, it behooves organizations to stay informed about the latest developments in deepfake technology and how fraudsters could use them against their enterprises. Defenses may range from simply training employees to spot signs of deepfakes to deploying robust authentication tools and procedures.



Franklin Okeke
Technology Journalist

Franklin Okeke is an author and tech journalist with over seven years of IT experience. Coming from a software development background, his writing spans cybersecurity, AI, cloud computing, IoT, and software development. In addition to pursuing a Master's degree in Cybersecurity & Human Factors from Bournemouth University, Franklin has two published books and four academic papers to his name. His writing has been featured in tech publications such as TechRepublic, The Register, Computing, TechInformed, Moonlock and other top technology publications. When he is not reading or writing, Franklin trains at a boxing gym and plays the piano.