U.S. Treasury: Gen AI and Deepfakes Make it Easier to Con Financial Institutions

KEY TAKEAWAYS

  • Fraudsters are leveraging AI for attacks on financial institutions, utilizing deepfakes to impersonate clients and bypass security measures.
  • Detection of AI-generated deepfakes remains challenging and the U.S. Treasury is looking to assist in data sharing among financial institutions to stop attacks.
  • Two recent incidents saw companies tricked into sending large sums, ranging from $250,000 to $25 million, in deepfake scams.
  • Defensive AI, coupled with shared fraud data, offers potential mitigation. Vigilance against deepfake technology is crucial in the evolving threat landscape.

The U.S. Department of the Treasury has this week released a report detailing how fraudsters are using artificial intelligence (AI) to launch attacks against financial institutions.

In a report titled Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, the Treasury highlights that recent developments in AI have made it easier for cybercriminals to use deepfakes to pose as clients of financial institutions and gain access to accounts.

The Treasury also warned about the impact that generative AI has had on the threat landscape more broadly.

“Generative AI can help existing threat actors develop and pilot more sophisticated malware, giving them complex attack capabilities previously available only to the most well-resourced actors. It can also help less-skilled threat actors to develop simple but effective attacks,” the report said.

How AI and Deepfakes Are Helping Attackers Up Their Con Game

Marcus Fowler, CEO of Darktrace Federal, a company listed in the report as an external participant, warned Techopedia:

“The use of AI among attackers is still in its infancy, and while we don’t know exactly how it will evolve, we know it is already lowering the barrier to entry for attackers to deploy sophisticated techniques, faster and at scale.”

According to the Treasury, while financial institutions face a variety of AI-enabled threats, the most successful attacks are those built on social engineering and identity spoofing.

On one end of the spectrum, threat actors can use large language models (LLMs) like ChatGPT or Gemini to create more convincing phishing emails in multiple languages, making it easier to launch successful business email compromise attacks against financial institutions. These models can also help write content for phishing sites.

At the other end of the spectrum, malicious entities can use AI to mimic a financial institution’s customers via voice and video to bypass the identity verification process.

The Threat of AI-Generated Deepfakes

Out of the threats outlined in the report, AI-generated deepfakes stand out as the most concerning because they’re difficult to detect.

After all, while employees can still be tricked by phishing emails, most people are at least aware that they encounter them on a day-to-day basis; the same can't be said for synthetic media.

Few people expect attackers to be able to clone a CEO's voice to scam them, yet that's exactly what has happened.

In one example highlighted in the report, the Treasury documents how fraudsters used AI-generated audio to impersonate a company's CEO and instruct its UK subsidiary to transfer money to a supplier, resulting in a loss of nearly $250,000.

Unfortunately, this isn't the only example of this style of attack. In February 2024, it was reported that a scammer managed to trick an employee at an undisclosed Hong Kong company into transferring $25 million by posing as the company's chief financial officer on a video conference call.

As AI voice and video generation technology becomes increasingly sophisticated (just look at the highly realistic synthetic video produced by OpenAI's Sora), employees can't be expected to reliably spot deepfakes.

The Treasury clearly recognizes this detection problem, stating in the report that “it appears that even live video interactions with a known client may be no longer sufficient for identity verification because of advances in AI-driven video-generation technology.”

Finding a Solution

One of the main challenges in finding a solution to AI-driven threats is that there is a fraud data divide between large and small financial institutions.

“The largest barrier for smaller financial institutions in utilizing AI for fraud detection is not model creation but with quality and consistent (standardized) fraud data,” Narayana Pappu, CEO at Zendata, a data security and privacy compliance firm, told Techopedia via email.

The Treasury suggests that greater information sharing around fraud could be one way to address this imbalance, highlighting the work of the American Bankers Association (ABA) to develop an information-sharing exchange detailing fraudulent activity.

To support this effort, the Treasury also notes that the U.S. Government may be able to “contribute to a data lake of fraud data” to help train detection solutions across the industry.
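To give a concrete sense of what “standardized” fraud data might look like, here is a minimal sketch in Python. The schema, field names, and values are illustrative assumptions on our part, not the ABA exchange's actual format or anything specified in the Treasury report.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class FraudEvent:
    """One hypothetical standardized fraud report an institution could contribute."""
    reported_at: str        # ISO 8601 timestamp of the report
    channel: str            # e.g. "wire", "ach", "card"
    technique: str          # e.g. "voice_deepfake", "bec_phishing"
    amount_usd: float       # attempted or realized loss
    confirmed: bool         # confirmed fraud vs. merely suspected
    institution_type: str   # coarse bucket, e.g. "community_bank"

# A voice-deepfake wire fraud attempt, serialized for submission
# to a shared exchange or data lake.
event = FraudEvent(
    reported_at=datetime.now(timezone.utc).isoformat(),
    channel="wire",
    technique="voice_deepfake",
    amount_usd=250_000.0,
    confirmed=True,
    institution_type="community_bank",
)
print(json.dumps(asdict(event), indent=2))
```

The value of a common schema like this is that detection models trained by one institution can consume reports contributed by another without costly data cleaning, which is precisely the gap Pappu identifies for smaller institutions.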

While concerns over exposing confidential information or personally identifiable information (PII) may hinder data sharing across the industry, Pappu explains that there are ways to mitigate the risk.

“Techniques such as differential privacy can be used to facilitate information between financial institutions without exposing individual customer data, which might be [a] concern preventing smaller financial institutions from sharing information with other financial institutions,” Pappu said.
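As a rough illustration of the technique Pappu mentions, the sketch below adds calibrated Laplace noise to an aggregate fraud count before it is shared, so the released number carries an epsilon-differential-privacy guarantee. The scenario, epsilon value, and function name are illustrative assumptions, not any institution's actual implementation.

```python
import numpy as np

rng = np.random.default_rng()

def dp_fraud_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a fraud count with epsilon-differential privacy.

    A counting query changes by at most 1 when one customer's records
    are added or removed (sensitivity = 1), so Laplace noise with
    scale = sensitivity / epsilon satisfies epsilon-DP for the count.
    """
    sensitivity = 1.0
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a bank reports roughly how many deepfake-related wire fraud
# attempts it saw this month, without the shared number revealing
# whether any one customer's case is included.
print(round(dp_fraud_count(true_count=42), 1))
```

The key design point is that noise is added to aggregates rather than to individual records, so institutions can exchange useful fraud statistics while mathematically bounding what the shared numbers reveal about any single customer.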

The Bottom Line

AI has brought a fast-moving threat landscape to financial institutions. However, if the industry commits to sharing fraud data to train AI models, it is likely that these threats will become less prominent.

Defensive AI, when combined with the right data, has the potential to help identify phishing scams and deepfakes in a way that human users can’t.

Until then, be wary that cloned voices and deepfake videos could be the spoofing emails of the 2020s.
