As cyber threats evolve, identity security has moved to the frontline of defense against cyberattacks. A recent report from SlashNext found that threat actors are leveraging generative AI tools to aid phishing scams, leading to a 341% increase in malicious phishing attacks in just six months.
This massive wave of new attacks includes malicious link attacks, business email compromise (BEC), QR code phishing, and attachment-based email and multi-channel messaging threats.
SlashNext reports that since the public launch of ChatGPT in November 2022, there has been a 4,151% increase in malicious phishing messages sent.
Today, 80% of all breaches involve compromised identities, costing enterprises billions of dollars annually. With the proliferation of SaaS applications and artificial intelligence, the volume of attacks targeting identities is also growing: identity-based attacks rose 71% year over year, according to the 2024 IBM X-Force Threat Intelligence Index.
Key Takeaways
- AI is dramatically escalating identity theft. Cybercriminals are leveraging AI to create highly personalized phishing attacks, deepfakes, and automated fraud bots, posing unprecedented threats to individuals and organizations.
- The costs of AI-fueled identity theft go far beyond financial loss. Reputational damage, operational disruptions, legal expenses, and increased cyber insurance premiums are significant consequences.
- The threat of AI-powered identity theft is rapidly evolving. New attack methods and techniques are emerging constantly, demanding continuous adaptation and investment in security measures.
How AI Identity Theft Tech Wages War Within Organizations
Identity-based attacks are becoming increasingly sophisticated as attackers leverage AI to automate and scale their campaigns.
Jim Alkove, former Chief Trust Officer at Salesforce and Corporate Vice President of Microsoft Security, and current CEO of Oleria, spoke with Techopedia about identity security.
“Identity is the greatest challenge in cybersecurity today. If you don’t know who and what’s interacting with your data, you can’t secure it.”
Beyond costing U.S. companies an average of $9.48 million per breach, identity-related security incidents cause significant reputational damage. Alkove spoke about the hidden costs of identity theft.
“When a company suffers a major security incident, it not only impacts business operations — it can also impact trust with customers and partners longer-term.”
“With a record number of breaches this year and new AI-driven security threats, it’s never been more critical for enterprises to evolve and modernize their approach to identity security.”
“Techniques such as AI-driven phishing, where machine learning algorithms generate highly personalized phishing emails, are becoming more common,” Alkove said.
“Additionally, AI is used to create deepfakes, which can deceive both individuals and systems, further complicating security efforts.”
Alkove explained that attackers are using AI to analyze vast amounts of data to identify vulnerabilities in software, forcing security teams to constantly adapt and evolve their defenses, often with limited resources.
Hal Lonas, CTO of Trulioo, told Techopedia that the damage from AI-fueled identity theft extends far beyond financial losses.
“The attacks can affect employee morale, creating a feeling that internal teams aren’t prepared for the fight. That can lead to unproductive reactions.”
These attacks can also stifle innovation, as organizations come to perceive new products or solutions as potential targets for hackers. This puts companies at a disadvantage, as they delay adopting advanced technologies over security concerns.
In 2023, the Identity Theft Resource Center (ITRC) tracked 3,205 data compromises, a 72% increase from the previous high in 2021. Meanwhile, the Federal Trade Commission (FTC) reported over $10 billion in fraud losses in 2023, a 14% increase from 2022, highlighting the heightened vulnerability and consequent distrust among consumers.
Jim Kaskade, CEO of Conversica, a conversational AI provider for revenue teams, spoke to Techopedia about the future of AI identity theft.
“I think going forward, 60% of these attacks will be AI-fueled. AI-driven attacks, such as deepfakes, synthetic identities, and sophisticated phishing scams, have become more prevalent.”
As AI identity theft thrives, business continuity also suffers. The ITRC found that nearly 11% of all publicly traded companies were compromised in 2023. These breaches resulted in substantial operational disruptions and the diversion of resources to manage and mitigate security incidents. Kaskade told Techopedia that the trend is expected to accelerate, driven by AI.
“AI enhances the efficiency and success rate of attacks, leading to more frequent and severe operational disruptions. AI algorithms can automate large-scale phishing and credential stuffing attacks, causing widespread disruptions.”
Legal costs are also on the rise, with losses in 2023 exceeding $10 billion. “With AI facilitating more sophisticated and large-scale attacks, legal ramifications, including fines and lawsuits, are becoming more common,” Kaskade said.
Additional hidden costs include post-breach recovery and brand rebuilding, as well as rising cyber insurance premiums.
Alkove from Oleria added that the constant threat of attacks creates a high-stress environment for security teams, leading to burnout and, often, human error.
“Lost business opportunities are another long-term cost, as potential customers and partners may be wary of engaging with a company with a history of security incidents,” Alkove said.
Next-Generation AI Fraud Bots and GenAI Attacks
Christopher Tennyson, Director of Product Marketing at Kount, an Equifax company focused on fraud prevention, identity, and compliance, spoke to Techopedia about how malicious bots have been upgraded with AI, complicating fraud detection.
Tennyson explained that earlier bots, which performed narrow, scripted actions, were easy to detect with machine learning models and AI that could identify non-human behavior.
“Then fraudsters had to adapt and began using AI themselves. A simple script could now enable behaviors that mimic those of a real person.”
“The appearance of Gen AI has only increased these capabilities by enabling bots to bypass widely accepted bot detection checks like CAPTCHA,” Tennyson said.
Tennyson said that deepfakes are another area where they are witnessing significant advancements. “By collecting a few examples of a person’s face and voice, a fraudster can now mimic a video or voice call impersonating that person,” Tennyson added.
Using GenAI, cybercriminals can also generate fluent text in any language, bypassing automated security models that rely on linguistic nuances and errors to detect an attack.
Tennyson explained that these criminal innovations impose numerous costs on organizations: operational resources are diverted, hurting performance and growth, while ongoing fraud brings financial losses, stock price declines, credit and funding limitations, and increased technology investment to fight new vulnerabilities.
The Bottom Line
As Chris Hills, Chief Security Strategist at BeyondTrust, told Techopedia: “When it comes to organizational impact involving AI-fueled identity theft, we haven’t even begun to see the surge or long-lasting impact of what will come.”
Hills added that AI attacks will ramp up, peak, and become a critical issue, just like ransomware did.
“AI-fueled identity threats are in their early stages. They will become more sophisticated, and AI will enable threat actors to streamline attack paths at speeds we have never seen or been exposed to,” Hills said.
“When it comes to AI attacks on identities, we are seeing such an evolution across deepfake audio and video identity impersonation (that) end users, engineers, and executives are now having to question what is real vs. what is fake. It’s gotten THAT good!”
The convergence of AI and cybercrime has ushered in a new era of identity theft with exponentially higher stakes.
The hidden costs of these attacks extend far beyond financial losses, impacting trust, operations, and even innovation. As AI continues to evolve, organizations must urgently adapt their security strategies to combat this growing threat or risk facing catastrophic consequences.
FAQs
How has AI caused a rise in phishing attacks?
What are the hidden costs of AI-fueled identity theft?
What impact has AI-fueled identity theft had on businesses?
References
- The State of Phishing 2024 (SlashNext)
- CrowdStrike 2024 Global Threat Report (CrowdStrike)
- IBM Security X-Force Threat Intelligence Index 2024 (IBM)
- Jim Alkove – Seattle, Washington, United States | Professional Profile (LinkedIn)
- Identity security reimagined (Oleria)
- Cost of a Data Breach 2023: Geographical Breakdowns (IBM)
- Global Online Identity Verification Service – KYC, KYB, AML (Trulioo)
- As Nationwide Fraud Losses Top $10 Billion in 2023, FTC Steps Up Efforts to Protect the Public (FTC)
- Jim Kaskade – Conversica (LinkedIn)
- AI-Powered Conversations to Unlock Revenue (Conversica)
- Identity Theft Resource Center 2023 Annual Data Breach Report Reveals Record Number of Compromises; 72 Percent Increase Over Previous High (ITRC)
- Christopher Tennyson, MBA – Kount, an Equifax Company (LinkedIn)
- Fraud Detection and Chargeback Management Solutions (Kount)
- Equifax | Credit Bureau | Check Your Credit Report & Credit Score (Equifax)
- Christopher Hills – BeyondTrust (LinkedIn)
- Identity and Access Security (BeyondTrust)