The rise of AI cybersecurity threats has changed the way companies think about risk. Attacks are faster, more convincing, and harder to spot. With tools that can clone voices, write convincing fake emails, and build malware in minutes, AI-driven attacks are becoming part of everyday life.
Based on new data published by the World Economic Forum (WEF), this article explores what organizations say they’re most worried about when it comes to AI attacks – and how those concerns are changing cybersecurity.
Key Takeaways
- AI cyberattacks are getting harder to detect, with 42% of organizations reporting a successful social engineering attack in the past year.
- Deepfakes and impersonation are the top concern for 47% of companies, as these tools are used to trick staff over email, phone calls, and video meetings.
- Concern over data leaks is growing, with 22% of companies highlighting the risk of private information slipping out through everyday tools.
- Managing AI tools is also becoming more difficult, as different teams use them in different ways without clear oversight.
- Less visible threats, like supply chain risks and system tampering, are still serious, even if they don’t get as much attention.
How AI Has Changed the Cyberattack Game
In recent years, Generative AI has made it much easier for people to carry out cyberattacks, even if they don’t have advanced skills. In the past, writing malicious code or creating convincing scams took time and expertise, but that’s no longer the case.
Here’s how artificial intelligence (AI) has shifted the cybersecurity landscape: according to the WEF research, 42% of organizations experienced a successful social engineering attack in the past year, and that figure may grow as the tools become more powerful and easier to use.
AI cyberattacks are also getting harder to spot. Many of them are smart, fast, and designed to feel real, which makes them more dangerous than traditional cyber threats.
Why Deepfakes & Impersonation Top the List of Cybersecurity AI Concerns
Out of all the AI cybersecurity threats emerging today, one concern stands out above the rest: adversarial capabilities. These are tactics that use AI to impersonate people, generate fake content, or manipulate conversations, and they’re getting harder to detect.
In the WEF report, nearly half of all organizations (47%) said this was their number one concern when it comes to generative AI. And it’s easy to see why.
One of the most alarming tactics is deepfake impersonation. Using video, audio, or writing that mimics real people, attackers can now fool staff in ways that were almost impossible before.
Here’s what makes this threat so effective:
- Fake messages and voices sound real: AI can now copy a CEO’s voice or writing style with surprising accuracy.
- Long conversations build trust: Some attackers use deepfakes across multiple interactions, slowly convincing staff to share details or take action.
- It’s spreading fast: Accenture found a 223% rise in deepfake tools being traded on dark web forums between Q1 2023 and Q1 2024.
- It’s keeping CISOs up at night: At the Annual Meeting on Cybersecurity 2024, 55% of CISOs said deepfakes pose a moderate-to-significant threat to their organizations.
This kind of manipulation is harder to detect than traditional phishing. Because the content feels natural and human, it’s easier for staff to fall for it, especially if they’re under pressure.
Data Leaks: How GenAI Increases Exposure Risk
Among today’s growing AI cyber threats, one area that’s often overlooked is the risk of data exposure. As more teams begin using generative tools at work, the chances of leaking sensitive or internal information are rising, sometimes without anyone realizing it.
In the WEF survey, 22% of respondents said their biggest concern was data leaks linked to GenAI use.
These risks aren’t always caused by attackers; often, they’re the result of everyday tools being used without proper guidance.
Here are some of the ways GenAI can lead to information slipping through the cracks:
- Training on public data: Some GenAI models are trained using online content that hasn’t been filtered properly. If that training set includes private or copyrighted information, pieces of it can appear in future outputs.
- Accidental sharing in replies: Tools that summarize emails or generate text might unknowingly include confidential details, especially if they’re pulling content from sensitive documents or past prompts.
- Lack of clear limits: Without clear rules, staff may share internal materials with public-facing tools, unaware of how that data will be stored or used.
To lower the risk, companies should take a few practical steps:
- Red-teaming helps test how easily a tool might reveal private content.
- Clear guidance and training can support safer habits when using these tools in daily work; a minimal example of an automated check that backs those habits up is sketched after this list.
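As an example of such an automated check, here is a minimal sketch of a pre-submission filter that scans a prompt for obviously sensitive patterns before it reaches a public-facing GenAI tool. The pattern list and the `check_prompt` function are illustrative assumptions rather than a description of any specific product, and a real data loss prevention policy would be far broader and tuned to the organization’s own data.

```python
import re

# Illustrative patterns only (an assumption for this sketch): a real policy
# would cover the organization's own data types and naming conventions.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "key- or token-like string": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    draft = "Summarize this thread from jane.doe@example.com about our unreleased Q3 numbers."
    findings = check_prompt(draft)
    if findings:
        print("Hold this prompt for review before sending it to an external tool:", ", ".join(findings))
    else:
        print("No obvious sensitive patterns found.")
```

Red-teaming complements a check like this: instead of waiting for an accident, testers deliberately probe the tool with crafted prompts to see whether it echoes back confidential material from documents or earlier inputs.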
Knowing how generative AI can be used in cybersecurity also means understanding where it might put information at risk. Good habits and strong processes make a big difference in keeping data safe.
Managing Cybersecurity in the Age of Generative AI
As more teams begin using generative tools at work, it’s becoming harder for companies to keep track of how they’re being used. This adds pressure on those trying to manage security across the whole organization.
In the WEF survey, 14% of companies said that governance – the ability to oversee and manage these tools – is their main concern.
The issue isn’t just about risk; it’s also about visibility. Tools are being used in so many different ways that it’s easy for things to fall through the cracks.
Here’s why managing cybersecurity is getting more complicated:
- Different teams use GenAI in different ways: It’s no longer something handled only by IT. Marketing, legal, HR, and product teams may all be using these tools, each with their own goals and habits.
- Questions around ownership are growing: When GenAI creates code or content, it’s not always clear who owns it or who’s responsible if something goes wrong.
- More teams need to be involved: Strong governance depends on collaboration between IT, legal, compliance, and leadership. One department can’t do it all.
Beyond the Obvious: Other AI Cybersecurity Threats
Some AI cyber threats are easier to spot than others. Deepfakes, phishing, and data leaks get most of the attention, but there are quieter risks that can still cause serious problems, especially as generative tools become part of daily operations.
In the WEF survey, 17% of organizations highlighted “other” concerns that don’t always make it into security checklists.
These included:
- Supply chain issues: There’s a risk of harmful code being hidden inside third-party tools or software updates, especially when teams rely on open-source libraries; a simple integrity check is sketched after this list.
- Manipulated systems: Attackers may find ways to interfere with how generative tools work, changing their behavior without being noticed.
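One practical habit against the supply chain risk above is verifying that a downloaded tool, model file, or update matches a checksum published by its maintainers. The sketch below is a minimal illustration in Python; the file name and expected hash are placeholders, and in practice the trusted value would come from the project’s release page or a lock file rather than being hard-coded.

```python
import hashlib
from pathlib import Path

# Placeholder values for the sketch; both would normally come from outside the script.
ARTIFACT = Path("third_party_update.tar.gz")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    if not ARTIFACT.exists():
        print(f"{ARTIFACT} not found; nothing to verify.")
    elif sha256_of(ARTIFACT) == EXPECTED_SHA256:
        print("Checksum matches the published value; the artifact can be installed.")
    else:
        print("Checksum mismatch: do not install this artifact.")
```

Most package managers can enforce similar checks automatically (for example, by pinning dependencies to known hashes), so the broader point is to treat AI tools and model files with the same discipline as any other third-party dependency.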
These risks show that artificial intelligence in cybersecurity brings new layers of complexity. It’s not just about protecting data; it’s also about keeping systems reliable, understanding where your tools come from, and making sure responsibilities are clear.
The Bottom Line
Generative tools are constantly changing the way digital threats look and feel. Many AI cybersecurity threats are now faster, more convincing, and harder to track.
As these tools become part of everyday work, teams need to pay closer attention. Understanding how AI in cybersecurity fits into daily operations and setting clear rules around its use can help companies stay protected and avoid mistakes that could put systems or data at risk.
References
- Global Cybersecurity Outlook 2025 (Reports.WeForum)
- Deepfake Technology: New Cybersecurity Threats (Accenture)