Exclusive: Accenture’s 5 Cybersecurity Predictions for 2025

The threat landscape is changing, not least due to the spread of artificial intelligence (AI), which brings new cybersecurity threats that nearly everyone online needs to prepare for.

Global services giant Accenture recently reached out to Techopedia to share cybersecurity predictions for 2025 from its top security leaders.

Key predictions include increased automation to bridge the cybersecurity skills gap, a growth in deepfakes, and insights into the new attack surfaces presented as AI agents enter the workplace.

Techopedia explores Accenture’s predictions and adds our own views on how the threat landscape will evolve in 2025.

Key Takeaways

  • Accenture security leaders share 5 top cybersecurity predictions for 2025.
  • Automation could provide an answer to the cybersecurity skills gap.
  • We’re likely to see an increase in deepfakes alongside traditional data breaches.
  • AI agents entering the workforce will introduce new vulnerabilities.
  • Company boards will start to push for quantum security.

Accenture’s 5 Cybersecurity Predictions for 2025

1. Companies will Use AI & Automation to Bridge the Cybersecurity Skills Gap

For years, security teams have struggled to keep up with the scale of modern cyber threats — Microsoft reports 600 million cyberattacks a day. The average team deals with too many vulnerabilities across too many systems to keep them all secure. However, Accenture security global lead Paolo Dal Cin believes that the rapid adoption of generative AI will change this reality.

“Companies will increasingly turn to AI to address the cybersecurity skills shortage, using automation to streamline tasks and reduce reliance on specialized talent.

 

“For example, AI will assist security operations analysts by providing context on threats/alerts to help them to make faster, better decisions.”

This prediction is interesting because it aligns with a similar outlook from Gartner, which suggests that by 2028, the adoption of generative AI will close the skills gap, removing the need for specialized education in 50% of entry-level cybersecurity positions.
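As an illustration of the kind of automated triage Dal Cin describes, the sketch below enriches security alerts with human-readable context and ranks them by a simple priority score so analysts see the riskiest items first. All names, weights, and fields here are hypothetical assumptions; a production SOC would draw severity and asset data from its own tooling.

```python
from dataclasses import dataclass, field

# Hypothetical severity weights, for illustration only.
SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 7, "critical": 10}

@dataclass
class Alert:
    source: str
    severity: str
    asset_criticality: int          # 1 (lab machine) to 5 (domain controller)
    matched_threat_intel: bool      # alert matches a known-bad indicator
    context: list = field(default_factory=list)

def enrich_and_score(alert: Alert) -> int:
    """Attach context notes and return a triage score (higher = handle first)."""
    score = SEVERITY_WEIGHTS.get(alert.severity, 1) * alert.asset_criticality
    if alert.matched_threat_intel:
        score += 10
        alert.context.append("Matches a known threat-intel indicator")
    alert.context.append(f"Asset criticality: {alert.asset_criticality}/5")
    return score

alerts = [
    Alert("edr", "high", 5, True),        # high severity on a critical asset
    Alert("firewall", "low", 2, False),   # low-priority noise
]
queue = sorted(alerts, key=enrich_and_score, reverse=True)
```

The point of the sketch is the workflow, not the scoring formula: the model adds context an analyst would otherwise look up by hand, then ordering by score lets a smaller team work the queue top-down.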

We have also previously explored how companies are already supplementing their human workforces with AI agents.

2. Deepfakes Combine with Data Breaches

On the other side of the coin, the increase in the adoption of generative AI will inevitably create new cyber threats.

One of the most significant concerns is that of deepfakes, which can be used to conduct social engineering scams and trick victims into handing over sensitive information.

We saw deepfakes weaponized in early 2024 when a fraudster convinced a finance worker to pay out $25 million after using deepfake technology to pose as the company’s chief financial officer.

In the future, Robert Boyce, global cyber resilience lead at Accenture, sees deepfakes becoming an even more central part of the modern attacker’s toolkit.

“Deepfakes will be combined with previously leaked data to increase the chances of successful exploitation. For instance, adding business context and personal information makes the deepfakes seem more legitimate.

“Also, deepfakes will shift to target mid-level employees rather than executives — targeting executives makes the attack seem less legitimate.

“When was the last time your CEO reached out asking you to download software? But a local tech support person seems much more reasonable.”

3. Organizations will Need to Focus on Secure AI Agents

Throughout 2024, companies like OpenAI and Microsoft have been developing AI agents to help augment the capabilities of human workforces. However, these agents also introduce new vulnerabilities that need to be secured.

After all, if agents can perform useful tasks, they need privileges — and if the agents are accessed and exploited by a threat actor, they could be used to wreak havoc.

“AI agents are coming to the workplace and will have the same level of access to systems and data as human employees,” Damon McDougald, cyber protection lead at Accenture, told Techopedia.

“Regulatory requirements for managing these AI agents will soon be as stringent as those for human employees.

“Identity security plays an essential role in managing access and orchestration of these agents, what they have access to, what they can do within an organization, and how it’s enforced.

“Auditors may soon require organizations to demonstrate how they’re managing access for AI agents, much like they ask for human access today.

“There will likely also be a marketplace for external use where organizations can sell access to their AI agents to other organizations.

“Cybersecurity will play a major role in these agents’ authentication, credentialing, and authorization process, ensuring that they can securely perform tasks in external environments.”
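A minimal sketch of the identity controls McDougald describes might look like the following: each agent identity is granted an explicit set of resource/action permissions, every access decision is enforced against those grants, and each decision is logged for the auditors he mentions. The agent IDs, resource names, and grant structure are illustrative assumptions, not any real product's API.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

# Hypothetical least-privilege grants: agent identity -> allowed (resource, action) pairs.
GRANTS = {
    "agent:invoice-processor": {("erp:invoices", "read"), ("erp:invoices", "approve")},
}

def authorize(agent_id: str, resource: str, action: str) -> bool:
    """Enforce least privilege for an AI agent and record the decision for auditors."""
    allowed = (resource, action) in GRANTS.get(agent_id, set())
    audit_log.info("%s agent=%s resource=%s action=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(),
                   agent_id, resource, action, allowed)
    return allowed

can_read = authorize("agent:invoice-processor", "erp:invoices", "read")
can_pay = authorize("agent:invoice-processor", "bank:payments", "execute")
```

Treating the agent as a first-class identity, with grants and an audit trail, mirrors how human access is managed today and is what would let an organization answer an auditor's question about what an agent can touch.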

4. 2025 Will Be the Time Boards Push to Prepare for Quantum

As quantum computers begin to enter the market, security leaders will need to be conscious of how they can be used to decrypt public key encryption and other trusted security mechanisms.

Tom Patterson, quantum security lead at Accenture, predicts that boards will become increasingly vocal about their desire for quantum preparedness.

“The United Nations has officially declared 2025 to be the International Year of Quantum Science and Technology (IYQ). This will usher in a new focus on defending enterprises against quantum computing decryption capabilities.

“Coupled with new NIST guidance on new encryption standards and deprecation dates for our most commonly used cryptography, it will cause a push from boards downwards to ensure that enterprises remain safe and compliant in their ever-important use of encryption to manage their organizations,” Patterson said.
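To make the board-level ask concrete, here is a hedged sketch of a cryptographic inventory check that flags systems still relying on quantum-vulnerable public-key algorithms. The algorithm names and inventory format are assumptions for illustration; the replacements named are the post-quantum algorithms NIST standardized in 2024 (ML-KEM in FIPS 203, ML-DSA in FIPS 204).

```python
# Public-key algorithms broken by a large-scale quantum computer via Shor's
# algorithm (illustrative list; symmetric ciphers like AES-256 are not on it,
# since they only need larger keys, not replacement).
QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}

# NIST-standardized post-quantum replacements (2024).
PQC_REPLACEMENTS = {
    "key-exchange": "ML-KEM (FIPS 203)",
    "signature": "ML-DSA (FIPS 204)",
}

def flag_for_migration(inventory):
    """Return inventory entries that use quantum-vulnerable public-key crypto."""
    return [item for item in inventory if item["algorithm"] in QUANTUM_VULNERABLE]

systems = [
    {"system": "vpn-gateway", "algorithm": "RSA-2048"},
    {"system": "backup-archive", "algorithm": "AES-256"},
]
to_migrate = flag_for_migration(systems)
```

The hard part in practice is building the inventory at all: most organizations do not have a complete list of where public-key cryptography is used, which is why boards pushing for quantum readiness typically start there.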

5. Digital Content Needs Assurance Markers to Ensure Integrity

With deepfakes and synthetic content running wild, organizations have to play a more active role in helping users differentiate between real and AI-generated content.

For Daniel Kendzior, data and AI security lead at Accenture, this can be done by embedding assurance markers that help differentiate between human-generated and synthetic content.

“Organizations will need to fundamentally rethink how they approach media content provenance and apply the same rigorousness as security departments have been instilling around phishing.

“I expect this to take the form of a two-stage transformation of media integrity mechanisms and user behavioral shift,” Kendzior said.

“One, organizations will be investing in embedding trust and authenticity markers leveraging zero-knowledge proof technologies across the content development lifecycle, while platform providers will be creating interoperable standards for content transformation verification — moving towards a digital trust environment where media integrity attributes are as common as a file name and date stamps.

“Two, organizations will also need to invest in changing the behavior of users to a model where media is assessed based on its provenance and integrity assurance markers rather than assessing the content itself.”
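The assurance markers Kendzior describes can be sketched as a claim that binds a content hash to its creator, plus an integrity tag over that claim; verification then checks both the tag and the hash. Real provenance schemes such as C2PA use certificate-based signatures, so the HMAC with a demo key below merely stands in for the signing step and is purely illustrative.

```python
import hashlib
import hmac
import json

# Stand-in signing key for the sketch; a real scheme would use an
# asymmetric key pair backed by a certificate, not a shared secret.
SIGNING_KEY = b"demo-key-not-for-production"

def attach_marker(content: bytes, creator: str) -> dict:
    """Bundle content with a provenance claim and an integrity tag over it."""
    digest = hashlib.sha256(content).hexdigest()
    claim = json.dumps({"creator": creator, "sha256": digest}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "claim": claim, "tag": tag}

def verify_marker(bundle: dict) -> bool:
    """Check the marker is intact and still matches the content."""
    expected = hmac.new(SIGNING_KEY, bundle["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, bundle["tag"]):
        return False  # claim was altered or forged
    digest = hashlib.sha256(bundle["content"]).hexdigest()
    return json.loads(bundle["claim"])["sha256"] == digest  # content swap detection

bundle = attach_marker(b"quarterly results video", "newsroom@example.com")
```

Tampering with either the content or the claim breaks verification, which is the behavioral shift Kendzior points to: users judge media by whether its provenance checks out, not by how convincing it looks.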

The Bottom Line

If there’s anything to take home from Accenture’s predictions, it’s that AI hasn’t finished shaking up enterprise security.

Just as these tools are transforming people’s work and home lives, they’re also introducing new threats that organizations must be prepared to address.

The good news is that just as AI introduces threats in the form of deepfakes, it also opens the door to new automated capabilities that could help security teams to better protect their environments.

Cybersecurity takes on a new shape every few years — and that evolution shows no sign of stopping.

Tim Keary
Technology Writer

Tim Keary is a technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, his work appeared on VentureBeat, Forbes Advisor, and other notable technology platforms, where he covered the latest trends and innovations in technology. He holds a Master’s degree in History from the University of Kent, where he learned of the value of breaking complex topics down into simple concepts. Outside of writing and conducting interviews, Tim produces music and trains in Mixed Martial Arts (MMA).