AI Ransomware Will Surge in Next Two Years, UK’s GCHQ Warns


The UK’s National Cyber Security Centre (NCSC) warns that artificial intelligence (AI) will significantly escalate global ransomware attacks over the next two years.

The warning comes in a new report titled “The near-term impact of AI on the cyber threat”, which assesses how AI will affect the potency of cyber operations and the implications for the cyber threat landscape.

The GCHQ-backed agency emphasizes that AI is already being used in malicious cyber operations and predicts a significant increase in both the volume and impact of cyberattacks, particularly ransomware.

Key Takeaways

  • The UK’s National Cyber Security Centre (NCSC) warns that artificial intelligence will significantly contribute to the rise of global ransomware attacks in the next two years.
  • AI can lower the entry barrier for less-skilled cybercriminals, enabling more effective access and information-gathering operations.
  • There has been a surge in ransomware attacks against British organizations, including a notable incident at the British Library, which is set to spend £7 million on recovery efforts.
  • Globally, high-profile ransomware attacks have disrupted businesses and governments, with 66% of organizations affected in 2023, according to the Sophos ‘State of Ransomware’ report.

According to the report, AI will lower the entry barrier for novice cybercriminals, hackers-for-hire, and hacktivists. This accessibility will allow less-skilled threat actors to conduct more effective access and information-gathering operations.

Combined with improved victim targeting facilitated by AI, these factors will contribute to the heightened global ransomware threat.

The report also reveals that through 2025 and beyond, AI-enabled capabilities will become increasingly commoditized in both criminal and commercial spheres, likely giving cybercriminals and state actors greater access to them. This trend is anticipated to result in an expanded and improved toolkit for malicious activities.


This latest warning comes barely three months after British Prime Minister Rishi Sunak warned that AI could be humanity’s greatest undoing if proper measures are not implemented to guide its development.

The Prime Minister had convened a global AI summit that led to the Bletchley Declaration last November and the introduction of the world’s first guidelines for secure AI development.

While these efforts are still at an early stage, many expected them to begin curbing AI-driven attacks or slowing the development of open-source generative AI models, thereby reducing ransomware attacks.

The Rising Tide of Ransomware Attacks

According to the latest security incident trend data released by the Information Commissioner’s Office (ICO), there has been a notable increase in ransomware attacks against British organizations.

In the first three quarters of 2023 alone, there were 874 recorded incidents, marking a significant surge compared to the 739 incidents reported for the entire year of 2022.

Among the latest ransomware victims is the British Library, which is set to spend £7 million on recovery efforts. As a result, last December the UK Parliament declared ransomware the number one cyber threat to the country.

At the global level, high-profile attacks have disrupted businesses, governments, and critical infrastructure worldwide, causing billions of dollars in damages. According to the Sophos ‘State of Ransomware’ report, 66% of organizations were affected by ransomware in 2023.

The NCSC’s report suggests that the situation is likely to get worse before it gets better. The combination of AI and ransomware could lead to more frequent and more damaging attacks, putting even more pressure on organizations to improve their cybersecurity defenses.

NCSC CEO Lindy Cameron said:

“We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat.

“The emergent use of AI in cyberattacks is evolutionary, not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term.

“As the NCSC does all it can to ensure AI systems are secure by design, we urge organizations and individuals to follow our ransomware and cybersecurity hygiene advice to strengthen their defenses and boost their resilience to cyberattacks.”

Reacting to the report, Crystal Morin, Cybersecurity Strategist at Sysdig, told Techopedia:

“It should come as no surprise that the bad guys are taking advantage of AI to speed up and improve the success of their attacks — in fact, they’ve been doing so for years.

“As you would assume, attackers are more technically savvy than the average person. I suspect that AI’s shift toward the mainstream makes this revelation all the more pertinent,” she said.

The Warning Extends Beyond the UK

As the UK steps up its fight against AI-enabled cybercrime, it is not alone in sounding the alarm over a potential increase in AI-driven attacks.

Last November, the US Department of Homeland Security (DHS) issued a similar warning (PDF), stating that AI could enable more sophisticated and targeted ransomware campaigns. They urged organizations to implement best practices to safeguard their data and systems.

The European Union Agency for Cybersecurity (ENISA) echoed these sentiments, highlighting how AI could amplify the speed, scale, and complexity of cyberattacks, including ransomware. Together, these reports underscore that AI-driven ransomware is a global threat demanding urgent, coordinated, and proactive action from all stakeholders.

Jeff Schwartzentruber, Senior Machine Learning Scientist at eSentire, told Techopedia he agrees with many aspects of these warnings, noting that as long as open-source large language models (LLMs) can be developed and deployed on consumer-level hardware, attackers will leverage them for exploits.

“For the general population, LLMs refer to one of the mainstream commercial services such as ChatGPT or Gemini, which have strict user policies and guardrails to prevent their usage in malicious activity.

“On the other hand, there exists the relatively unrestricted world of open-source models that have had these guardrails removed, and in some cases, fine-tuned for malicious intent (WormGPT, FraudGPT).”

The implication of these unrestricted models, according to Schwartzentruber, is that attackers gain “a ‘virtual’ expert in crime” that they can leverage in almost every part of the cyber kill chain, enhancing their own capabilities.

“Through this enhanced aid, it is safe to expect the frequency, diversification, and success rate of cyberattacks to increase.”

AI Cyber Crime: The Defensive Way Forward

In the face of this global challenge, the NCSC urges organizations and individuals to implement protective measures, such as regularly updating and patching systems, backing up data, and educating staff about the risks of phishing emails, which are often used to deliver ransomware.

For Morin, awareness and skepticism are the two best practices organizations can implement to improve their cyber resilience against AI-driven attacks.

“Company-wide training and challenges will raise awareness and help keep organizations, employees, and customers abreast of adversarial tactics and techniques. They should also be taught to take everything with a grain of salt.

“The old adage is true: don’t believe everything you see on the internet. With AI, it is more challenging to discern what is authentic and what is generated. It is up to each of us to make that call. Organizations, media outlets, and journalists can and should be fact-checking and refusing to share information that cannot be justified as a precautionary measure.”

Matt Middleton-Leal, Managing Director EMEA at Qualys, advised security teams to look at how they use AI to understand their risks, communicate those risks to businesses, and then take steps to eliminate them through patching and automated remediation.

“To defend against these risks, security teams have to know their estates and the risks that they face. They have to be able to patch quickly and automatically for the majority of applications, which will free up time to concentrate on the biggest risks and most critical applications.

“Lastly, they have to be able to judge risks across all of their organization’s assets, whether these are traditional IT endpoints or more modern cloud services running software containers. Without this accurate picture of what known good activity looks like, it will be hard to spot those attack attempts taking place or any attacker activity when a breach is in motion.”

The Bottom Line

While the NCSC’s report paints a worrying picture, it also serves as a wake-up call. AI, as it is today, represents the latest stride in a continuous journey of progress, compelling individuals and governments alike to evaluate their risk tolerance and acceptance.

Generative AI, with its myriad benefits, is a testament to the technology’s potential. However, the challenge lies in outpacing the AI capabilities of potential attackers. While there is hope that we can rise to this challenge over time, we must acknowledge that cybercriminals will evolve as we do.


Franklin Okeke
Technology Journalist

Franklin Okeke is an author and tech journalist with over seven years of IT experience. Coming from a software development background, his writing spans cybersecurity, AI, cloud computing, IoT, and software development. In addition to pursuing a Master's degree in Cybersecurity & Human Factors from Bournemouth University, Franklin has two published books and four academic papers to his name. His writing has been featured in tech publications such as TechRepublic, The Register, Computing, TechInformed, Moonlock and other top technology publications. When he is not reading or writing, Franklin trains at a boxing gym and plays the piano.