How PoisonGPT and WormGPT Brought the Generative AI Boogeyman to Life

KEY TAKEAWAYS

The release of malicious LLMs like PoisonGPT and WormGPT demonstrates how generative AI can be used to commit cybercrime as part of a growing underground economy.

AI-driven cyberattacks may not be a new concern, but the growing adoption of generative AI has made autonomous cyberattacks more accessible than ever before.

When ChatGPT launched in November 2022, the risk of large language models (LLMs) being used to create phishing emails or malicious code was largely theoretical. Today, weaponized language models like WormGPT are openly available for purchase on the dark web, while the PoisonGPT proof of concept has shown how an open-source model can be covertly poisoned to spread misinformation.

Now, anyone can pay for a subscription to a malicious LLM and begin generating phishing emails at scale to target organizations.

As anxiety over the security risks of generative AI increases, organizations need to be prepared to confront a significant uptick in automated social engineering scams.

Why ChatGPT Clones Have Changed the Game

At the start of this year, Darktrace observed a 135% increase in novel social engineering attacks, coinciding with the release of ChatGPT. The study was one of the first to indicate an uptick in AI-generated phishing emails.

However, the release of WormGPT and PoisonGPT in July highlights the next phase in weaponized AI: the spread of malicious ChatGPT-inspired clones.


These tools are purpose-built to develop scams and don’t feature the “restrictive” content moderation policies of legitimate generative AI chatbots, which would need to be jailbroken before they could be used harmfully.

Kevin Curran, IEEE senior member and professor of cyber security, told Techopedia:

“Cybercriminals launching phishing attacks is nothing new, but WormGPT and FraudGPT, both large language models (LLMs) which claim to get around the restrictions of the ‘normal’ LLMs, are certainly going to make it easier for them to do so.”

One way these dark LLMs help cybercriminals is by enabling non-native English speakers to create well-written, convincing scams with minimal grammatical errors, making it more likely that targets will click on them.

For instance, WormGPT, built on the open-source LLM GPT-J, has been trained specifically on malware-related data, giving it the ability to instantly create scam emails in multiple languages that can sidestep the victim’s spam filter.

This generative AI solution makes it easier for hackers to generate scams at scale while making it more difficult for users to detect them.

Generative AI: What’s the Damage?

Dr. Niklas Hellemann, psychologist and CEO of security awareness training provider SoSafe, suggests that AI-generated malicious emails, such as those created via WormGPT and FraudGPT, can be more effective at misleading users than those written by humans.

“Our social engineering team’s latest studies have shown that AI-generated phishing emails can be created at least 40% faster while clicking and interaction rates are steadily rising when compared to human-generated phishing – in fact, interaction rates with AI-generated emails (65%) have now overtaken those of human-generated emails (60%).”

“Scaling of personalization through AI means that even using very minimal publicly available information, spear-phishing attacks have a massively increased success rate,” Hellemann added.

Although other studies have concluded that AI-generated phishing emails are less effective than those written by humans, if hackers perceive weaponized LLMs as effective tools for breaching organizations, more of these solutions are likely to emerge.

Just as the success and profitability of ransomware attacks led to a thriving Ransomware-as-a-Service (RaaS) economy, with cyber gangs selling pre-built ransomware payloads, defenders must be prepared to meet the next generation of automated cyberattacks if demand for dark LLMs increases.

Doubling Down on Employee Awareness

Phishing emails are the main threat vector created by LLMs and generative AI. These scam emails exploit human error to trick the victim into downloading a malicious attachment or visiting a phishing website where their login credentials can be harvested.

As a result, organizations need to double down on investing in the human factor of cybersecurity. That means investing in security awareness training for employees so that they have the knowledge and experience necessary to detect phishing emails when they encounter them.

This goes beyond short, digestible e-learning courses: it should also involve live phishing simulations, where employees are sent fake phishing emails to assess how effectively they identify malicious content, as sketched below.
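At its core, a phishing simulation is just a templated test email with a unique tracking link per recipient, so clicks can later be matched back to individuals. The Python sketch below is a minimal, hypothetical illustration of that idea; the SMTP host, sender address, tracking domain, and employee list are all placeholder assumptions, not references to any real service or product.

```python
# Minimal phishing-simulation sketch (all hosts and addresses are
# hypothetical). Sends a templated test email to each employee with
# a unique tracking link, so clicks can be matched to recipients.
import smtplib
import uuid
from email.message import EmailMessage

SMTP_HOST = "smtp.example.internal"                 # assumption: internal mail relay
SENDER = "it-support@example.com"                   # assumption: plausible-looking sender
TRACKING_BASE = "https://phish-test.example.com/t"  # assumption: your tracking endpoint

EMPLOYEES = ["alice@example.com", "bob@example.com"]  # assumption: test group

def build_message(recipient: str, token: str) -> EmailMessage:
    """Create a simulated phishing email with a per-recipient tracking link."""
    msg = EmailMessage()
    msg["From"] = SENDER
    msg["To"] = recipient
    msg["Subject"] = "Action required: password expiry notice"
    msg.set_content(
        "Your password expires today. Review your account here:\n"
        f"{TRACKING_BASE}/{token}\n"
    )
    return msg

def run_campaign() -> dict[str, str]:
    """Send one test email per employee; return token -> recipient mapping."""
    tokens = {}
    with smtplib.SMTP(SMTP_HOST) as server:
        for recipient in EMPLOYEES:
            token = uuid.uuid4().hex
            tokens[token] = recipient
            server.send_message(build_message(recipient, token))
    return tokens

if __name__ == "__main__":
    sent = run_campaign()
    print(f"Sent {len(sent)} simulated phishing emails")
```

When an employee clicks the link, the tracking endpoint records the token; the resulting click rate gives a baseline against which training progress can be measured over repeated campaigns.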

That being said, it’s important to recognize that although human error can be reduced, it can’t be eliminated completely. So it’s a good idea to incorporate other cybersecurity best practices, such as implementing identity and access management (IAM) tools to apply multi-factor authentication to user accounts as an extra layer of security.
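To make the MFA layer concrete, the sketch below verifies a time-based one-time password (TOTP) as a second factor using the open-source pyotp library. The enrollment flow, account name, and issuer are simplified assumptions for illustration; in production, the secret would be stored encrypted and checked by the IAM platform itself.

```python
# Minimal TOTP second-factor sketch using pyotp (pip install pyotp).
# Secret storage and user lookup are simplified assumptions.
import pyotp

def enroll_user() -> str:
    """Generate a per-user TOTP secret at MFA enrollment time.

    In practice the secret is stored encrypted server-side and shared
    with the user's authenticator app via a QR code of this URI."""
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="alice@example.com", issuer_name="ExampleCorp"  # hypothetical values
    )
    print("Scan this URI with an authenticator app:", uri)
    return secret

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Check the 6-digit code the user submits after their password."""
    return pyotp.TOTP(secret).verify(submitted_code)

if __name__ == "__main__":
    secret = enroll_user()
    code = input("Enter the 6-digit code from your app: ")
    print("MFA passed" if verify_second_factor(secret, code) else "MFA failed")
```

Even if a phishing email harvests a password, the attacker still lacks the rotating code from the victim's device, which is what makes this extra layer effective against credential theft.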

Likewise, high-value accounts with access to credentials and secrets can also be protected with privileged access management (PAM), monitoring privileged accounts and revoking access if anomalous activity is detected.
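The sketch below illustrates the underlying idea of PAM-style monitoring with a deliberately simple rule: flag privileged logins from IP addresses or at hours not previously seen for that account, then revoke the session. The event format, baseline profile, and revoke_access helper are all hypothetical; real PAM products apply far richer behavioral analytics than this.

```python
# Toy PAM-style monitor (all names, IPs, and thresholds are hypothetical).
# Flags privileged logins from unseen IPs or outside an account's usual
# working hours, then "revokes" the session.
from dataclasses import dataclass

@dataclass
class LoginEvent:
    account: str
    source_ip: str
    hour: int  # 0-23, local time of the login

# Hypothetical per-account baseline learned from historical activity.
BASELINE = {
    "svc-db-admin": {"ips": {"10.0.8.15"}, "hours": range(8, 19)},
}

def is_anomalous(event: LoginEvent) -> bool:
    profile = BASELINE.get(event.account)
    if profile is None:
        return True  # unknown privileged account: treat as anomalous
    return (event.source_ip not in profile["ips"]
            or event.hour not in profile["hours"])

def revoke_access(event: LoginEvent) -> None:
    # Placeholder: a real PAM tool would terminate the session and
    # rotate the account's credentials here.
    print(f"REVOKED: {event.account} from {event.source_ip} at {event.hour}:00")

def monitor(events: list[LoginEvent]) -> None:
    for event in events:
        if is_anomalous(event):
            revoke_access(event)

monitor([
    LoginEvent("svc-db-admin", "10.0.8.15", 10),    # normal working-hours login
    LoginEvent("svc-db-admin", "185.220.101.4", 3), # new IP at 3 a.m. -> revoked
])
```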

Make Risks Manageable with Proactivity

Even though the arrival of weaponized LLMs in the underground economy introduces new risks for enterprises, organizations can reduce their exposure by being proactive and giving employees the skills they need to identify even the best-written scam emails.


Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, he wrote for VentureBeat, Forbes Advisor, and other notable technology platforms, covering the latest trends and innovations in technology.