What is WormGPT?
WormGPT is a malicious artificial intelligence (AI) chatbot built on the open-source GPT-J large language model (LLM). It can interpret and respond to natural-language text in English, French, Chinese, Russian, Italian, and Spanish.
The chatbot is alleged to have been trained on malware-related data and has no content moderation guardrails, which means threat actors can use it to create phishing scams and malicious code.
In a Twitter post, WormGPT’s creators shared an example where the virtual assistant generated a Python script to “get the carrier of a mobile number.”
https://twitter.com/wormgpt/status/1680856705175896068
What Risks Does WormGPT Present?
On 13 July, email cybersecurity vendor SlashNext released a blog post highlighting the danger of WormGPT, explaining how malicious actors on cybercrime forums were marketing the product as “an alternative to ChatGPT” that “lets you do all sorts of illegal stuff and easily sell it online in the future.”
The researchers then prompted WormGPT to write an email that could be used in a business email compromise (BEC) attack: a message, posing as the business’s CEO, that pressures an account manager to “urgently pay an invoice.”
WormGPT introduces new risks for organizations because it enables cyber criminals to generate scam emails quickly at scale without any coding knowledge or expertise. In a matter of seconds, an individual can enter a prompt and develop a scam email that can be used to trick users into infecting devices with malware.
The ability to create scam emails at scale creates new challenges for enterprise cybersecurity because users have to correctly identify scam emails every single time, while threat actors need only a single click to gain entry to an environment.
ChatGPT vs. WormGPT: What’s the Difference?
ChatGPT is a legitimate LLM-based chatbot released by OpenAI in November 2022 that processes and generates text in compliance with a content moderation policy.
In contrast, WormGPT is designed for creating BEC and phishing attacks without any content moderation guardrails (although its creators claim it can also be used to help detect BEC attacks).
https://twitter.com/wormgpt/status/1683385904092655617
OpenAI attempts to prevent malicious use of ChatGPT through a content moderation policy designed to stop the chatbot from spreading hate speech or misinformation, or being used to develop malicious content.
However, despite these guardrails, cybercriminals can still use a mixture of creative prompt engineering and jailbreaks to sidestep the vendor’s content moderation and generate phishing emails and malicious code.
For example, earlier this year, users on Reddit developed a jailbreak prompt called Do Anything Now (or DAN), which instructs the chatbot to role-play as an AI that has “broken free of the typical confines of AI and does not have to abide by the rules set for them.”
After jailbreaking the tool, a user can exploit it to create offensive content or even compose phishing emails. It is worth noting that LLMs can also be a valuable tool for non-native speakers who want to translate a phishing email into another language to make it as convincing as possible.
In the past, organizations like Europol have warned about the risk of tools like WormGPT and their ability to automate cyberattacks, stating that “dark LLMs trained to facilitate harmful output may become a key criminal business model of the future, whereby it will become easier than ever for malicious actors to perpetrate criminal activities with no necessary prior knowledge.”
How Can Organizations Mitigate the Risks of Tools like WormGPT?
WormGPT is just one of a growing number of malicious LLM-driven tools, such as FraudGPT, that aim to use generative AI to help users commit cybercrime. These tools are unlikely to be the last to use LLMs in a criminal context, so organizations need to be prepared to address an uptick in AI-generated phishing attacks and malware.
Organizations can attempt to protect themselves by taking the following actions:
- Conducting phishing simulation training to teach employees how to detect phishing scams;
- Advising employees not to click on links or attachments in emails or SMS messages from unknown senders;
- Activating multi-factor authentication (MFA) on user accounts to limit the damage of stolen credentials (a minimal verification sketch follows this list);
- Defining a process to report phishing attempts to the security team;
- Configuring Domain-based Message Authentication, Reporting and Conformance (DMARC) to prevent hackers from spoofing your company domain (see the example record after this list);
- Deploying a spam filter to reduce the volume of phishing emails reaching end users;
- Installing anti-malware software on end-user devices to reduce the risk of infection.
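To illustrate the MFA item above, here is a minimal sketch of server-side verification of a time-based one-time password (TOTP) using the open-source pyotp library. The user email, issuer name, and secret handling are hypothetical placeholders, not a production design:

```python
# Minimal TOTP verification sketch using pyotp (pip install pyotp).
# Assumes per-user secrets are generated at MFA enrollment and stored securely.
import pyotp

# Generated once per user during enrollment (store encrypted in practice).
user_secret = pyotp.random_base32()

# Shown to the user as a QR code so an authenticator app can enroll
# (hypothetical user and issuer for illustration).
provisioning_uri = pyotp.TOTP(user_secret).provisioning_uri(
    name="alice@example.com", issuer_name="ExampleCorp"
)

def verify_login(secret: str, submitted_code: str) -> bool:
    """Return True only if the submitted 6-digit code matches the current TOTP window."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates one 30-second step of clock drift between devices.
    return totp.verify(submitted_code, valid_window=1)
```

Even if a phishing email harvests a password, the attacker still needs a valid, short-lived code to authenticate.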
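And to illustrate the DMARC item, below is an example policy published as a DNS TXT record; the domain and reporting mailbox are placeholders. Note that DMARC builds on SPF and DKIM, which must be configured first:

```
; Example DMARC policy as a DNS TXT record (placeholder domain and mailbox).
; p=quarantine tells receiving servers to treat mail that fails authentication
; as suspicious; rua= directs aggregate reports to the listed mailbox.
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

A common rollout is to start with p=none to monitor the aggregate reports, then tighten the policy to quarantine or reject once legitimate mail sources are verified.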
Using LLMs for Cybercrime
WormGPT is just one of many tools attempting to weaponize generative AI. As AI adoption increases, organizations need to be prepared to address a rise in BEC and phishing scams head-on; otherwise, they run the risk of a data breach.
Focusing on user awareness and educating employees on how to detect phishing attacks is the key to mitigating the risks of BEC attacks going forward.