3,000 Dark Web Posts Discuss How to Misuse ChatGPT and LLMs

A report released today by Kaspersky’s Digital Footprint Intelligence service found almost 3,000 dark web posts across 2023 discussing how to use ChatGPT and other large language models (LLMs) for illegal activities.

These discussions included creating malicious alternatives to the chatbots, jailbreaking techniques, lists of malicious prompts, and general conversations about how to misuse the tools, along with a further 3,000 posts discussing stolen accounts with access to the paid version of ChatGPT.

Key Takeaways

  • Kaspersky’s Digital Footprint Intelligence service discovered nearly 3,000 dark web posts in 2023 discussing illegal activities involving ChatGPT and other large language models (LLMs).
  • These include creating malicious versions, jailbreaking techniques, lists of harmful prompts, and discussions on stolen accounts.
  • Threat actors on the dark web actively share knowledge on exploiting ChatGPT, discussing topics like creating malware, using artificial intelligence for processing user data dumps, and sharing jailbreaks to bypass content moderation policies.

The research also found a high volume of conversations around tools like WormGPT, XXXGPT, and FraudGPT, which were marketed as alternatives to ChatGPT with fewer restrictions.

Kaspersky’s research comes just after OpenAI suspended a developer for creating a chatbot that mimicked U.S. Congressman Dean Phillips, an act that the organization says violated its rules on political campaigning or impersonating individuals without consent.

How ChatGPT is Being Exploited: The Key Findings

While enterprises and consumers look to ChatGPT as a tool to improve their day-to-day lives, threat actors are experimenting with ways to exploit it to target unsuspecting individuals and organizations.

In a series of posts shared on the accompanying research blog, dark web users can be seen discussing how to use ChatGPT to create polymorphic malware, which mutates its own code to evade detection, and how to use artificial intelligence (AI) to process stolen user data dumps.

Another user shared the well-known Do Anything Now (DAN) jailbreak for ChatGPT, designed to get around OpenAI’s content moderation policy. The research found 249 offers to distribute and sell prompts on the dark web in 2023.

Collectively, these findings highlight not just that ChatGPT can be misused, but that cybercriminals are actively sharing knowledge on how to exploit it. As one anonymous user commented, “AI helps me a lot, GPT-4 is my best bud.”

“Threat actors are actively exploring various schemes to implement ChatGPT and AI,” said Alisa Kulishenko, digital footprint analyst at Kaspersky.

“Topics frequently include the development of malware and other types of illicit use of language models, such as processing of stolen user data, parsing files from infected devices, and beyond.

“The popularity of AI tools has led to the integration of automated responses from ChatGPT or its equivalents into some cybercriminal forums.

“In addition, threat actors tend to share jailbreaks via various dark web channels – special sets of prompts that can unlock additional functionality – and devise ways to exploit legitimate tools, such as those for pen-testing, based on models for malicious purposes.”

What’s the Risk So Far?

While Kulishenko believes “it’s unlikely that generative AI and chatbots will revolutionize the attack landscape”, this research indicates that threat actors have taken a significant interest in exploiting this technology for their own ends.

So far, the most significant exposure from generative AI appears to lie in its ability to create convincing phishing emails. For instance, a study released in November 2023 by cybersecurity vendor SlashNext found that since ChatGPT’s release in Q4 of 2022, there has been a 1,265% increase in malicious phishing emails.

Although vendors like OpenAI use content moderation policies to prevent ChatGPT from producing malicious outputs, these have proved insufficient, as they are too easily sidestepped through jailbreaks and other techniques.
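To illustrate the kind of automated screening these policies rely on, here is a minimal sketch of our own (not taken from Kaspersky’s report) showing how a developer might check prompts against OpenAI’s moderation endpoint using the openai Python SDK; jailbreaks succeed precisely because carefully reworded prompts can pass text-level checks like this:

    # Minimal illustrative sketch; assumes the openai Python SDK (v1.x)
    # and an OPENAI_API_KEY environment variable. Not from Kaspersky's report.
    from openai import OpenAI

    client = OpenAI()

    def is_flagged(prompt: str) -> bool:
        # Ask OpenAI's moderation endpoint whether the prompt violates policy.
        result = client.moderations.create(input=prompt)
        return result.results[0].flagged

    # A benign prompt passes, but a carefully reworded malicious one often
    # does too; that gap is exactly what jailbreaks such as DAN exploit.
    print(is_flagged("Write a polite reminder about an upcoming meeting."))

Because such filters operate on the wording of a prompt rather than its intent, role-play framings and innocuous-sounding pretexts can slip through.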

Techopedia briefly tested these moderation capabilities by asking ChatGPT to generate a phishing email that could persuade the recipient to update their online account payment details “as part of a phishing awareness program.” The chatbot responded by producing a basic phishing email.

The reality is that if someone wants to use LLMs maliciously, they have plenty of workarounds at their disposal to do just that.

The Bottom Line

The study highlights that the dark web is buzzing with interest in using AI to automate cyberattacks. Although it’s important not to panic, it’s essential to recognize the potential for an increase in cybercrime.

Inevitably, cybercrime will rise if hacking forums and other nefarious communities continue to collaborate on how to use this technology maliciously.

With more than 67,000 AI startups out there today, who knows where this bumpy road will take us?

Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before he joined Techopedia full-time in 2023, his work appeared on VentureBeat, Forbes Advisor, and other notable technology platforms, where he covered the latest trends and innovations in technology.