Fake B2B AI Tools Are the New Ransomware Bait & It’s Working


Cybercriminals know your favorite tools, and they can use them to trick you. That may sound alarmist, but it doesn’t change the reality: we now live in a world where business software is becoming harder to trust.

This follows a recent report by Cisco’s threat intelligence organization, which showed how ransomware groups are now cloning popular B2B AI tools to carry out cyberattacks of varying severity.

We take a closer look at how this attack mechanism works and why it’s a growing concern for businesses leveraging AI tools for operations.

Key Takeaways

  • Cybercriminals are creating fake versions of popular AI business tools like NovaLeadsAI, ChatGPT, and InVideo to distribute ransomware and malware.
  • These attacks use SEO poisoning to make malicious download links appear at the top of search results.
  • The CyberLock ransomware demands $50,000 in Monero cryptocurrency and manipulates victims by falsely claiming payments fund humanitarian aid.
  • Lucky_Gh0$t ransomware encrypts smaller files but destroys larger files completely.
  • The Numero malware manipulates Windows GUI components, making systems completely unusable.
  • Businesses should only download AI tools from official sources and implement security measures like MFA and endpoint protection.

AI Tool Cloning as the New Tactic

One of the biggest fears the AI boom has brought is the rise of deepfakes, which are extremely difficult to detect. And while we prepare for a world where we must scrutinize every image and video to tell real from fake, threat actors appear to be shifting their focus to “deepfaking” trusted business applications.

Last month, Cisco Talos drew our attention to the proliferation of cloned business AI tools that mimic the characteristics of legitimate business software solutions.

According to the researchers, known ransomware families like CyberLock and Lucky_Gh0$t, along with a newly discovered malware they call “Numero,” have been circulating in the wild for a while, camouflaged as legitimate AI tool installers.

Based on their findings, the payloads were propagated through SEO poisoning and related techniques, which push the attackers’ fake ads and download links to the top of search engine results or onto platforms like Telegram.

They went on to reveal that these malicious installers mimicked NovaLeadsAI, ChatGPT, and InVideo AI, chosen for their popularity among businesses.

NovaLeadsAI, ChatGPT & InVideo Fake Installers Drive Attack

Each of these three fake installer campaigns shows how well the attackers understand which AI tools businesses rely on most, enabling them to craft convincing malware distribution methods.

Fake NovaLeadsAI Installer

Talos observed that the CyberLock ransomware group created a lookalike version of the NovaLeadsAI website, with a CyberLock ransomware PowerShell script embedded in the downloadable resource file.

According to the report, when users download the fake AI product as a ZIP archive and run the loader executable, it deploys ransomware that encrypts the victims’ files and demands a ransom in return.

Fake lookalike NovaLeadsAI website promoting B2B sales solutions, with a clickbait call-to-action button. Source: Cisco Talos

Fake ChatGPT Installer

Talos also discovered that bad actors lured ChatGPT users into downloading fake installation ZIP files containing the Lucky_Gh0$t ransomware, which poses as a Microsoft open-source executable tool.

The malicious ZIP installer carries the deceptive file name ‘ChatGPT 4.0 full version – Premium.exe’ and also bundles legitimate Microsoft open-source AI tool files, likely to evade anti-malware scan detection.

Comparison of a legitimate ZIP archive with the malicious one containing the Lucky_Gh0$t ransomware executable. Source: Cisco Talos
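Filenames like the one above carry tell-tale red flags that a simple heuristic can catch. As a rough illustration (not a detection product, and the bait-word list is a made-up sample, not from the Talos report), a filter might flag executables dressed up with marketing language or double extensions:

```python
from pathlib import PurePath

# Hypothetical bait words and executable extensions, chosen for illustration.
BAIT_WORDS = {"full version", "premium", "cracked", "free download"}
EXECUTABLE_EXTS = {".exe", ".msi", ".bat", ".vbs", ".ps1", ".scr"}

def is_suspicious_installer(filename: str) -> bool:
    """Return True if the filename shows common malware-lure traits."""
    path = PurePath(filename)
    name_lower = filename.lower()
    if path.suffix.lower() in EXECUTABLE_EXTS:
        # Red flag 1: an executable advertised with marketing bait words.
        if any(word in name_lower for word in BAIT_WORDS):
            return True
        # Red flag 2: a double extension, e.g. 'invoice.pdf.exe'.
        if len(path.suffixes) > 1:
            return True
    return False
```

Run against the lure Talos documented, `is_suspicious_installer("ChatGPT 4.0 full version – Premium.exe")` returns `True`, while a plain `setup.exe` passes. Heuristics like this complement, but never replace, downloading only from official sources.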

Fake InVideo AI Installer

Talos reported that attackers also use a new malware, which they call “Numero,” to imitate the InVideo AI tool installer.

From their findings, the fake installer is a loader that contains a malicious Windows batch file, VB script, and the Numero executable. Once deployed, it affects victims by manipulating the graphical user interface (GUI) components of their Windows operating systems, rendering them unusable.

Execution flow of the fake installer dropping and running the Numero malware payload. Source: Cisco Talos

How Organizations Are Affected

Most ransomware groups are financially motivated. Organizations, especially small and medium-sized businesses, are believed to be prime targets because of their growing adoption of AI technologies.

According to Cisco Talos researchers, many of these unsuspecting small businesses looking for AI solutions could be lured into downloading cloned AI tools that are riddled with malware.

CyberLock ransomware, for instance, lured victims with a bogus free one-year subscription to the NovaLeadsAI app. Instead of delivering the promised offer, the installation file deploys CyberLock ransomware upon execution.

The attackers then demand a $50,000 ransom payment exclusively in Monero (XMR) cryptocurrency, employing psychological manipulation by falsely claiming the payments will fund humanitarian aid in Palestine, Ukraine, Africa, and Asia.

However, Cisco Talos found no evidence of actual data exfiltration capabilities within the malware code.

Even malware that skips encryption entirely can cause serious damage. Cisco Talos research found that Numero, rather than encrypting files, attacks the victim’s Windows interface and renders the system useless once executed.

All these scenarios not only affect business productivity and revenue, but also erode the trust many have in AI solutions.

How Businesses Can Identify & Avoid Fake AI Business Tools

The first and most important line of defense is to download and install software only from trusted sources. Beyond that, the Cisco Talos Threat Intelligence Group recommends the following measures:

  1. Have an endpoint protection tool

    To help detect and prevent the execution of malware, even if you download fake AI tools
  2. Protect your email with threat defense

    So it can block malicious emails that could contain fake AI tool download links
  3. Use firewall protection

    To detect malicious activity associated with fake AI tools and download clickbait
  4. Utilize a secure internet gateway

    To block your team from connecting to malicious domains both on and off your business network
  5. Implement the least privilege principle

    To only allow trusted users secure access to the company network, cloud services, or private applications
  6. Enable multi-factor authentication (MFA)

    To add an extra verification step, so stolen credentials alone cannot compromise accounts
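Alongside those controls, a simple engineering safeguard is to verify every installer’s checksum against the value published on the vendor’s official site before allowing installation. A minimal sketch (the expected hash passed in would come from the vendor; nothing here is from the Talos report):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded file in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_sha256: str) -> bool:
    """Return True only if the file matches the vendor-published hash."""
    return sha256_of(path) == expected_sha256.lower()
```

A mismatch means the file is not the one the vendor published, whatever its filename or search ranking claims, and it should be deleted rather than run.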

The Bottom Line

Cybercriminals are adapting faster than ever, and the current obsession with new AI solutions gives them the perfect bait to lure their victims.

The fact that trusted AI tools are now being impersonated to deliver ransomware calls for a rethink of security strategy. To navigate this tricky threat landscape, businesses must prioritize security awareness training and implement strict software verification processes.

Also, organizations need layered defenses, including endpoint protection and controlled installation policies. Businesses should also establish mandatory verification for all software downloads, regardless of search rankings or promotional offers that seem too good to be true.


Franklin Okeke
Technology Journalist

Franklin Okeke is an author and tech journalist with over seven years of IT experience. Coming from a software development background, his writing spans cybersecurity, AI, cloud computing, IoT, and software development. In addition to pursuing a Master's degree in Cybersecurity & Human Factors from Bournemouth University, Franklin has two published books and four academic papers to his name. Apart from Techopedia, his writing has been featured in tech publications such as TechRepublic, The Register, Computing, TechInformed, Moonlock, and other top technology publications. When he is not reading or writing, Franklin trains at a boxing gym and plays the piano.
