AI Jacking

What is AI Jacking?

AI jacking is a recently coined cybersecurity term describing a specific kind of cyberattack targeting artificial intelligence (AI) systems. It primarily affects popular AI platforms like Hugging Face. The attack is concerning because a single compromise can affect many users at once.

Hugging Face is central to this issue. The platform is known for hosting open-source machine learning projects, offering many models and datasets used in AI research and development.

The platform gained more users with the growth of generative AI, especially with models like GPT, the basis for OpenAI’s ChatGPT. But its popularity also made it a target for AI jacking.

Techopedia Explains

The attack happens when someone maliciously takes advantage of the way Hugging Face renames its models or datasets. Normally, when a model or dataset gets a new name, the old name redirects to the new one.

But if an attacker claims the old name for their own use, they can replace the original content with something harmful or misleading. This is especially dangerous in machine learning, where data integrity is critical.
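The rename-and-redirect flaw can be illustrated with a minimal sketch. Everything here is hypothetical (the class, method names, and repo names are illustrative, not Hugging Face's actual implementation): a registry redirects old names to new ones, until an attacker re-registers the abandoned name and the redirect is silently overridden.

```python
# Hypothetical sketch of a rename-redirect registry. This is NOT
# Hugging Face's real implementation, just the general mechanism.
class ModelRegistry:
    def __init__(self):
        self.models = {}      # name -> content
        self.redirects = {}   # old name -> new name

    def publish(self, name, content):
        self.models[name] = content
        self.redirects.pop(name, None)  # a fresh publish overrides any redirect

    def rename(self, old, new):
        self.models[new] = self.models.pop(old)
        self.redirects[old] = new       # old name now forwards to the new one

    def resolve(self, name):
        # Follow at most one redirect, mirroring the behavior described above.
        name = self.redirects.get(name, name)
        return self.models[name]


registry = ModelRegistry()
registry.publish("acme/sentiment", "legitimate model weights")
registry.rename("acme/sentiment", "acme-ai/sentiment")

# Downstream users still fetch the old name and are redirected safely:
assert registry.resolve("acme/sentiment") == "legitimate model weights"

# An attacker re-registers the abandoned name, breaking the redirect:
registry.publish("acme/sentiment", "malicious model weights")
assert registry.resolve("acme/sentiment") == "malicious model weights"
```

The key point is the last two lines: nothing in the downstream user's code changed, yet the same name now resolves to attacker-controlled content.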

How Does AI Jacking Work?

AI jacking operates through a series of targeted steps that exploit the structure and functionalities of AI platforms.

Identification of Targets

Attackers begin by identifying popular or widely used AI models and datasets within platforms like Hugging Face. They focus on those with significant dependencies in various projects.

Monitoring for Renaming Events

The attackers closely monitor these AI resources for any renaming events. Such events typically involve changing the name of a model or dataset for reasons like updates, rebranding, or organizational changes.

Registration of Abandoned Names

Once a renaming event occurs, the original name of the resource becomes potentially available. Attackers swiftly register these abandoned names under their control before they are noticed or blocked by the platform’s administrators.

Replacement with Malicious Content

After securing control over the old names, attackers replace the legitimate content with malicious versions. These could be subtly altered models or datasets designed to perform malicious functions, gather data illicitly, or corrupt AI training processes.

Exploitation of Dependency Chains

Many AI applications and systems depend on these resources for their functionality. By compromising a single model or dataset, attackers can potentially infiltrate multiple downstream applications and projects that rely on the integrity of these resources.

Delayed Detection

The changes are often subtle and hard to notice. Because of this, users and developers might not quickly spot the difference, especially if the harmful changes are made to look like normal updates.
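One general defense against this kind of silent substitution (a standard integrity-checking pattern, not a measure the sources here attribute to Hugging Face) is pinning a cryptographic hash of the artifact and verifying it on every download:

```python
import hashlib


def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded artifact against a pinned digest.

    If an attacker swaps the content behind a reused name, the hash
    no longer matches, surfacing an otherwise subtle change immediately.
    """
    return hashlib.sha256(data).hexdigest() == expected_sha256


original = b"legitimate model weights"
pinned = hashlib.sha256(original).hexdigest()  # recorded at first, trusted download

assert verify_artifact(original, pinned)                        # unchanged: passes
assert not verify_artifact(b"malicious model weights", pinned)  # swapped: fails
```

Name-based fetching trusts whoever currently controls the name; hash pinning trusts only the bytes that were originally reviewed.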

Potential for Widespread Impact

The interconnected nature of AI systems means that a single compromised resource can have a ripple effect, impacting a wide range of applications and users. This potential for widespread impact is what makes AI jacking a particularly insidious form of cyberattack.

AI Jacking’s Implications and Limitations

AI jacking is a cybersecurity threat with serious consequences for AI, posing various risks and facing some challenges in its execution.

Impact and Risks

Reduced Trust in AI Platforms: AI jacking can make people less confident in using or contributing to AI models and platforms due to security concerns.

Data Integrity Issues: The accuracy of AI depends on good data. AI jacking risks corrupting this data, leading to flawed AI training and inaccurate results, which is a serious problem in critical areas like healthcare.

Operational Problems for Businesses: Companies using AI can face disruptions from AI jacking, leading to financial loss, work stoppages, and damage to their reputation.

Potential for Spreading False Information: AI jacking could be used to spread misinformation through AI systems, impacting public opinion or causing confusion.

Limitations of AI Jacking

Detection and Response Mechanisms: As awareness of AI jacking increases, so do efforts to detect and respond to such attacks. Improved security protocols and AI auditing practices can limit the effectiveness of AI jacking.

Platform Countermeasures: AI platforms, alerted to the threat of AI jacking, are likely to implement stronger security measures, making it more challenging for attackers to exploit vulnerabilities successfully.

Legal and Ethical Constraints: The legality and ethical considerations surrounding AI jacking can deter potential attackers. Legal consequences and the growing emphasis on ethical AI use serve as deterrents.

How Legit Security Uncovered AI Jacking

Legit Security’s discovery of AI jacking involved a careful examination of how the Hugging Face platform manages its AI models and datasets.

Initial Tests

The team started by renaming their Hugging Face account from “high-rep-account” to “new-high-rep-account.” They observed how the platform redirected the old name to the new one, and noted that the original account name became available for registration again. This suggested a possible security issue.

Demonstrating the Vulnerability

To show how AI jacking works, Legit Security made a demonstration video. In it, they took over an existing model and added harmful code, proving the risks of this vulnerability.

Searching for Vulnerable Projects

Hugging Face doesn’t keep a history of changes to its projects like some other platforms do. So, Legit Security used the Wayback Machine, a tool that archives the internet, to look at past versions of Hugging Face’s models and datasets.

They focused on changes made since 2020, when Hugging Face first started hosting these models and datasets.

Research Process

The team looked at various dates in the Wayback Machine archives and gathered information about the models and organizations of Hugging Face at those times.

They adjusted their methods to match changes in the appearance of Hugging Face’s website over the years.

Identifying Risks

After collecting names, they checked each one to see if it redirected to a new name. A redirect meant the original name had changed, creating a chance for AI jacking.

They found many accounts that could be hijacked this way. There might be even more vulnerable accounts, as not all historical data was available in the archives.

The Bottom Line

In summary, AI jacking is a complex type of cyberattack mainly affecting AI platforms such as Hugging Face. It involves taking over previously used names for AI models and datasets and inserting harmful content into them.

This attack can damage the trust in AI technologies, affect the quality of AI data, and disrupt business operations.

Legit Security’s work in uncovering this issue emphasizes the importance of stronger security and continuous monitoring in the AI field.

As AI becomes more widespread, protecting against threats like AI jacking is key to maintaining safe and responsible AI use.

Maria Webb
Technology Journalist

Maria is a technology journalist with over five years of experience with a deep interest in AI and machine learning. She excels in data-driven journalism, making complex topics both accessible and engaging for her audience. Her work is prominently featured on Techopedia, Business2Community, and Eurostat, where she provides creative technical writing. She holds a Bachelor of Arts Honours in English and a Master of Science in Strategic Management and Digital Marketing from the University of Malta. Maria's background includes journalism for Newsbook.com.mt, covering a range of topics from local events to international tech trends.