AI Names 2024’s Biggest Cybersecurity Threats – and AI is One of Them

In 2024, cybercrime is expected to cost the world’s internet users a total of $9.22 trillion. By 2028, that figure is projected to reach almost $14 trillion.

Armed with Artificial Intelligence (AI)-enabled arsenals of cyber weapons, bad actors – fraudsters, hackers, state-sponsored cyberterrorists – are carrying out attacks with greater venom and verve, at ever-growing rates of speed and success. That means cybersecurity becomes that bit more important seemingly every day – but it, too, needs AI to keep up.

We wanted to learn more about AI, cybersecurity, and how the two intersect. To settle some big questions – is AI a positive societal force, or a harmful one? – and learn more about how AI both hinders and helps the world’s ongoing tussle with cyber criminals. But we also wanted to pinpoint the biggest cybersecurity threats to internet users, and what part AI has to play.

So we thought: who better to ask than AI itself?

Emboldened, we asked five leading AI language models (ChatGPT, Perplexity AI, Google Bard, Claude, and Llama) what they thought about the cybersecurity landscape – and AI’s role in facilitating cybercrime. Their responses will shock, surprise, and even entertain you; but they contain plenty of good advice for staying safe on the internet, too.

First, though, let’s summarize the state of play with our top 10 AI and cybersecurity statistics.

AI in Cybersecurity: Top 10 Statistics

  • By 2030, the global AI in cybersecurity market is expected to be worth $133.8 billion.
  • Breaches at organizations with fully deployed security AI solutions cost, on average, $1.8 million less than breaches at businesses without them.
  • Organizations with AI cybersecurity took 100 days less to identify and contain these data breaches when they occurred, compared to those lacking them (IBM, 2023).
  • 75% of security professionals have seen an increase in cyberattacks in the past year, and 85% blame AI.
  • Almost half (46%) of those same respondents believe generative AI (AI with the ability to create content) will leave organizations more vulnerable to cyber attacks than they were before AI (Deep Instinct, 2023).
  • Businesses are adopting an increasingly proactive, rather than reactive, approach to cybersecurity, with 2023 seeing a 95% increase toward this mentality vis-à-vis 2022 (Deep Instinct, 2023).
  • Among cybersecurity experts’ top concerns around AI implementation were increases in privacy concerns (39%), undetectable phishing attacks (37%), and both the volume and velocity of attacks (33%) (Deep Instinct, 2023).
  • 34% of organizations are already using or implementing AI cybersecurity tools.
  • 69% of enterprises believe AI in cybersecurity is necessary due to the burgeoning number of threats that human analysts are unable to get to.
  • With AI, there will be a 150% increase in predictive analysis for cyber threats by 2025 (Zipdo, 2023).

Want more of this kind of data? Head to Techopedia’s roundup of the latest cybersecurity statistics.

AI Cybersecurity Insights – Infographic

Infographic showing key insights into cybersecurity by AI tools

AI and Cybercrime: Hindrance or Help?

AI and cybercrime have a complicated relationship.

On the one hand, AI models and algorithms enable the world’s cybersecurity teams to spot threats faster – sometimes before they’ve even arisen – and act swiftly to combat them. But on the other hand, it’s the very tool enabling the ever-smarter, increasingly dangerous threat cybercriminals pose: allowing them to launch faster attacks, and on a grander scale.

So, is AI a ‘goodie’ or a ‘baddie’ – or is it all a little more nuanced than that?

How AI is Being Used to Commit Cybercrime

As the potential applications for AI have grown – from creating blogs and artwork to its emerging role in personalized medicine – so too have the opportunities to exploit it.

And nowhere is AI’s “dark side” more prominent than in the cybersecurity realm.

Three quarters of security professionals, for example, observed an increase in cyberattacks over the past year – and, of those surveyed, 85% pointed the finger at generative AI-equipped cybercriminals (Deep Instinct, 2023). The UK government agrees, writing in a report titled Safety and Security Risks of Generative Artificial Intelligence to 2025 that AI will enable “faster-paced, more effective and larger-scale cyber-intrusion.”

What is AI’s role in facilitating and fueling cybercrime, then? Let’s take a look.

It Increases the Speed and Volume of Attacks

One of AI’s biggest selling points is its ability to automate – and therefore, speed up – attacks.

Through AI, hackers can engage in a wide range of malicious activity (such as Distributed Denial of Service (DDoS), zero-day, and brute force attacks) at a speed and scale beyond the limits of human capabilities. For example, AI can optimize DDoS attacks – which attempt to bring down a website or network by flooding it with requests – by dynamically adjusting the attack vectors to adapt to changing network conditions.

This increases the longevity and effectiveness of DDoS campaigns – freeing up attackers to execute them with renewed vigor, and at an even greater scale. In 2023 we saw this play out, with Google fending off the internet’s largest DDoS attack to date. The AI-enabled attack, which reached a crescendo of 398 million requests per second, was seven and a half times bigger than any previous attack Google faced.

That same year, Amazon was also rocked by a seismic, AI-fueled DDoS attack, and it’s clearly something that has the industry concerned. A 2023 report by Deep Instinct, titled Generative AI and Cybersecurity: Bright Future or Business Battleground?, found that a third (33%) of the cybersecurity experts it surveyed were concerned about AI’s role in increasing the velocity and volume of attacks – the third-highest reason on the list.

As for brute force attacks (in which an attacker systematically attempts to guess all possible combinations of a password until they land on the right one), AIs are changing the game. One site – which quantifies the amount of time it takes AI to guess passwords of varying lengths – proclaims that any password containing numbers only is instantly guessable by AI. Even more complex passwords, such as ones consisting of eight characters (including numbers, upper and lowercase letters, and symbols), take AI just seven hours to guess correctly.
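
For the technically curious, the arithmetic behind those estimates is straightforward: the number of possible passwords grows exponentially with both length and character-set size. Here’s a rough, illustrative Python sketch – the guessing rate is an assumed figure, and real-world cracking speeds vary enormously with hardware and hashing algorithm:

```python
# Back-of-the-envelope math behind password-cracking estimates.
GUESSES_PER_SECOND = 1e12  # assumed attacker throughput (illustrative only)

CHARSETS = {
    "numbers only": 10,
    "lowercase letters": 26,
    "upper and lowercase letters": 52,
    "numbers + mixed-case letters": 62,
    "numbers, mixed-case letters, symbols": 94,
}

def worst_case_seconds(charset_size: int, length: int) -> float:
    """Time to exhaust every possible password of the given length."""
    return charset_size ** length / GUESSES_PER_SECOND

for label, size in CHARSETS.items():
    print(f"8-char, {label}: {worst_case_seconds(size, 8):,.4f} seconds")
# A numbers-only 8-character password falls in a fraction of a second,
# while each extra character class multiplies the search space.
```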

It Adapts to Specific Defenses

AI’s ability to learn and adapt is one of its greatest attributes – and just one reason it’s such an effective cybersecurity tool.

Yet, as is the case with most of AI’s strengths, it can also be exploited by hackers.

Take malware, for instance. This is software – such as viruses, worms, and ransomware – designed to compromise computer systems, networks, or devices. With AI, attackers can now create malware able to alter its code at will (called “polymorphic malware”), making it extremely hard for firewalls and antivirus software to detect and remove.

Through AI machine learning algorithms (which we’ll discuss shortly), attackers can also adapt to the patterns and processes of intrusion detection solutions. As Mihoko Matsubara, Chief Cybersecurity Strategist at NTT Corporation, explains, this enables AI-empowered attackers to spot and circumvent defenses with greater ease – and exploit any systemic gaps or weaknesses with the utmost ruthlessness.

“Malicious actors will use AI to continue to accelerate malware,” Matsubara states, “and in passive reconnaissance to identify targets, software, and weaknesses to exploit.”

It Enables More Sophisticated Attacks

By harnessing AI’s ability to crunch – and learn from – enormous datasets, hackers are able to launch more sophisticated, more convincing attacks.

Take phishing, for instance: a form of cyberattack in which fraudsters reach out to a victim via email or SMS, masquerading as a legitimate entity to convince the recipient to click on a link designed to harvest their sensitive information. (The number of these malicious URLs, incidentally, has been rising every year without fail: up 61% in 2022, to almost 30 million in 2023.)

Phishing – a form of social engineering, in which hackers manipulate and deceive their victims through psychological techniques – is one of the most common cybersecurity threats. In 2023, 94% of organizations fell victim to a phishing attack; of those, 96% were negatively impacted.

In 2022, it was the most common type of cybercrime in the US, with 300,497 complaints.

The most difficult part of executing a phishing attack is making the messaging appear legitimate – and now, AI is helping bad actors surmount this barrier. Through AI (specifically, a subset called Natural Language Processing, or NLP) fraudsters can create convincing, highly personalized phishing communications. NLP algorithms give phishers the tools to mimic human communication styles – leading to more successful attacks.

Phishing as a subset of cybercrime includes several subsets of its own: smishing (SMS phishing), vishing (voice phishing), spear phishing (a more targeted form), whaling (phishing that targets high-profile individuals), and credential harvesting. Of these, credential harvesting proved most popular in 2022, comprising over three quarters (76%) of zero-hour attacks.

How AI is Being Used to Fight Cybercrime

In any discussion of AI’s impact on cybersecurity in 2024, it’s easy to cast AI in the role of a pantomime villain – especially given all the aforementioned ways AI enables cybercrime.

However, this would be to overlook the debate’s finer points – namely, the myriad ways experts are using AI to prevent, detect, and combat all forms of cybercrime.

And to great effect, too. IBM’s Cost of a Data Breach 2023 survey, for example, found that the use of AI and automation saved organizations almost $1.8 million in data breach-related costs.

Given that the average cost of a data breach in 2023 was $4.45 million, that means data breaches cost organizations with AI-enabled cybersecurity only $2.65 million – more than 40% less. Plus, AI-powered cybersecurity setups helped organizations speed up the breach identification and containment processes by an average of more than 100 days.

Just how is AI defending our livelihoods and businesses from cybercrime, then?

Let’s dive deeper.

Enhanced Threat and Phishing Detection

AI algorithms are able to integrate a wide range of data into their approach to detect cyberthreats as they occur.

This includes, but isn’t limited to:

  • Behavioral analysis, which analyzes user behavior to identify strange patterns, or any deviations from the norm that could suggest malicious activity.
  • Advanced Persistent Threat (APT) detection, which continuously monitors for subtle, long-term indicators of compromise that traditional security systems might miss.
  • AI-enhanced Security Information and Event Management (SIEM) technologies, which automate the correlation and analysis of security events from various sources to enable faster and more accurate threat detection.

How does this look in practice? Well, recent research found that AI is extremely effective at analyzing code for threats: identifying 70% more malicious scripts than conventional methods. When it came to detecting attempts by malicious scripts to target a device, AI’s accuracy was also up to 300% higher than that of traditional techniques (VirusTotal, 2023).
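
To illustrate the principle behind behavioral analysis – in a vastly simplified, hypothetical form – here’s what flagging a deviation from a user’s normal pattern can look like in Python. Real AI-driven tools use far richer features and learned models; this just demonstrates the idea:

```python
import statistics

def is_anomalous(history: list[float], new_value: float, threshold: float = 3.0) -> bool:
    """Flag new_value if it sits more than `threshold` standard deviations
    from the mean of a user's historical values (e.g. login hour, data volume)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold

# Example: a user who normally logs in around 9am suddenly logs in at 3am.
usual_login_hours = [9, 9.5, 8.75, 9.25, 10, 9, 8.5]
print(is_anomalous(usual_login_hours, 3))    # True  -> raise an alert
print(is_anomalous(usual_login_hours, 9.5))  # False -> business as usual
```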

As for phishing detection, AI systems sift through enormous data sets – which include email content, user interactions, website characteristics, plus known and suspected cases of phishing historically – to look for trends and patterns associated with these types of cyber attacks.

Take your email provider’s spam filter, for example. Every day, it uses AI to go silently and dutifully about its work, identifying and blocking phishing emails before they reach your inbox. (Google alone blocks around 100 million spam emails every day.)

This involves scanning incoming emails for known phishing patterns – such as malicious attachments and suspicious links – and removing anything that raises a red flag. AI content filters also analyze what’s being said in these emails (and how it’s being said) to make decisions. Emails with urgent calls for action or requests for sensitive information usually end up blocked; as do ones with misspelled, grammatically incorrect, or hyperbolic language.
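
As a toy illustration of that kind of content analysis – production filters rely on models trained over millions of messages, and the cues and weights below are invented purely for the example – consider:

```python
import re

URGENCY_CUES = ["act now", "urgent", "verify your account", "suspended", "immediately"]
SENSITIVE_CUES = ["password", "social security", "credit card", "bank details"]

def phishing_score(email_body: str) -> int:
    """Crude heuristic score: higher means more phishing-like."""
    text = email_body.lower()
    score = 0
    score += sum(2 for cue in URGENCY_CUES if cue in text)    # urgent calls to action
    score += sum(3 for cue in SENSITIVE_CUES if cue in text)  # requests for sensitive info
    score += 2 * len(re.findall(r"https?://\d+\.\d+\.\d+\.\d+", text))  # raw-IP links
    return score

email = "URGENT: your account is suspended. Verify your account at http://192.0.2.7/login"
print(phishing_score(email))  # High score -> route to the spam folder
```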

Plus, it’s not only cybercrime AI is working to detect and prevent. It’s in-person crime, too.

In a recent development more reminiscent of 2002’s Minority Report than any real-life event, AI is now predicting crime a week before it happens – and with a mind-boggling 90% accuracy.

Rapid Incident Analysis and Machine Learning

In the unceasing, ever-shifting world of cybersecurity, there’s often little time for manual threat detection. A 2023 study reported that the Security Operation Centre staff surveyed spend a third (33%) of their time validating and investigating false positives, while 80% claimed that manual effort slowed their processes “a lot” and hampered their overall threat response times (Morning Consult and IBM, 2023).

One solution? An AI-driven approach called rapid incident analysis.

Rapid incident analysis involves the AI-led processing of huge datasets to identify potential security threats. To do this, AI systems employ machine learning algorithms. These enable AI-driven cybersecurity tools to not only analyze logs, network traffic, and other data sources, but to learn from them: synthesizing this new-found knowledge into a continually learning, adapting, threat-fighting approach.

Machine learning enables AI-driven systems to quickly identify anomalies based on historical and current patterns: then act swiftly to isolate and mitigate the threat. Because the algorithms are fed with – and always learning from – actual data, there’s less chance of false positives. (One cybersecurity tool, for example, reduced false positives by an average of 90%.)

This frees up whole teams of human analysts from manual tasks – giving flesh-and-blood cybersecurity experts more time to focus on strategic response and remediation.
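
For a flavor of how this works – and this is a minimal sketch under assumed, made-up traffic features, not any vendor’s actual pipeline – here’s an anomaly detector trained on historical log data using scikit-learn:

```python
from sklearn.ensemble import IsolationForest  # assumes scikit-learn is installed

# Historical "normal" traffic: [bytes_sent_kb, requests_per_minute]
normal_traffic = [[120, 12], [95, 10], [130, 14], [110, 11], [100, 9], [125, 13]]

# Train on past patterns, then score new events against them.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_traffic)

new_events = [[115, 12],     # looks like business as usual
              [9800, 240]]   # looks like exfiltration or a flood

for event, label in zip(new_events, model.predict(new_events)):
    verdict = "ANOMALY - isolate and investigate" if label == -1 else "normal"
    print(event, verdict)
```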

Emphasizing just how important AI’s extra (thousand) pairs of eyes are is Dr Robert Johns, data analyst at Hackr.

“By analyzing petabytes of data in real time, AI systems can spot anomalies that human analysts might overlook: giving security teams an early warning system to get ahead of emerging issues.”

“AI also helps lighten the load by automating routine security tasks,” Dr Johns continues. “In one project, AI sifted through firewall and IDS logs, finding 900 daily alerts we likely would have missed due to fatigue. This shows how AI strengthens defenses by connecting the dots across multiple, diverse sources of information.”

Social Engineering Simulations

With social engineering being implicated in a staggering 98% of cyber attacks, it’s a threat that internet users – especially those representing organizations with privileged access to client and company data – need to be alert to.

Here, AI-powered social engineering simulations can help. These involve a company simulating, via AI, a social engineering attack: testing its employees’ ability to detect and combat them. Through effective simulations, organizations can identify weak points in their own cybersecurity setups, and inform more targeted training programs for staff.

Plus, utilizing AI-generated content in social engineering simulations helps familiarize employees with AI communication styles – something vital for spotting these schemes before they can take root.

What do AIs Think are the Biggest Threats to Internet Users in 2024?

To assess the most pervasive and perilous threats to today’s internet users, we turned to five of cybersecurity’s foremost experts for their top takes. They were all busy.

So, we changed tack to an altogether different set of technology and cybersecurity experts – AI language models. The team we assembled included ChatGPT, as well as four of ChatGPT’s top competitors:

  • ChatGPT
  • Perplexity AI
  • Bard (Google)
  • Llama (Meta)
  • Claude

What do these five leading AI language models have to say about the top threats to our online security? Let’s find out.

AI-Powered Attacks and Systemic Vulnerabilities

Is there any loyalty in the world? Among AIs, apparently not – because of the five AIs we “interviewed”, four – ChatGPT, Perplexity AI, Google Bard, and Llama – pointed to AI-powered attacks as being among the most pressing cybersecurity threats in 2024. (Claude strongly hinted in this direction, but didn’t explicitly name AI among its chief cyber concerns.)

This can be partly explained, however, by the fact that the term “AI-powered attacks” is such a broad one, and encompasses a wide variety of cyber threats and strategies.

Ransomware and malware, for instance, were highlighted by all five AIs. “The prevalence of ransomware continues to grow,” warned ChatGPT, with “attacks becoming more sophisticated, using advanced encryption methods and exploiting zero-day vulnerabilities.” Perplexity AI homed in on the rise of Ransomware-as-a-Service (RaaS) in particular, while Claude simply added that “ransomware remains disruptive”.

This, incidentally, was a view shared by security professionals: 46% of whom identified ransomware as their organizations’ top data security threat.

What’s more, 62% of that same sample agreed that ransomware was their chief C-suite concern, up from 44% in 2022 (Deep Instinct, 2023). In another survey – this time of 2,300 cybersecurity decision-makers from large organizations across 16 countries – respondents identified AI-enabled malware as the single greatest cyber threat.

But back to our AIs. In addition to outlining the dangers of AI-enabled attacks, all five engaged in some form of ‘victim blaming’ – pointing to the vulnerabilities of the targets as a key factor.

Focusing on mobile device vulnerabilities, ChatGPT said “as mobile device usage surpasses traditional computers for many users, the threat of mobile-specific vulnerabilities increases, including insecure apps and mobile phishing attacks.” Claude added – somewhat cryptically – that “critical infrastructure lacks sufficient safeguards”.

As for Llama, it opined that “the increasing use of IoT [Internet of Things] devices has created new vulnerabilities, with many devices open to hacking and other forms of cyber attack”. It also referenced a broader lack of cybersecurity awareness, suggesting that “many internet users still lack basic knowledge of online security best practices and threats.”

Systemic vulnerabilities were also something one human expert we spoke to highlighted.

Ed Skoudis, Founder of Counter Hack and President at SANS Technology Institute, says that one trend to watch in 2024 is the susceptibility of systems to hacking – even AI-based ones. “In 2023, we witnessed the leakage of user data within AI chat systems,” Skoudis states.

“We’ll see a rise in deliberate attacks targeting these systems, particularly through their application programming interfaces (APIs). These APIs – which empower AI systems with various capabilities – haven’t yet received the cybersecurity scrutiny they require. Expect attackers to exploit API vulnerabilities to access and steal user information within AI systems.”

Sophisticated Phishing Attacks

The growing sophistication of phishing attacks was a hot topic among the five AIs we consulted. All of them namechecked phishing (with the exception of Llama, which simply talked about “online scams”), along with the increasing cunning with which these attacks are deployed.

Is that cunning of a human kind, though – or an artificial one? That depends who you ask.

“Phishing techniques have become increasingly sophisticated,” writes ChatGPT, “often leveraging AI and machine learning to create highly convincing fake messages and websites.”

Perplexity AI also pays lip service to “AI-assisted phishing attacks.” On the whole, however, it’s less keen to implicate its own ilk, and prefers to pass the buck to human hackers. It states: “Phishing, ransomware, and data breaches continue to be prevalent threats, with a high percentage of breaches attributed to human error, privilege misuse, and social engineering.” (It’s right, of course: around 88% of all data breaches are caused by human error.)

Sitting on the fence was Claude, stating merely that “phishing scams are increasingly sophisticated, with hackers exploiting vulnerabilities in popular platforms”. (Claude was also the only AI we asked that didn’t specifically name AI in any of its responses. However, Claude did state that “as technology advances, the nature of cyber threats can evolve rapidly” – which amounts, more or less, to the same thing!)

Meanwhile, Llama switched tack, blaming neither AI nor humans – but the internet itself. “The internet has made it easier for scammers to target victims,” Llama claims, “with fake emails, texts, and social media messages being used to trick people into revealing personal information or transferring money.” However, Meta’s AI tool did acknowledge the growing threat of “AI-powered phishing attacks and AI-powered malware.”

We listed AI-fueled malware as one of Techopedia’s top 8 AI trends to keep an eye on in 2024. Go check out our full list for the other seven!

Deepfakes, Misinformation, and AI-Generated Scams

Four out of five of the AIs we chatted with expressed concern around deepfakes, and the ability of their artificial comrades to perpetrate fake content-led scams. Rather than discuss the nuances of these scams, though, our AIs focused instead on their repercussions for society: namely, their role in the spread of misinformation and propaganda.

With 2024 bringing elections in the US and UK – and high-profile geopolitical conflicts continuing to rage in Gaza and Ukraine – there’s scarcely been a more fertile environment for fake news to take hold.

“Deepfakes are becoming increasingly sophisticated,” said Bard, “making it harder to discern real from fake content. This, coupled with the spread of misinformation and disinformation, can have a chilling effect on democracy, fuel social division, and erode trust in information sources.”

“The internet makes it easy for false or misleading content to spread rapidly on social networks and other platforms,” Claude added. “This can manipulate public opinion, influence elections, promote extremist views, and more.” Llama furthered that deepfakes and AI-generated scams “can have serious consequences, such as influencing political decisions or causing public panic.”

Adding to the debate, Anatoly Kvitnitsky – the very-much-human founder and CEO of AI or Not – says: “AI video becomes indistinguishable from real video to the human eye, helping bad actors create deepfakes of customers and employees for malicious activity. Election season will only highlight this.”

Kvitnitsky also points to the role of deepfakes in enabling other forms of fraudulent activity, such as the ability to create fake IDs. This, Kvitnitsky says, “opens up new ways to beat verification processes and facilitate money laundering and other illegal activity”.

Cyberwarfare and State-Sponsored Attacks

The political consequences of cyberattacks were something the five AIs we “interviewed” were particularly attuned to. And not just with deepfakes or misinformation, but with state-sponsored cyberwarfare, too.

ChatGPT raised concerns around the “rising prevalence” of this type of nation state-perpetrated espionage. “These attacks can target critical infrastructure, steal intellectual property, and influence political processes,” it added. Bard said something similar, claiming that “cyberattacks targeting critical infrastructure like hospitals and schools are becoming more frequent and disruptive.”

Chiming in, Perplexity AI highlighted the “significant concern” posed by international cyberwarfare, particularly “with major elections taking place in various countries”.

By contrast, Claude offered an alternative take, focusing not so much on the pervasiveness or peril of state-sponsored cyber attacks themselves, but rather on the “vulnerabilities in internet infrastructure” that enable them to be so catastrophically effective: including “software bugs, natural disasters, equipment failures, or attacks”. (Implicating, incidentally, neither humans nor AI.) Llama, on the other hand, blames “geopolitical tensions, which can impact internet security; some countries use cyberwarfare as a form of espionage or sabotage.”

No AI made reference to a specific country, but – going by recent data, at least – China is a key culprit. In 2023 alone, Chinese-linked hackers launched attacks on government and political organizations in Europe and Asia (February); on political organizations in Ukraine and Taiwan, and government entities in Thailand, Indonesia, and Vietnam (March); and on telecommunications service providers in Africa (April).

China was also involved in a spear phishing campaign against a prominent Belgian politician, and in compromising the security of a US outpost in Guam. Chinese-linked cyber criminals were also implicated in the hacking of the Canadian, Cambodian, Pakistani, Kenyan, Japanese, Uzbekistani, and Korean governments – all in 2023.

As with deepfakes, the prevalence of state-sponsored cyber attacks can have grave consequences when election time comes around.

These attacks raise valid questions around election interference – particularly in countries flirting with authoritarianism, or already living under the iron thumb of a despot – and remind us that, when it comes to cybersecurity, there’s plenty at stake.

Company Data Leaks and Breaches

When a cyberattack successfully targets an organization, it can result in a data breach – the exposure of a company’s most sensitive data. Exposure isn’t always the result of an attack, however: in the case of a data leak, it can come down to simple human error.

Data breaches are a huge cybersecurity threat, with 2,814 incidents in 2023 leading to a staggering 8.215 billion records exposed. So it was strange that only two of the five AIs we spoke to (ChatGPT and Perplexity AI) mentioned data breaches in their responses.

“With increasing amounts of personal data stored online, data breaches remain a significant threat,” wrote ChatGPT, stating the obvious. Perplexity AI, as you’ll remember, was quick to highlight human, rather than AI, culpability for data breaches. But the language model also foretold 2024’s “likelihood of data leaks and the development of new methods to bypass authentication” – speaking to the grim reality of cybercrime’s ever-growing threat.

Ethical and Data Privacy Issues Around AI

AI serves up no small amount of controversy – particularly around the ethics of its widespread use, its role in facilitating big data mining, and the equally big privacy concerns AI throws up.

Machine learning models, for example, rely on an almost constant flow of data to thrive. This, in itself, raises some big questions: How is that data collected, stored, and used? Where – or, more to the point, who – is it coming from? Who has access to it – and for what purpose?

What’s more, AI’s high-powered algorithms can infer potentially sensitive information – such as a user’s geographic location, habits, online activity, demographics, or preferences – simply by analyzing their web traffic or device information. All this raises even more ethical questions about how organizations are harvesting data – and makes meaningful user consent around its collection extremely hard to establish.

Of the AIs we consulted, four out of five – all except Perplexity AI – mentioned data privacy as a top security threat to online users. Claude wrote, for example, that “with more devices and services connected to the internet than ever before, there is a greater risk of companies and governments collecting user data without consent. Things like location tracking, browser history monitoring, and backdoors in devices and apps all contribute.”

Llama and ChatGPT were both more vague. The former wrote: “With the increasing amount of personal data being collected, stored, and shared online, there is a growing concern about data privacy and the potential for misuse of this information”. The latter added, simply: “Companies and individuals alike must be vigilant in protecting data privacy” – shifting the onus of this responsibility onto the shoulders of people, rather than AI.

The staunchest critique of the internet’s data privacy threats, however, came from one AI in particular – Bard. Google’s AI tool was surprisingly (and somewhat ironically) comprehensive in its condemnation of data mining, as well as some of the less tangible threats internet users face in 2024: including surveillance, censorship, hate speech, and digital inequalities.

Bard on surveillance:

“The rise of surveillance technologies and the expansion of government and corporate data collection practices pose concerns for how personal information is collected and monetized.”

Bard on online hate speech and abuse:

“Online platforms containing hate speech and abuse create a toxic environment for users, which can have severe real-world consequences for marginalized communities and individuals targeted by online harassment.”

Bard on digital inequities:

“The digital divide – the gap between those with and without access to technology and the internet – persists. This inequality can limit access to education, healthcare, and economic opportunities, further widening existing social and economic disparities.”

Bard on internet censorship and freedom of expression:

“Users in many countries face challenges accessing information and expressing themselves freely online.”

(To find out which countries have the least free internets, why that is, and how their citizens are using VPNs to get by, explore our guide to VPN global usage in 2023.)

Bard on algorithmic bias and discrimination:

“Algorithms used by social media platforms, search engines, and other online services can perpetuate discriminatory biases based on race, gender, and other factors. This can lead to unfair outcomes in areas like job opportunities, access to credit, and criminal justice.”

Bard didn’t namecheck AI specifically on this last point, although it’s clear that these insights apply as much to AI algorithms as they do to the above. Plus, AI didn’t get a free pass: “While AI offers great potential for innovation and progress,” Bard wrote, “it also raises concerns about job displacement, ethical implications, and the potential for misuse by malicious actors.”

According to AIs, How Can Internet Users Stay Safe Online?

After getting each of our five AI language models’ take on the internet’s biggest threats to its users, we posed our follow-up question: what can those users do to protect themselves?

Here’s what our team – ChatGPT, Perplexity AI, Bard, Llama, and Claude – came up with.

Education Around New Hacking Methods

Each AI we consulted had something to say about the importance of education in being able to recognize, avoid, and mitigate against cyber attacks.

“Stay informed about the latest cybersecurity trends and threats,” encouraged ChatGPT. “Educate family members, especially children and the elderly, as they can be more vulnerable to certain types of scams.” Perplexity AI added some good points, imploring internet users to “regularly check for data breaches, and consider undergoing cyber awareness training to enhance your knowledge and vigilance.”

On top of education, all five AI models encouraged some form of healthy skepticism in the face of established and emerging phishing tactics. “Don’t click on suspicious links and attachments, or download apps from unknown sources,” wrote Bard. “Be cautious with emails, messages, and phone calls from suspicious sources,” ChatGPT added, “and always verify the authenticity of requests for personal information.”

Creating Strong, Unique Passwords

All five AIs stressed the importance of creating strong, unique passwords.

Remember how easy they are for AI to guess? Take a look at this guide, adapted from Home Security Heroes, to inform how you create your next password.

| Number of characters | Numbers only | Lowercase letters | Upper and lowercase letters | Numbers, upper and lowercase letters | Numbers, upper and lowercase letters, symbols |
|---|---|---|---|---|---|
| 4 | Instantly | Instantly | Instantly | Instantly | Instantly |
| 5 | Instantly | Instantly | Instantly | Instantly | Instantly |
| 6 | Instantly | Instantly | Instantly | Instantly | 4 seconds |
| 7 | Instantly | Instantly | 22 seconds | 42 seconds | 6 minutes |
| 8 | Instantly | 3 seconds | 19 minutes | 48 minutes | 7 hours |
| 9 | Instantly | 1 minute | 11 hours | 2 days | 2 weeks |
| 10 | Instantly | 1 hour | 4 weeks | 6 months | 5 years |
| 11 | Instantly | 23 hours | 4 years | 38 years | 356 years |
| 12 | 25 seconds | 3 weeks | 289 years | 2,000 years | 30,000 years |

On top of this, three of the AIs we chatted to – ChatGPT, Claude, and Llama – all suggested using a password manager. Password managers help you generate, store, share, and sync your passwords across different platforms and devices. Many include password auditing and auto-fill features, and encrypt your data – so even if the password manager’s database is compromised by hackers, your information will remain safe.
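
Under the hood, that encryption typically means deriving a key from your master password and using it for authenticated encryption. Here’s a simplified, illustrative sketch using Python’s cryptography library – the password, iteration count, and vault entry are examples, not any provider’s actual scheme:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

master_password = b"correct horse battery staple"  # example only

# Derive a 32-byte key from the master password; the salt is stored
# alongside the vault and need not be secret.
salt = os.urandom(16)
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(master_password))

# Encrypt a vault entry; only the master password can recover it.
vault = Fernet(key)
token = vault.encrypt(b"my-banking-password")  # ciphertext stored on disk/cloud
print(vault.decrypt(token))
```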

To learn more about password managers and compare the top providers on the market, immerse yourself in our guide to the best password managers in 2024.

Implementing Multi-Factor Authentication (MFA)

Next to strong, unique passwords, the five AI language models we turned to only had one other online security tip in common: multi-factor authentication.

As ChatGPT explains, MFA “adds an extra layer of security beyond a password, typically involving something you know (a password), something you have (a phone or security key), or something you are (biometric verification).” It means that, even if your password is compromised, a hacker will still be unable to access your online accounts without also having access to your devices.
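
The six-digit codes generated by authenticator apps – the most familiar “something you have” – come from the TOTP algorithm (RFC 6238). Here’s a compact, illustrative Python version; the secret below is a made-up example, as real services issue one per user at enrollment:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password per RFC 6238."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)  # 30s time step
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # six-digit code, rotates every 30 seconds
```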

On that note, Perplexity AI recommends “securing your devices with features like device encryption and remote wipe.” Bard suggests “keeping your operating systems, applications, and web browsers updated with the latest security patches to fix vulnerabilities,” as well as “protecting your devices through regular scans and reputable security software.”

Protecting Your Home Network

There are several things you can do to secure your home’s network, including:

  • Setting a strong wifi password and enabling WPA3 encryption
  • Segmenting your devices into different networks – i.e. upstairs, downstairs, guest – to minimize the impact of a potential breach
  • Activating the built-in firewall on your router

ChatGPT’s main advice? To use a VPN (Virtual Private Network) – especially if you’re using public wifi. VPNs funnel your traffic through a secure, encrypted tunnel: masking your IP address with a completely different one to let you browse anonymously. Claude also emphasized the importance of a VPN to prevent “snooping or eavesdropping” – or from being, as Llama puts it, “intercepted or accessed by unauthorized parties” – a point particularly pertinent in countries with low levels of internet freedom and extensive online censorship.

For its part, Claude highlighted the need for a “proactive, instead of reactive” approach to securing your home network: “updating hardware, software, passwords, and settings.”

Which leads us to our next point…

Keeping Software Updated

ChatGPT, Claude, and Llama all raised good points about installing – and updating – robust antivirus and anti-malware solutions.

“This software should include real-time scanning, firewall protection, and ransomware protection”, said ChatGPT, with Claude adding browsing protection to the list of must-have features. Antivirus software can also detect and block phishing attempts, and stop suspicious or irritating ads from clogging your feed – if you select the right provider.

To learn more, dive straight into our list of the top 10 best antivirus software providers.

Staying Safe Online: Scattered Observations

The five AI language models we chatted to had no shortage of tips for helping you safeguard your online activity as you browse.

Here are a few that didn’t fit a particular theme, but are well worth including:

  • Regularly review the permissions you grant to mobile apps and browser extensions, limiting their access to personal information and device functionality (ChatGPT)
  • Identify deepfakes by looking for inconsistencies in video or audio quality, lip movements not matching speech, and unrealistic facial expressions (Bard)
  • Think critically about online information sources and check facts before accepting something as true or spreading potential misinformation. (Claude)
  • Regularly back up your important data to a secure location, such as an external hard drive or cloud storage service, in case your computer or data is compromised (Llama)
  • Avoid oversharing online, on social media, and other platforms. Oversharing can make you a target for identity theft and social engineering scams (ChatGPT).

Conclusion: How Do the AIs Feel?

Here, we’ve asked AIs for their take on the internet’s most perilous threats to its users – and the chief mitigations against them.

But we also asked them one more question: “how do you feel about AI being a threat to internet users?” The results were almost unanimous.

“As an AI, I don’t have personal feelings or opinions,” replied ChatGPT. Bard, agreeing, wrote “I don’t have the capacity to feel emotions like fear or concern”, while you could almost feel the digital shrug of the shoulders accompanying Llama’s response – “I’m just an AI”.

Humorous, but unhelpful. Yet in some ways, the responses this last question elicited are the most instructive. Why? Because they remind us that, as much as we’d like to characterize AI as human (and at times, these AI language models do sound human), it’s simply not.

AI is a tool to be wielded by humans, for humans, to achieve aims decided by humans. Through that lens, it can be classified as both a help and a hindrance, and neither good nor evil – merely a tool (however advanced) to forward inherently human goals.

Think of AI in cybersecurity like the Force in the Star Wars films – not inherently good or bad, simply dictated by both the skill and the intentions of those wielding it.

And, like the epic struggle depicted in that saga, AI looks set to play a starring role in our own fight: the ongoing battle for the destiny of the internet – and our collective future.
