How the Two Faces of AI Are Forcing a Cyber Re-Think

In the fight against cybercrime, AI is both a rising threat and a potential new ally.

While large language models (LLMs) can blur the lines between authentic users and imposters, they can also help CSOs find hidden vulnerabilities and predict where the next attack might land.

We examine AI’s cyber-duality and look at how LLMs are being used for both bad – and good.

Key Takeaways

  • For security professionals, AI’s impact has both negatives and positives.
  • Hackers are using GenAI to power up their exploits and adapt their techniques.
  • Yet LLMs are also giving security operations center (SOC) teams new capabilities for incident response and vulnerability management.
  • AI’s good/bad dichotomy is another factor CSOs need to weigh as they evolve cybersecurity infrastructure for future threats.
  • Those could include emerging developments in affective computing and embodied AI, which experts believe we need to watch closely to ensure they can’t be weaponized by bad actors.

Friend, Enemy, or Both?

Every tech breakthrough brings positives and negatives. An application promising greater productivity could also be ripe for weaponization, with generative AI providing the perfect example. Its ability to sort through massive amounts of data and quickly recognize useful patterns is changing how we work and create. GenAI is also helping hackers mount attacks that weren’t possible before.

This dark AI is adept at learning and adapting its techniques to breach security systems. Where good AI is used to improve decision-making or automate complex tasks, dark AI uses its powers for evil, altering data, infiltrating systems, and conducting cyberattacks.

Bad actors have already built dark AI tools that exploit network and device vulnerabilities at speed, going unnoticed until the damage is already done.

AI’s hype and promise in cybersecurity is balanced by trepidation and risk. Source: Gartner

AI’s Dark Side

Zendata CEO Narayana Pappu told Techopedia that hard-to-detect deepfakes are one serious concern. Gartner reckons that deepfakes will become so convincing in the next two years that 30% of enterprises will stop using facial biometrics altogether for identity verification and authentication.

Meanwhile, GenAI tools designed for hacking are “adapting to security systems and can learn from the results of other attacks, allowing malware to dynamically adjust tactics during an attack based on real-time analysis of the target’s defenses.”

Together, they’re enabling hackers to operate at a scale not seen before. Last week, BT said it was capturing roughly 2,000 signals per second across its networks indicating potential cyberattacks. This points to an “AI arms race” as increasingly sophisticated hackers seek to out-compete one another in identifying new exploits. In the past year, BT engineers have seen a meteoric 1,200% increase in scanning bots attempting to access its systems.

However, AI tools don’t have to be weaponized to assist in criminal activity; their analytical capabilities alone offer attackers ample benefits.

Stephen Kowski, Field CTO at SlashNext Email Security, told Techopedia that while direct AI involvement in attacks grabs headlines, “the real threat lies in AI’s ability to scale and refine existing attack patterns.

“We’re seeing a surge in multichannel attacks that don’t rely on advanced AI for execution but benefit from its creative input in design,” he says. “This indirect use of AI allows cybercriminals to craft more convincing, widespread, and adaptable attack campaigns.”

How Cybercriminals Use AI

Biggest Threats of AI for Cybersecurity

Automating Attacks
Sophisticated cyberattacks normally require a human hacker to drive them forward. GenAI-powered tools let adversaries automate how attacks are executed and respond to obstacles in real time.
Gathering Data More Efficiently
Every cyberattack starts with a research phase, where bad actors scan for exploitable vulnerabilities and look for information assets worth stealing. AI can accelerate this reconnaissance phase by doing the heavy lifting of identifying potential targets and improving the accuracy of analysis.
Customizing Messages
GenAI routinely scrapes massive amounts of personal data from public sources like company websites and social media accounts. Cybercriminals can turn that into a dark AI capability by crafting hyper-personalized messages that make phishing emails and calls more plausible and effective (a defensive counter-sketch follows this list).
Reinforcement Learning
Large language models are designed to learn and adapt in real time, a capability that threat actors can use to identify the most effective break-in techniques or the best ways to avoid detection.
Targeting Employees
AI tools can help identify the most high-value targets inside a business. They might be executives, people with access to proprietary data, or IT staff with wide system access. AI can also be used to find weak links – employees with less technical awareness or who exhibit unsafe behaviors.

AI for Good

Dan Ortega, Security Strategist at Anomali, told Techopedia that the proliferation of AI is making the threat landscape “increasingly complex and difficult to manage and track. Security teams now need to rethink what tools they’re using to gather and analyze threat data.”

He says AI can add capabilities in the security operations center (SOC), “taking the load off of security analysts by providing the ability to continuously gather and analyze data from across the IT environment, to flag threats before they can create damage.”

Impact of GenAI for CISOs. Source: Gartner

SlashNext’s Kowski says this is where AI’s true potential in cybersecurity lies – in enhancing human expertise rather than replacing it.

“By automating routine tasks and providing rapid threat analysis, AI empowers security teams to focus on strategic decision-making and complex problem-solving. The most effective security strategies combine AI-driven insights with human intuition and experience to create a robust defense against evolving threats.”

Eric Schwake, Director of Cybersecurity Strategy at Salt Security, told Techopedia that current cybersecurity tools “can be improved with AI to quickly filter through anomalous traffic to find what’s truly malicious, automate incident response, and predict future attacks.”

That includes SIEM, forensic investigation and endpoint detection solutions, as security vendors rush to meet AI-driven threats with AI-powered responses.

But Schwake also notes that a new class of AI-driven technologies is emerging, “specifically designed for security use cases like code analysis, threat intelligence, and deception technologies.”

Analytics sit at the core of these platforms, which take the convergence of data, security, and IT and blend in GenAI and workflow automation to stop attacks before they can do damage.

The Soft Underbelly of LLMs

Complicating the picture is the fact that GenAI has its own unique vulnerabilities that hackers are actively probing. Many are accessible to non-technical actors, creating a new category of ‘layman’ cybercriminals who can mount attacks they would otherwise struggle to execute.

These can include:

Prompt Injection Attacks
An attack that occurs when malicious instructions are smuggled into an otherwise benign GenAI prompt to shape the model’s output for malicious purposes (a minimal detection sketch follows this list).
Jailbreaking
A form of prompt injection aimed at undermining LLM-driven chatbots. A prompt is crafted with instructions intended to disable the LLM’s safety and moderation features.
Poisoning Training Data
Corrupting an LLM’s training data to create malicious outputs. This could involve inserting manipulated or incorrect data into a model’s training dataset to alter its behavior.
LLM Supply Chain Attacks
Where criminals target and compromise third-party software libraries, dependencies, and development tools used to create GenAI tools.
Crescendo Attacks
A newer form of attack that borrows ideas from social engineering. It gets around LLM safety measures by starting with benign prompts and then gradually adding injections, without triggering defenses.

Tim Ayling, VP Cyber Security Solutions EMEA for Imperva, agrees that GenAI tools lower the barrier to entry into the hacking world.

“We’ve observed instances where comments about specific exploits posted online are picked up by these tools during web crawling and then built upon,” he said.

The handling of sensitive information by AI systems presents another opportunity for cybercriminals. Ayling added:

“With the vast amounts of data processed, there’s a risk that sensitive data could be inadvertently revealed. Threat actors might trick AI systems into sharing confidential data, such as customer records or trade secrets, leading to privacy breaches, legal issues, and reputational damage.”

Top 3 Future Threats

While developers work to patch up GenAI’s in-built security weaknesses, a new set of cyber worries is on the horizon. Experts point to these emerging AI or ‘AI-adjacent’ trends as the ones to watch:

1. Agentic AI

The proliferation of GenAI tools will eventually create an Internet of Agents (IoA), a deeply integrated network of AI-to-AI applications or ‘agents’ that interact directly with each other, completing tasks and executing complex, multi-step transactions with minimal human intervention.

The risk is that an AI tool tasked with, say, managing stock portfolios, making travel arrangements, or executing marketing campaigns could be compromised by third parties.

2. Embodied AI

On a similar theme, anyone watching the video for the recent launch of NVIDIA’s AI-driven robotics platform will be struck by the demo robots and their ability to interact with the physical world using fine motor skills. Robots are being designed to interact with other robots, LLMs, and humans.

The potential for a physically strong, fast, and independently mobile robot to be hacked and ‘zombified’ by a threat actor won’t be lost on anyone.

3. Affective Computing

Academics are making progress in creating more personalized and intuitive interactions between people and software. In the simplest terms, affective computing aims to create machines that understand and respond to human emotions – naturally making them harder to distinguish from real flesh-and-blood Homo sapiens.

We’re already seeing deepfakes capable of tricking accountants into making bogus international cash transfers. Imagine what else fraudsters might perpetrate with more-human-than-human technologies at their disposal.

The Evolving Role of AI in Cybersecurity

In the past, developing sophisticated exploits needed budgets, brainpower, and resources only nation-state actors could provide. Today, AI is handing those capabilities to a wider array of criminal gangs and fraudsters. Despite safeguards by GenAI leaders like OpenAI to stop the spread of potentially harmful information, hackers keep discovering new ways to get around them.

Zendata’s Pappu says one result of GenAI being harnessed for cybercrime is that security teams are ‘drowning in noise’ from continual system alerts warning of potential threats. Amid a growing volume of signals and false positives – much of that noise itself dark-AI-driven – AI and machine learning technologies offer potential relief from an overwhelming workload.

AI-powered cyber defenses can scan network activity, pinpoint anomalies, and prioritize alerts. They can pore over huge volumes of data, identify suspicious content, and isolate threats. They also promise to leave senior SOC analysts free to focus on strategic activities like coordinating damage control and directing tactical offensive operations.

The Bottom Line: Building Resilient, Multi-Layered Security

SlashNext’s Stephen Kowski says that, for now, “CSOs should prioritize addressing current risks amplified by AI’s supporting role in attack design.

“The immediate concern is less focused on autonomous AI agents, but rather on the enhanced creativity and efficiency AI lends to human attackers. Focusing on building resilient, multi-layered security systems that can adapt to diverse attack vectors is crucial in this evolving threat environment.”

Mark De Wolf
Technology Journalist

Mark is a freelance tech journalist covering software, cybersecurity, and SaaS. His work has appeared in Dow Jones, The Telegraph, SC Magazine, Strategy, InfoWorld, Redshift, and The Startup. He graduated from the Ryerson University School of Journalism with honors where he studied under senior reporters from The New York Times, BBC, and Toronto Star, and paid his way through uni as a jobbing advertising copywriter. In addition, Mark has been an external communications advisor for tech startups and scale-ups, supporting them from launch to successful exit. Success stories include SignRequest (acquired by Box), Zeigo (acquired by Schneider Electric), Prevero (acquired…