In the fight against cybercrime, AI is both a rising threat and a potential new ally.
While large language models (LLMs) can blur the lines between authentic users and imposters, they can also help CSOs find hidden vulnerabilities and predict where the next attack might land.
We examine AI’s cyber-duality and look at how LLMs are being used for both bad – and good.
Key Takeaways
- For security professionals, AI’s impact has both negatives and positives.
- Hackers are using GenAI to power up their exploits and adapt their techniques.
- Yet LLMs are also giving security operations center (SOC) teams new capabilities for incident response and vulnerability management.
- AI’s good/bad dichotomy is another factor CSOs need to weigh as they evolve cybersecurity infrastructure for future threats.
- Those could include emerging developments in affective computing and embodied AI, which experts believe we need to watch closely to ensure they can’t be weaponized by bad actors.
Friend, Enemy, or Both?
Every tech breakthrough brings positives and negatives. An application promising greater productivity could also be ripe for weaponization, with generative AI providing the perfect example. Its ability to sort through massive amounts of data and quickly recognize useful patterns is changing how we work and create. GenAI is also helping hackers mount attacks that weren’t possible before.
This dark AI is adept at learning and adapting its techniques to breach security systems. Where good AI is used to improve decision-making or automate complex tasks, dark AI uses its powers for evil, altering data, infiltrating systems, and conducting cyberattacks.
Bad actors have already built dark AI tools that exploit network and device vulnerabilities at speed, going unnoticed until the damage is already done.
AI’s Darkside
Zendata CEO Narayana Pappu told Techopedia that hard-to-detect deepfakes are one serious concern. Gartner reckons that deepfakes will become so convincing within the next two years that 30% of enterprises will no longer consider facial biometrics reliable on their own for identity verification and authentication.
Meanwhile, GenAI tools designed for hacking are “adapting to security systems and can learn from the results of other attacks, allowing malware to dynamically adjust tactics during an attack based on real-time analysis of the target’s defenses.”
Together, they’re enabling hackers to operate at a scale not seen before. Last week, BT said it was capturing roughly 2,000 signals per second across its networks indicating potential cyberattacks. This points to an ‘AI arms race’ as increasingly sophisticated hackers seek to out-compete one another in identifying new exploits. In the past year, BT engineers have seen a meteoric 1,200% increase in scanning bots attempting to access its systems.
However, AI tools don’t have to be weaponized to assist in criminal activity; their analytical capabilities alone give attackers ample advantages.
Stephen Kowski, Field CTO at SlashNext Email Security, told Techopedia that while direct AI involvement in attacks grabs headlines, “the real threat lies in AI’s ability to scale and refine existing attack patterns.
“We’re seeing a surge in multichannel attacks that don’t rely on advanced AI for execution but benefit from its creative input in design,” he says. “This indirect use of AI allows cybercriminals to craft more convincing, widespread, and adaptable attack campaigns.”
How Cybercriminals Use AI
AI for Good
Dan Ortega, Security Strategist at Anomali, told Techopedia that the proliferation of AI is making the threat landscape “increasingly complex and difficult to manage and track. Security teams now need to rethink what tools they’re using to gather and analyze threat data.”
He says AI can add capabilities in the security operations center (SOC), “taking the load off of security analysts by providing the ability to continuously gather and analyze data from across the IT environment, to flag threats before they can create damage.”
SlashNext’s Kowski says this is where AI’s true potential in cybersecurity lies – in enhancing human expertise rather than replacing it.
“By automating routine tasks and providing rapid threat analysis, AI empowers security teams to focus on strategic decision-making and complex problem-solving. The most effective security strategies combine AI-driven insights with human intuition and experience to create a robust defense against evolving threats.”
Eric Schwake, Director of Cybersecurity Strategy at Salt Security, told Techopedia that current cybersecurity tools “can be improved with AI to quickly filter through anomalous traffic to find what’s truly malicious, automate incident response, and predict future attacks.”
That includes SIEM, forensic investigation and endpoint detection solutions, as security vendors rush to meet AI-driven threats with AI-powered responses.
But Schwake also notes that a new class of AI-driven technologies is emerging, “specifically designed for security use cases like code analysis, threat intelligence, and deception technologies.”
Analytics sit at the core of these platforms, which take the convergence of data, security, and IT and blend in GenAI and workflow automation to stop attacks before they can do damage.
The Soft Underbelly of LLMs
Complicating the picture is the fact that GenAI has its own unique vulnerabilities that hackers are actively probing. Many are accessible to non-technical actors, creating a new category of ‘layman’ cybercriminals who can mount attacks they would otherwise struggle to execute.
Tim Ayling, VP Cyber Security Solutions EMEA for Imperva, agrees that GenAI tools lower the bar for entering the hacking world.
“We’ve observed instances where comments about specific exploits posted online are picked up by these tools during web crawling and then built upon,” he said.
The handling of sensitive information by AI systems presents another opportunity for cybercriminals. Ayling added:
“With the vast amounts of data processed, there’s a risk that sensitive data could be inadvertently revealed. Threat actors might trick AI systems into sharing confidential data, such as customer records or trade secrets, leading to privacy breaches, legal issues, and reputational damage.”
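One common defensive pattern against this kind of leakage is to screen model outputs for sensitive data before they reach the user. The Python sketch below illustrates a minimal redaction filter of this kind; the patterns and the `redact` helper are illustrative assumptions, not any vendor’s implementation, and real data-loss-prevention rules are far broader.

```python
import re

# Hypothetical patterns for illustration only; production DLP rule sets
# cover many more data types and formats.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(llm_output: str) -> str:
    """Replace anything matching a sensitive pattern before
    returning the model's output to the user."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        llm_output = pattern.sub(f"[REDACTED:{label}]", llm_output)
    return llm_output
```

A filter like this runs as a post-processing step on every response, so even a successfully manipulated model cannot hand raw records back to an attacker, though pattern-based screening is only one layer of a defense-in-depth approach.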
Top 3 Future Threats
While developers work to patch up GenAI’s in-built security weaknesses, a new set of cyber worries is on the horizon. Experts point to these emerging AI or ‘AI-adjacent’ trends as the ones to watch:
1. Agentic AI
The proliferation of GenAI tools will eventually create an Internet of Agents (IoA): a deeply integrated network of AI-to-AI applications, or ‘agents’, that interact directly with each other, completing tasks and executing complex, multi-step transactions with minimal human intervention.
2. Embodied AI
On a similar theme, anyone watching the video for the recent launch of NVIDIA’s AI-driven robotics platform will be struck by the demo robots and their ability to interact with the physical world using fine motor skills. Robots are being designed to interact with other robots, LLMs, and humans.
The potential for a physically strong, fast, and independently mobile robot to be hacked and zombified by a threat actor won’t be lost on anyone.
3. Affective Computing
Academics are making progress in creating more personalized and intuitive interactions between people and software. In the simplest terms, affective computing aims to create machines that understand and respond to human emotions – naturally making them harder to distinguish from flesh-and-blood Homo sapiens.
We’re already seeing deepfakes capable of tricking accountants into making bogus international cash transfers. Imagine what else fraudsters could perpetrate with more-human-than-human technologies at their disposal.
The Evolving Role of AI in Cybersecurity
In the past, developing sophisticated exploits needed budgets, brainpower, and resources only nation-state actors could provide. Today, AI is handing those capabilities to a wider array of criminal gangs and fraudsters. Despite safeguards by GenAI leaders like OpenAI to stop the spread of potentially harmful information, hackers keep discovering new ways to get around them.
Zendata’s Pappu says one result of GenAI being harnessed for cybercrime is that security teams are ‘drowning in noise’ from continual system alerts warning of potential threats. Amid a growing volume of signals and false positives, much of it generated by dark-AI-driven attacks, AI and machine learning technologies offer potential relief from an overwhelming workload.
AI-powered cyber defenses can scan network activity, pinpoint anomalies, and prioritize alerts. They can pore over huge volumes of data, identify suspicious content, and isolate threats. They also promise to free senior SOC analysts to focus on strategic activities like coordinating damage control and directing tactical offensive operations.
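As a simplified illustration of the anomaly-spotting step, the Python sketch below flags intervals whose event volume deviates sharply from the baseline. It uses a robust median-based score rather than a plain average, so a single massive spike can’t mask itself by inflating the baseline; the function name, sample data, and threshold are illustrative assumptions, not any vendor’s implementation.

```python
from statistics import median

def flag_anomalies(event_counts, threshold=3.5):
    """Return indices of intervals whose event count deviates sharply
    from the median, using the modified z-score (median absolute
    deviation) so outliers don't distort the baseline."""
    med = median(event_counts)
    abs_dev = [abs(c - med) for c in event_counts]
    mad = median(abs_dev)
    if mad == 0:
        return []  # flat traffic: nothing stands out
    # 0.6745 scales MAD to be comparable to a standard deviation
    return [
        i for i, c in enumerate(event_counts)
        if 0.6745 * abs(c - med) / mad > threshold
    ]

# Example: a sudden spike in connection attempts per minute.
counts = [102, 98, 110, 95, 105, 101, 97, 2400, 99, 103]
print(flag_anomalies(counts))  # → [7]
```

Production systems layer far more signals (source reputation, asset criticality, correlation across sensors) on top of simple statistics like this, but the principle – surface the few intervals that deviate from baseline and suppress the rest – is what lets analysts cut through the noise.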
The Bottom Line: Building Resilient, Multi-Layered Security
SlashNext’s Stephen Kowski says that, for now, “CSOs should prioritize addressing current risks amplified by AI’s supporting role in attack design.
“The immediate concern is less focused on autonomous AI agents, but rather on the enhanced creativity and efficiency AI lends to human attackers. Focusing on building resilient, multi-layered security systems that can adapt to diverse attack vectors is crucial in this evolving threat environment.”
FAQs
How is AI a threat to cyber security?
How is AI revolutionizing cyber security?
Can AI replace cybersecurity professionals?
How are malicious actors using AI?
How much demand is there for AI-powered cybersecurity tools?
References
- Narayana P. – Zendata | LinkedIn (LinkedIn)
- Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’ | CNN (CNN)
- Cybersecurity and AI: Enabling Security While Managing Risk (Gartner)
- BT spots 2,000 signals of potential cyber attacks every second, as TV’s Hunted star warns of “AI arms race” (BT Newsroom)
- J Stephen Kowski, MSEE, JD – SlashNext | LinkedIn (LinkedIn)
- Dan Ortega – Anomali | LinkedIn (LinkedIn)
- Eric Schwake, CISSP – Salt Security | LinkedIn (LinkedIn)
- Tim Ayling – Imperva | LinkedIn (LinkedIn)
- This Robots Will Change The World | Nvidia Event 2024 (YouTube)
- Blade Runner – More Human Than Human (YouTube)
- AI and Cybersecurity: A New Era (Morgan Stanley)