Study Shows 70% of Security Teams Misuse AI – Do You, Too?

Why Trust Techopedia

Organizations — and especially cybersecurity and compliance teams — are rapidly understanding that adopting generative AI is unlike any other technology — presenting unique risks and threats that are challenging to understand in depth.

As AI big tech companies continue to push performance, releasing more powerful versions one after the other, with the latest move by OpenAI and its rollout of ChatGPT 4o, studies warn that a perfect artificial intelligence security storm is approaching.

Techopedia talked to experts to understand what security professionals struggle to understand, how leaders should be working to solve this knowledge gap, and whether AI frameworks and laws help.

Key Takeaways

  • Security teams are quickly adopting Generative AI (GenAI) tools, but 70% lack a full grasp of the risks involved. This knowledge gap creates vulnerabilities cybercriminals could exploit.
  • “Shadow IT” with undeclared GenAI use and the opaque nature of AI algorithms make it difficult for security teams to assess risks and trust AI outputs.
  • Aggressive marketing by vendors can make it hard to distinguish valuable AI features from gimmicks, leading to rushed deployments without proper testing or configuration.
  • Leaders face a choice: upskill existing staff, hire new AI experts (which is challenging and expensive), or outsource specific projects. A combination of approaches may be necessary.
  • While frameworks like NIST AI RMF exist to manage AI risks, many teams lack awareness. Additionally, 34% of organizations lack a GenAI policy altogether.

Is AI Innovation Compromising Security?

Splunk’s latest research, The State of Security 2024: The Race to Harness AI, explores a balance between harnessing the potential of generative AI and mitigating its emerging threats in a fast-paced development environment.

The study found that 91% of security teams are using generative AI, but 70% of professionals don’t fully understand its implications.

According to Splunk’s latest research, most (86%) say their company will shift budgets to prioritize meeting compliance regulations over security best practices. About half (48%) have experienced cyber extortion, making the attack technique more popular than ransomware itself.

Advertisements

Fast Adoption and Shadow AI

Kevin Breen, Senior Director of Cyber Threat Research at Immersive Labs, spoke to Techopedia about why security teams do not understand AI technology in detail.

“The fast adoption of generative AI by organizations has security and risk teams racing to keep pace.”

Breen explained that for organizations that rely on platform-as-a-service (PaSS) and software-as-a-service (SaSS), “Shadow AI” has become a real problem. This is because GenAI is often enabled without warning in many online services, and an organization’s data can be processed by a service provider, like OpenAI or Anthropic, where they have not been declared a data processor.

“Secondly, (there) is a lack of education and understanding on how GenAI works and how data is processed.

“Do your developers understand what information is sent as part of the context window? Adding Function Calling to a chatbot for example could lead to SQL Injection and in extreme cases, this could lead to code execution.”

The Black Box Nature of GenAI

Cache Merrill, CTO at Zibtek, a company that integrates AI within their security strategies, told Techopedia that the main struggle for security professionals often lies in the complexity and opacity of AI algorithms.

“While AI can process and analyze data at an unprecedented scale and speed, understanding how it arrives at certain conclusions or predictions is not always straightforward.”

Merrill explained that this “black box” nature of AI systems makes it difficult for security teams to fully trust or comprehend the risk assessments and decisions made by AI, particularly in generative AI models that can create new data sets.

When Hype Sets the Tone for Security

Erich Kron, Security Awareness Advocate at KnowBe4 — a security culture and anti-phishing company — spoke about how the AI hype influences the situation.

“There has been a push by marketing departments to push AI-related features in products as much as possible. The excessive use of the AI buzzword makes it very difficult for security professionals to understand what is actually helpful and what is snake oil.

 

“This push by vendors often results in products being released without the proper testing as they push to keep up with their competition.”

Kron explained that the addition of AI components to existing tools and software may also mean that security professionals must take time to learn what these new AI features do, and what their limitations are. “In the event there is no time to learn, the features may be deployed without being properly configured,” Kron said.

Apparent Simple AI Questions Perplex Security Teams

Ryan Smith, cybersecurity and AI expert, and founder of QFunction — a company helping organizations strengthen their security posture with customized AI solutions — said AI cybersecurity spawned overnight challenging teams.

“We’re at a point now where every security vendor is marketing AI towards cybersecurity, and a lot of cybersecurity professionals don’t understand the basics of AI and its usage, let alone GenAI and how it can apply towards cybersecurity in general.”

Smith said that teams may struggle with apparent simple questions that require extensive knowledge, such as “What is AI and how does it work?” or “Will AI make my job easier? If so, how?”.

“Until these questions are asked and answered, cybersecurity professionals will continue to not fully understand its implications.”

“Cybersecurity professionals already have competing priorities in their daily work, and now they have to worry about AI and its implications,” Smith said.

“What makes this worse is that they’re expected to immediately be up to speed on AI as well as deal with cybersecurity vendors who are marketing AI in all of their products.”

Integrating AI into SOC Operations

Dr. Richard Searle, Chief AI Officer at Fortanix — a global data security company — told Techopedia that security professionals and AI developers possess discrete skills, and while professionals may have an awareness, and, perhaps, a basic level of expertise, in the other’s field, in general, they lack the deep knowledge gained through training and experience in the other domain.

“This creates a source of risk in the rapid adoption of GenAI technology within cybersecurity tools, as the characteristics of these sophisticated AI systems may not be fully appreciated by security teams, and the intrinsic security of GenAI systems is known to be poor — as evidenced by reported data and intellectual property breaches related to use of generative AI models.

“Specifically, security professionals are failing to conceptualize the nature of GenAI systems that require plaintext inputs, yielding probabilistic outputs, sometimes derived (or hallucinated) by the model in the absence of relative certainty,” Dr. Searle said.

“The requirement for plaintext interaction with large language models (LLMs) removes the possibility of encoding data by encryption or tokenization — since the model cannot comprehend such data unless it is similarly encoded within the training dataset.”

Dr. Searle explained that real-time adaptation processes in response to observed prompts, contextual data, or feedback from responses, and the nature of the prompts entered by human and machine users, all represent new sources of intelligence and systemic vulnerability that could be leveraged by cyber threat actors.

“Indeed, the implementation of GenAI within critical systems such as the Security Operations Center (SOC) has the potential to both mislead or misdirect security teams due to erroneous model behavior and guide cyber threat actors in the prosecution of attacks.”

What Should Leaders Do? Upskill, Hire, or Outsource?

Inevitably, GenAI has not only transformed the tech industry and the world but has created numerous shifts in the job market. A report from the Institute for Public Policy Research (IPPR) found that in the U.K. alone up to 8 million jobs are at risk due to AI.

The 2024 Microsoft Work Trends Index shows that about half of professionals (45%) worry AI will replace their jobs, with the majority (55%) of leaders expressing concern about a lack of talent to fill roles.

Techopedia asked experts what leaders should be focusing on, upskilling workers, hiring new AI experts, or outsourcing projects.

Merrill from Zibtek said leaders should consider a combination of training existing staff and hiring new AI experts.

“Upskilling the current team is crucial to ensure they can effectively work with AI tools and understand their outputs. However, the specialized knowledge required to design and maintain these systems often necessitates bringing in new talent with expertise in AI and machine learning.

 

“Outsourcing can be a solution for specific projects but having in-house AI expertise is beneficial for ongoing risk management and innovation.”

Kron from KnowBe4 said upskilling and training staff is essential. “Leaders should certainly be making plans to train and upskill staff as more tools and features based on AI are being released,” Kron said.

“These skills will be critical for proper configuration and troubleshooting of the tools and understanding the limitations of the technology, risks associated with using it, and best practices for security.”

Hiring AI Experts is Challenging and Costly

Dr. Searle from Fortanix said that hiring new AI experts is challenging, as the technology is new, and qualified AI personnel remain scarce.

“Upskilling is vital not only within the security profession but across all areas of business operations. AI will impact every aspect of the future economy, and this will place increased emphasis on the security of technology adoption within the organization and broader supply chain.”

Dr. Searle spoke about foundational training programs, available by subscription or free, of charge that reputable academic institutions and leading AI organizations offer. These can be leveraged by companies and used to upskill their workforces. But, Dr. Searle said the focus should be on the top rank.

“Particular urgency should be shown by CISO and CSO executives, with many organizations now implementing the role of Chief AI Officer to bridge established security responsibilities and the new demands of generative AI systems.”

Breen from Immersive Labs agreed that finding AI experts is challenging and costly.

“If you are missing specific expertise, like a developer with experience integrating third-party APIs, then hire in these positions but you may want to consider upskilling existing staff who already understand your business given the nuances of GenAI,” Breen said.

Breen added that once this is complete, leaders should deploy an education and upskilling plan for all level workers.

“For developers, understanding data flows and how the context is created and passed, is key to protect against attacks like Prompt Injection or leaking data from vector databases in RAG [retrieval augmented generation] setups,” Breen said addressing security risks of the AI supply chain.

AI Frameworks: Awareness Gaps and Added Complexities

The Splunk report State of Security 2024 claims that GenAI policy is uncharted territory. According to the report, 34% of organizations do not have a generative AI policy in place, despite its high adoption rate. The report presents risky philosophies as innovative.

“’Move fast and break things’ might sound counterintuitive to most security practitioners, but it could be the right philosophy as organizations seek innovation at speed.”

Despite GenAI being relatively new to the show, its operations are bound by several laws including the European Union’s AI Act that introduced a common regulatory framework based on risk categories. In the U.S. the AI Bill of Rights proposes users be notified when they are communicating with an automated system, and allowed to opt out.

Additionally, several organizations released AI frameworks including the NIST AI Risk Management Framework. But this rise in new frameworks and policies is challenging teams, with 45% saying “better alignment with compliance requirements” is a top area for improvement. 

Merrill from Zibtek said that the NIST AI RMF is particularly “commendable because it provides a structured and flexible approach to managing risks associated with AI systems”.

“It emphasizes trustworthiness, which includes the security, explainability, and accountability of AI systems. Adopting such a framework helps in making the AI’s decision-making processes more transparent and auditable, which is crucial for security applications.”

As Breen from Immersive Labs told Techopedia, many frameworks have been popping up to try to match GenAI’s sudden appearance.

“No one framework fits all teams, which makes picking “the best” a difficult task. Each framework is still fairly early in its maturity, so expect them to update frequently over the next 12 months.”

Breen says that the NIST framework is suitable for risk, compliance, and architecture teams. Breen also mentioned the OWASP top 10 for LLMs and the MITRE ATLAS “a powerful resource for security teams, especially ones that are already using MITRE ATT&CK”.

Smith from QFunction added that while GRC teams might be familiar with AI risk management frameworks, most security experts are still in the dark.

“I would be surprised if the majority of cybersecurity professionals outside of GRC teams are familiar with it.”

The Bottom Line

Overburdened security teams are pressured to adopt Generative AI (GenAI) despite facing a relentless stream of high-priority threats and a rapidly evolving security landscape. This lack of bandwidth, coupled with a limited understanding of GenAI’s inherent risks, creates a potential vulnerability that cybercriminals could exploit.

Security teams must continue to evolve as AI technologies do, ensuring they can not only utilize AI effectively but also mitigate any new risks it introduces.

Advertisements

Related Reading

Related Terms

Advertisements
Ray Fernandez
Senior Technology Journalist
Ray Fernandez
Senior Technology Journalist

Ray is an independent journalist with 15 years of experience, focusing on the intersection of technology with various aspects of life and society. He joined Techopedia in 2023 after publishing in numerous media, including Microsoft, TechRepublic, Moonlock, Hackermoon, VentureBeat, Entrepreneur, and ServerWatch. He holds a degree in Journalism from Oxford Distance Learning, and two specializations from FUNIBER in Environmental Science and Oceanography. When Ray is not working, you can find him making music, playing sports, and traveling with his wife and three kids.