The Rush to Deploy AI is Messy — and Full of Security Risks

Why Trust Techopedia

Artificial intelligence is like nuclear technology — it can supercharge limitless applications with abundant power, but when used incorrectly, it rapidly becomes unstable and dangerous.

OpenAI, Microsoft, Google, NVIDIA, Intel, and other players in the generative AI market set out to build the best, fastest, and most powerful large language models (LLMs) to show off to the world.

But businesses and organizations, now racing to deploy AI and reap the benefits and the revenue gains that come with it, must take a crash course in AI deployment security to avoid truly “fission-scale” disasters.

Experts warn that a lack of understanding regarding the technical, privacy, and governance risks introduced by GenAI, alongside increased executive pressure — affects the majority of security teams and leaders who often do not fully understand AI risks.

Key Takeaways

  • Generative AI poses unique security challenges due to its ability to manipulate data and create sophisticated attacks.
  • Security teams are struggling to keep pace with AI’s rapid adoption, which is leading to vulnerabilities in AI systems.
  • Business leaders prioritize revenue over security when deploying AI, potentially leading to disastrous consequences.
  • Frameworks like NIST AI-RMF can help organizations manage AI risks and ensure secure deployment.
  • Collaboration with security experts is crucial for mitigating AI threats and building robust AI applications.

Why Secure AI Deployment Matters

Techopedia talked with Jake Williams, former U.S. National Security Agency (NSA) offensive security expert and Faculty member at IANS Research.

Williams authored the recently released “Empower the Business to Use Gen AI in Customer-Facing Applications” report, which breaks down AI deployment risks and lays out the frameworks for organizations to start building defenses that anticipate the evolving AI threats.

Advertisements

“I don’t mean to sound defeatist, but organizations that don’t start figuring out how to scale their security operations today will be in a world of hurt soon”.

Williams explains that AI is a complicated and constantly evolving technology and compares the GenAI transformation with the great cloud migration, which picked up global speed during the pandemic years.

But Williams warns organizations:

“Generative AI is a regurgitation engine. It’s really good at scaling attacks like phishing, but it won’t be useful in finding new zero-days to exploit.

 

“Most security leaders don’t really understand the risks but are being pressured to implement AI.”

The report adds that business leaders are rushing to adopt AI for its “perceived benefits” but fail to install technical controls before deployment.

What Security Teams and Penetration Testers Say

The 2024 State of Pentesting report of Cobalt found that 88% of cybersecurity professionals have seen a significant increase in the adoption of AI tools in the past year. Of these, 66% say they have seen a rise in external threat actors using AI to create cybersecurity threats in the past year.

Techopedia talked to Jason Lamar, SVP of Cobalt, to get the inside story on the Cobalt AI report.

“Malicious threat actors utilizing generative AI tools can poison training data for biased outputs, manipulate prompts for misinformation and craft adversarial inputs to disrupt AI systems.

“These techniques can cause financial losses, erode trust, and even threaten safety in safety-critical applications like self-driving cars or medical diagnosis,” Lamar said.

Cobalt analyzed 4,068 pentests including an increased amount of tests on AI systems, primarily on software products incorporating AI-enabled chatbots to improve user experience.

“The most common vulnerabilities uncovered included prompt injection (including jailbreak), model denial of service, and prompt leaking (sensitive information disclosure),” Lamar said.

Compromising Security To Drive Revenue

Lamar explained that security teams working inside companies that are integrating and deploying AI face a double challenge: keeping up with GenAI cybercriminal trends while ensuring secure internal adoption of AI.

“64% of those who have experienced increased AI adoption at their company say that the demand for AI has outpaced their ability to keep up with the security implications of these tools.”

Pranava Adduri, CEO and co-founder of Bedrock Security, a data security company that enables organizations to embrace cloud and GenAI growth, also spoke about the criminal AI risks.

“Data poisoning or tampering attacks, jailbreaking — in the case of chat models — and, of course, deepfakes for phishing (which we are already seeing on the rise) are major concerns. Also concerning is offensive scripting that weaponizes hard-to-exploit vulnerabilities by rapidly adapting to the environment.”

However, Adduri explained that the same technology can be used to mitigate these risks.

“For the data leakage scenario, LLMs can be used to summarize the type of data that is being used to train the GenAI model before the model is released,” Adduri said.

“By summarizing the training data and looking for specific sensitive data types, enterprises can prevent the model from “knowing” anything that shouldn’t be communicated to the model’s consumers, enabling the creation of effective data perimeters.”

Secure AI Deployment Goes Well Beyond Traditional Security Risks and Operations

Williams’s views about AI risks, as expressed in his report, go well beyond traditional compliance (breach of laws) and cyberattack preparedness.

Seven AI Security Risk Areas: Each Incredibly Complex

The Williams-authored IANS report warns that there is a “lack of documentation” on AI secure deployment challenging security leaders.

In the research, the former NSA offensive hacker lists seven six key risk AI components:

  • Privacy risks
  • Proprietary data risks
  • Customer trust risks
  • Data management risks
  • Hallucinations
  • Traditional cybersecurity risks

Each component requires strategic security and compliance plans due to their broad nature. For example, privacy risks are bound by global, federal, and state laws or regulations like the E.U. ‘s GDPR, Canada’s PIPEDA, Brazil’s LGPD, and countless more laws.

Companies deploying AI must meet the standards set by the laws that are relevant to their operations, and that applies throughout their business ecosystem and supply chain.

But just to pick one of the seven areas, Williams’ paper lists 13 points linked to privacy AI risks alone:

  • Choice and consent
  • Legitimate purpose specification
  • Use limitation
  • Accuracy and quality
  • Openness, transparency, and notice
  • Data minimization
  • Individual rights and participation
  • Accountability
  • Security supporting privacy protections
  • Preventing harm
  • Free flow of information and legitimate restriction
  • Monitoring, measuring, and reporting
  • Legal compliance

Williams adds:

“AI is not a homogenous. Articulating the subcomponents of what the popular business press has labeled “AI” to leaders is critical.”

AI-RMF, AIRMP, and Deloitte: AI Frameworks

Despite the complexities that secure deployment of AI presents to any organization releasing into production customer-facing AI-enabled applications such as chatbots to drive customer service, natural language processing (NLP) to analyze unstructured customer data, and GenAI for image processing, the IANS research paper presents three frameworks as guiding solutions.

The three major frameworks used for managing AI risk include: the NIST AI Risk Management Framework (RMF), the Department of Energy (DOE) AI Risk Management Playbook (AIRMP), and Deloitte’s Trustworthy AI Framework.

The NIST AI-RMF framework, with a Govern core (culture of risk management), includes the following actions:

  • Map: Context is recognized, and risks related to context are identified.
  • Measure: Identified risks are assessed, analyzed, or tracked.
  • Manage: Risks are prioritized and acted upon based on a projected impact.

Once again, each of these actions, along with the core, requires multiple steps and processes and must be integrated into daily operations. The NIST AI framework, as well as the other two discussed in the report, are holistic frameworks that cover all aspects of AI risks and help companies mitigate them while driving AI performance.

Any company, large or small, that deploys customer AI applications without considering these frameworks or applying a similar approach is risking more than it can afford.

For example, a customer-facing AI that breaches or leaks proprietary rights could expose a business to security compromises, new attack vectors for criminals to exploit, or — for example — significant commercial losses if new software or products are leaked before their rollout date.

With his paper, Williams pushes organizations to be exhaustive, meticulous, professional, relentless, clever, and have a plan and strategy.

“Security leaders must evaluate AI-enabled applications for AI-specific risk, but they must not stop there.”

Strategic Risk Management and Risk Monetization

Following the NIST framework, inventories should be detailed when mapping AI-related risks. The inventory of identified threats should then be quantified and measured.

Top risk management leaders presenting to executives the threats should be clear on the numbers. Williams breaks it down in the report.

“There’s an old adage in security that we shouldn’t spend $100,000 to mitigate a $1,000 loss, and it certainly applies here. However, we must ensure we have a firm understanding of the risk profile before making risk mitigation decisions.”

Stakeholders should also fully understand the risks of AI before deciding to move ahead into production with customer-facing projects.

“In short, the standard states that you cannot consent to a procedure if you don’t understand the risks of undergoing the procedure and those of foregoing the procedure. Similarly, stakeholders cannot accept risk they themselves do not understand.”

Security Teams and Leaders in Dissent

Williams advises security teams to document their dissent.

“If after explaining the risk, the organization chooses to move ahead with a use case that exposes the organization to unacceptable levels of risk, document the reasons for your dissent.”

The dissent documentation can provide security teams and leaders with cover in case of an incident and help business stakeholders better understand the severity of the issue at hand.

Serious Business Risk Vs. Reward

AI has long moved from being an “experimental toy” to becoming the most powerful tool in the tech arsenal. Lamar from Cobalt explained that the pressure to deploy AI is widespread.

“According to our report, 88% of cybersecurity professionals have seen a significant increase in the adoption of AI tools in the past year, indicating that the desire to get in on the AI boom is still high.”

However, Lamar cautions, “Despite this positive trend in adoption, those in the C-suite were 44% more likely than average to wish their company would pump the brakes on AI adoption over concerns over risks.”

Naturally, from a business perspective, deploying revenue-generating technology without delay is a priority, and installing proper security controls only pushes the product down the line.

However, security experts who specialize in security controls and frameworks can help streamline this process while helping organizations produce a much safer and efficient AI application.

“It takes time to develop skills and know-how in a new domain and additional tooling, so consider partnering with a third-party vendor who can help track and manage vulnerabilities,” Lamar said.

The Bottom Line

While there is no questioning that AI is the most powerful tool of technological disruption created to date, it has become clear that deploying this tech safely is not easy.

Naturally, the complexities of safe AI deployment lie in the nature of AI potential and capabilities.

Riding this trend, a new global market for safe AI deployment solutions is already emerging. All top cloud vendors will likely participate in this new transformation, equipped with experience in global cloud migration and automation tools that help companies deploy AI safely.

However, challenges will continue to exist, as company cultures often prioritize revenue and performance against calculated threats and risks.

Neglecting frameworks that are in the public domain is neglect, and neglect is the failure to provide care to others. Legally, any organization found in breach of negligence laws would face serious civil and even criminal consequences.

AI security goes beyond traditional security, beyond a world of constant attacks and cybercriminals, data leaks, and breaches of data laws. It seems clear that failing to implement proper AI security controls is the duty that every individual, developer, or organization has.

Advertisements

Related Reading

Related Terms

Advertisements
Ray Fernandez
Senior Technology Journalist
Ray Fernandez
Senior Technology Journalist

Ray is an independent journalist with 15 years of experience, focusing on the intersection of technology with various aspects of life and society. He joined Techopedia in 2023 after publishing in numerous media, including Microsoft, TechRepublic, Moonlock, Hackermoon, VentureBeat, Entrepreneur, and ServerWatch. He holds a degree in Journalism from Oxford Distance Learning, and two specializations from FUNIBER in Environmental Science and Oceanography. When Ray is not working, you can find him making music, playing sports, and traveling with his wife and three kids.