The hype around artificial intelligence (AI) and related technologies such as machine learning (ML) shows no sign of slowing, with new AI solutions emerging across fields and industries.
When AI and ML began picking up momentum in the wake of ChatGPT’s release, compliance and security experts took an interest in how the technology could give rise to new threats.
However, the data needed for informed analysis took time to emerge, as companies were only beginning to use AI.
Now that has changed, and there is enough information to evaluate the outsized impact of AI and the threats that come with it.
Techopedia sits down with analysts from Zscaler, Nokia, and Bedrock Security to find out what we can learn about the threat landscape today.
‘More than 3 Billion AI Transactions a Month’
Cloud security company Zscaler’s embedded research team, ThreatLabz, evaluated more than 18 billion AI transactions for its 2024 AI Security Report, observing transactions from April 2023 to January 2024 across its zero trust cloud security platform.
On its platform, Zscaler saw enterprise AI-ML transactions grow from 521 million per month in April 2023 to 3.1 billion per month by January 2024, a nearly 600% increase.
Over the same period, enterprises sent a cumulative 569 terabytes of data to AI tools.
Deepen Desai, Chief Security Officer at Zscaler, spoke to Techopedia about the new landscape being shaped by AI.
Most organizations are aware of AI risks such as bias, privacy and data exposure, hallucinations, and poor performance. But as AI advances, cybercriminals’ AI skills are improving too.
Desai said:
“In reality, AI is aiding cyber attacks across all stages of the attack chain.
“From discovering weaknesses in enterprise defenses, to automating compromise through phishing attacks and vulnerability exploits, to moving across enterprise networks and eventually exfiltrating data through the use of AI modules.
“At present, AI will provide the most help to attackers in automating attacks at scale.”
AI Cybersecurity Attacks
Desai explained that threat actors are using generative AI tools like ChatGPT to write highly persuasive phishing emails, and using AI to scan for vulnerabilities in an organization’s external assets, such as VPNs.
“As enterprise AI adoption grows rapidly, it’s likely we will continue to see enterprise AI-related security incidents.
“Indeed, AI security incidents have made numerous headlines in the past year, including leakages of proprietary enterprise data to AI applications, AI-powered phishing, vishing and deepfake attacks, and more.
“As this trend continues, we expect to see more scenarios where private or confidential data is leaked inadvertently to public LLMs [large language models], AI training data sets are breached (both for internal enterprise AI training data and among AI vendors), AI training data poisoning, AI supply chain attacks, and more,” Desai said.
“AI is being used for more sophisticated attacks, likely without precedent.”
The threat landscape widens as AI becomes more connected to the world through application programming interfaces (APIs).
APIs: At the Core of AI Security
While complex AI code-driven or injection attacks are still rare, attacks on APIs are well established and on the rise, with companies like Akamai finding that 29% of all web attacks target APIs.
With AI and machine learning capabilities no longer confined to research labs, tech professionals can now leverage these powerful tools through AI and ML APIs. These APIs act as gateways, providing programmatic access to pre-trained models capable of complex tasks like image recognition, natural language processing, and predictive analytics.
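As a minimal sketch of that gateway pattern, the Python below calls a hypothetical text-classification endpoint. The URL, authentication scheme, and response shape are illustrative placeholders rather than any specific vendor’s API, though most real services follow a similar authenticated-HTTP design.

```python
import os
import requests

# Hypothetical inference endpoint and API key; real providers differ in
# details, but the authenticated, programmatic access pattern is the same.
API_URL = "https://api.example-ai.com/v1/classify"
API_KEY = os.environ["EXAMPLE_AI_API_KEY"]

def classify_text(text: str) -> dict:
    """Send text to a pre-trained model behind an API and return its output."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": text},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors instead of silent failures
    return response.json()

if __name__ == "__main__":
    print(classify_text("Invoice attached, please review and pay today."))
```

Every such call crosses a trust boundary, which is part of why the API layer itself has become such an attractive target.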
Techopedia talked with Shkumbin Hamiti, Head of Network Monetization Platform, Cloud, and Network Services at Nokia, about AI API security.
Last year Nokia launched its Network as Code platform and developer portal to provide simplified network capabilities to developers as software code that can be easily integrated into applications.
“API security tools – including authentication, authorization, and encryption – are pivotal for protecting network data exposed through APIs from nefarious actors; and for giving CSPs and developers the confidence required to drive close collaboration and the development of new API use cases.”
“AI is a necessity, but to be effective, we need purpose-built large language models that are trained on the variety of security challenges we face.”
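To make the first of those controls concrete, here is a minimal sketch of one common API authentication technique, HMAC request signing, in Python. The shared secret, header names, and five-minute replay window are illustrative choices, not Nokia’s implementation; in production the secret would live in a secrets manager and TLS would handle encryption in transit.

```python
import hashlib
import hmac
import time

# Illustrative shared secret; in practice this comes from a secrets manager.
SHARED_SECRET = b"rotate-me-regularly"

def sign_request(body: bytes) -> dict:
    """Client side: produce headers proving the caller holds the API secret."""
    timestamp = str(int(time.time()))
    message = timestamp.encode() + b"." + body
    signature = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(body: bytes, headers: dict, max_age: int = 300) -> bool:
    """Server side: reject stale, replayed, or tampered requests."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_age:
        return False  # too old: possible replay attack
    message = headers["X-Timestamp"].encode() + b"." + body
    expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])

payload = b'{"subscriber": "12345", "action": "get-qos"}'
assert verify_request(payload, sign_request(payload))
```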
Hamiti explained that as we move further into third-party ecosystems, with critical information being exchanged through APIs, organizations will need to train models focused on the unique characteristics of APIs and ecosystem interoperability, and ensure that the data those models are trained on is not corrupted.
Hamiti said that in the telecom industry, networks are being outfitted with AI every day. “Operators are taking the steps necessary to control the impact of their AI programs within their own networks,” Hamiti explained.
“But as the API economy unfolds and as networks are opened to external third-party access, operators must ensure that their networks are secure and that data crossing into and out of their networks is also secure.”
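One basic control against the training-data corruption Hamiti warns about is to fingerprint datasets at ingestion and re-verify them before every training run. The sketch below is a simplified illustration of that idea (the file layout and names are hypothetical); real data-poisoning defenses also involve provenance tracking and statistical anomaly detection.

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of a training data file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path) -> dict:
    """Record a trusted digest for every file at ingestion time."""
    return {str(p): fingerprint(p) for p in sorted(data_dir.glob("*.jsonl"))}

def changed_files(manifest: dict) -> list:
    """Return files whose contents changed since the manifest was built."""
    return [f for f, digest in manifest.items() if fingerprint(Path(f)) != digest]

# Usage: build the manifest when data is collected, store it out of band,
# and refuse to train if any file no longer matches its recorded digest.
# manifest = build_manifest(Path("training_data"))
# Path("manifest.json").write_text(json.dumps(manifest))
# assert not changed_files(json.loads(Path("manifest.json").read_text()))
```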
Compliance and Legal Challenges
Companies leveraging AI have to meet the ever-evolving demands of data and user protection laws at international, federal, and state levels.
Additionally, they must comply with new AI laws, such as the AI Act approved in March 2024 by the European Parliament, which aims to ensure safety and respect for fundamental rights while boosting innovation.
Studies such as the Industrial Growth of Global AI Compliance Monitoring Market 2023-2029 show that AI monitoring solutions are poised to grow as organizations look for innovative ways to address AI compliance challenges.
Desai from Zscaler told Techopedia that enterprises must take particular care with the compliance and governance of their data, especially in regions like the EU, which recently passed the AI Act.
“While regulations will differ in different regions, ultimately, enterprises must have complete visibility and granular security controls over sensitive customer and financial data that may be used with, or to train, AI tools, to ensure it will not be used in ways that run afoul of regulation,” Desai said.
Pranava Adduri, CEO and co-founder of Bedrock Security, also spoke to Techopedia about AI compliance challenges and best practices.
“Highly regulated businesses must be careful when leveraging GenAI because GenAI systems are frequently ‘black boxes’ without fine-grained role-based access control.”
“In these cases, the only way to guarantee that the AI model does not communicate regulated or otherwise sensitive information in its responses is to ensure that such data is not used to train the model at all,” Adduri said.
“In many cases, enterprises simply do not have a strong handle on what data is sensitive and where it is stored, making it easy to unintentionally give a GenAI model data that it shouldn’t have, resulting in data leakage that could violate regulatory requirements.”
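Knowing where sensitive data lives is a data-discovery problem, which is the category Bedrock Security works in; the snippet below is only a toy illustration of the gating idea Adduri describes, using two regex patterns where production classifiers are far more sophisticated.

```python
import re

# Toy patterns for two common identifier types; real data-discovery tools
# use much richer classifiers, but the gating logic is the same idea.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def is_safe_for_training(record: str) -> bool:
    """Exclude any record containing a recognizable identifier."""
    return not any(p.search(record) for p in PII_PATTERNS.values())

records = [
    "Customer praised the onboarding flow.",
    "Contact jane.doe@example.com about claim 4411.",
    "SSN on file: 123-45-6789",
]
training_set = [r for r in records if is_safe_for_training(r)]
print(training_set)  # only the first record survives the filter
```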
Should Different Industries Approach the Risks of AI-ML Usage Differently?
The Zscaler report found that the industries that generate the most AI traffic include Manufacturing, which accounts for 21% of all AI transactions, followed closely by Finance and Insurance (20%) and Services (17%).
Techopedia asked Desai from Zscaler whether industries from different sectors should have different approaches to AI risks.
“Enterprises across every sector should all approach AI-ML usage from the same zero trust perspective — ensuring that enterprise users directly connect to AI applications and data via a zero trust cloud proxy architecture with several layers of security controls, and never across the enterprise network.”
Desai added that, naturally, those working in highly regulated areas that handle highly sensitive customer and financial data, such as federal agencies, financial institutions, and healthcare organizations, should take particular care to prevent data from being leaked to third-party AI applications, “as well as to secure data being trained on internal AI efforts”.
Adduri from Bedrock Security agreed, noting that industries that are less heavily regulated and do not deal with highly sensitive or personal information can embrace AI with less caution than sectors such as financial services or healthcare.
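As a simplified sketch of the proxy-side policy layer Desai describes, consider the decision logic below. The sanctioned-app list and content markers are invented for illustration, and a real zero trust platform layers far more controls (identity, device posture, data loss prevention) than this.

```python
# Two illustrative policy checks a zero trust proxy might apply to
# AI-bound traffic: destination allow-listing and outbound data inspection.
SANCTIONED_AI_APPS = {"chat.approved-vendor.example"}
BLOCKED_MARKERS = ("CONFIDENTIAL", "INTERNAL ONLY")

def proxy_decision(host: str, payload: str) -> str:
    if host not in SANCTIONED_AI_APPS:
        return "BLOCK: unsanctioned AI application"
    if any(marker in payload for marker in BLOCKED_MARKERS):
        return "BLOCK: sensitive data in outbound prompt"
    return "ALLOW"

print(proxy_decision("chat.approved-vendor.example", "Summarize this memo"))
print(proxy_decision("random-llm.example", "hello"))
print(proxy_decision("chat.approved-vendor.example", "CONFIDENTIAL roadmap"))
```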
The Bottom Line
The explosive growth of AI, with a nearly 600% increase in AI-ML transactions reported in 2024, has unlocked a new era of innovation across industries. While AI offers immense potential, it also introduces security vulnerabilities, data privacy concerns, and compliance challenges. Organizations must navigate these complexities to ensure responsible AI adoption.
The new era of AI security is only warming up its engines; organizations will rapidly see AI-powered attacks grow in both frequency and potential damage. Following DevSecOps practices and implementing zero trust concepts and multi-layered, proactive defenses is more vital today than ever, as organizations and businesses release frontend and backend AI apps, services, and solutions that could pose risks and liabilities in the near future.