As artificial intelligence (AI) rolls out across the enterprise, so does its use in automated security tools.
With the security industry under pressure from talent shortages and skills gaps, increasingly innovative threats, and a complex, ever-evolving regulatory landscape, security operations teams are struggling to keep up with demand.
Moves like Microsoft's recent release of Copilot for Security reveal the direction in which security vendors are heading.
The integration of generative AI and automation into cybersecurity operations is inevitable. And while GenAI security can help organizations do more with less, risks are abundant.
Techopedia sat down with industry experts to discuss the risks of automating security, best practices and technologies, and how security teams should approach this new era of AI security.
Key Takeaways
- While AI excels at tasks like data analysis and anomaly detection, humans are irreplaceable for strategic planning, penetration testing, and interpreting AI's decision-making.
- AI-powered tools can significantly improve threat detection and response times, reducing the burden on security teams.
- Integration is key — AI security's true value lies in its ability to work alongside human security professionals, not replace them.
- Transparency builds trust — organizations must prioritize explainable AI (XAI) to understand how AI arrives at conclusions and ensure the responsible use of AI security tools.
AI Cybersecurity: The Human-Machine Relationship
Automated AI security tools are becoming a top global trend. A January 2024 Allied Market Research report projects that the global market for cybersecurity in industrial automation will reach $20.5 billion by 2032, growing at a CAGR of 8.7%.
Microsoft, Siemens, Cisco, IBM, Palo Alto Networks, and others are leading players in the AI cybersecurity market.
AI technologies such as natural language processing (NLP) and machine learning (ML) have gained traction as ways to protect against, detect, and respond to threats. As Techopedia recently reported, the use of AI-ML tools has skyrocketed by 600%.
AI and automation are practical and effective ways to fill talent gaps, remove human error, and drive performance. However, policies that seek to replace workers with automated systems, along with the actual effectiveness of automated threat detection and prevention, are factors to consider. And we should not assume that AI is infallible.
Why Humans-in-the-Loop Are Vital
Techopedia asked Shrav Mehta, CEO of Secureframe, in which specific areas human expertise remains irreplaceable in the security response process, even with advanced AI-powered tools.
“While AI-powered tools have revolutionized the security response process, human expertise remains irreplaceable in several key areas.”
“One of the most significant concerns for security professionals is the 'black box' nature of AI decision-making,” Mehta said.
“Without a clear understanding of how these models arrive at their conclusions, it becomes challenging for humans to trust, interpret, and modify the logic behind AI-driven security decisions.
“To leverage the full potential of AI in security, it is important to prioritize transparency and interpretability in AI models. By understanding how these systems make decisions, human experts can better trust, validate, and refine the AI's logic, leading to more effective and reliable security outcomes.”
Mehta explained that human experts bring this contextual knowledge to the table, allowing them to assess the significance of security events and spot social engineering attempts or ethical dilemmas that AI might overlook.
Moreover, as the threat landscape continually evolves, the human element is critical for identifying emerging risks and counteracting potential AI biases.
Strategy, Policy, and Penetration Tests: Areas Not Suited for AI
Lisa McStay, Chief Operating Officer at C2 — an established business continuity software provider — explained why strategy and policy development are areas where human expertise cannot be replaced by AI.
“AI isn’t and doesn’t seem like it will be able to create forward-thinking, comprehensive, and advanced plans that don’t just cover current cyber threats but also look to protect businesses from future threats.”
“Similarly, advanced penetration testing and security research and development will also be a no-go area for AI as although AI is advanced, it can’t think and act like humans just yet and misses out on the cunning, creative thinking needed to fully stretch and test security systems to their max,” McStay said.
Integrating AI Security Tools: Challenges and Best Practices
The Adarma report “A False Sense of Cybersecurity” found that 61% of security operations leaders believe AI can handle up to 30% of security operations — and 17% believe AI could go even further, cutting human workload by 50%.
The role that GenAI will play in SecOps is still taking shape. While security operations teams recognize the potential of AI, 74% still find it challenging to envision how AI will assist them in their tasks.
Even those who have had moderate success deploying AI security solutions and automation in their projects acknowledge the complexity and time-consuming nature of the AI journey.
Some 42% of security operations leaders said they found automation implementation challenging and time-intensive, and a further 21% indicated that it was more demanding than initially anticipated.
Techopedia asked Mehta from Secureframe what the key considerations are for ensuring seamless integration of AI security tools with existing security solutions to maximize effectiveness.
“Integrating AI-powered security tools like Copilot for Security with existing solutions requires a strategic approach. Before considering integration, it's essential to have a full understanding of your current security tooling and the attack surface it covers. This knowledge helps identify potential gaps and areas where AI can provide the most value.”
Mehta explained that once the gaps and the areas where AI can assist have been identified, organizations should analyze the data generated by their security tools and how that data feeds into security workflows.
“This step is crucial for determining which data problems AI would be most effective at solving. In many cases, AI excels at reducing noise and identifying high-fidelity signals of risk or breach.”
Mehta added that once the specific issues a company wants to address using AI have been pinpointed, security teams can proceed with a targeted proof of concept by integrating GenAI tools with the relevant security tools.
“It's important to remember that AI is not a magic solution that automatically provides the right answers regardless of the data input. Be precise and intentional about the problem you want to solve and ensure that the data you feed into the AI system aligns with that goal.”
Mehta said that companies should proceed cautiously when expanding their security tech stack to avoid the risks of overlapping security tools.
“Compatibility, data standardization, performance optimization, monitoring, and training should all be taken into consideration to ensure seamless integration.”
“By taking a measured, data-driven approach to AI integration, businesses can effectively leverage tools like Copilot for Security to streamline workflows, reduce noise, and enhance their overall security posture,” Mehta said.
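To make the idea of noise reduction concrete, here is a minimal Python sketch of the kind of triage step Mehta describes: alerts are scored on tool severity, asset criticality, and cross-tool corroboration so that only high-fidelity signals reach analysts. The fields, weights, and threshold are illustrative assumptions, not a description of Copilot for Security or any vendor's product.

```python
# A minimal, hypothetical sketch of "reduce noise, surface high-fidelity signals".
# Fields, weights, and the threshold are assumptions chosen for illustration.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g. "edr", "siem", "email-gateway"
    severity: int            # 1 (low) .. 5 (critical), as reported by the tool
    asset_criticality: int   # 1 .. 5, from the organization's asset inventory
    corroborated_by: int     # number of other tools reporting related activity

def fidelity_score(alert: Alert) -> float:
    """Combine tool severity, business context, and cross-tool corroboration
    into one score; higher means more likely to be a real incident."""
    return (0.4 * alert.severity
            + 0.3 * alert.asset_criticality
            + 0.3 * min(alert.corroborated_by, 3))

def triage(alerts: list[Alert], threshold: float = 2.5) -> list[Alert]:
    """Surface only high-fidelity alerts, highest score first."""
    return sorted(
        (a for a in alerts if fidelity_score(a) >= threshold),
        key=fidelity_score,
        reverse=True,
    )

if __name__ == "__main__":
    queue = [
        Alert("edr", severity=5, asset_criticality=4, corroborated_by=2),
        Alert("siem", severity=2, asset_criticality=1, corroborated_by=0),
    ]
    for a in triage(queue):  # only the corroborated, critical-asset alert survives
        print(a.source, round(fidelity_score(a), 2))
```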
Alert Fatigue and False Positives
Alert fatigue and false positives are becoming a real problem as cybersecurity tools and frameworks integrate AI.
The Qwiet AI 2023 survey “Where are developers spending their time?” found that while 94% of developers feel AI security tools are vital to keep up with the pace of the threat landscape, 33% of developers are spending one-third of their time chasing vulnerabilities and fixing bugs.
This time also includes dealing with false positives and an overwhelming number of alerts that “drain individuals and teams,” leading to a drop in productivity and burnout. SecOps teams face the same issue when integrating AI.
McStay from C2 described alert fatigue as “a motivation killer.”
“I recommend that any business using AI for threat detection implement intelligent alert prioritization. This will prevent overwhelming alerts that turn out to be duds.
“You could also implement contextual analysis with your systems so your AI can understand the broader context for alerts, improving AI decision-making,” McStay said. “Also, UEBA [User and Entity Behavior Analytics] tools are really good at analyzing behavior and teaching AI what is ‘normal’ and what should be flagged.”
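As a rough illustration of the UEBA-style approach McStay mentions, the sketch below learns a per-user baseline of “normal” activity and flags sharp deviations from it, rather than firing on static, one-size-fits-all rules. The metric (daily outbound data volume) and the z-score threshold are assumptions chosen for clarity, not any product's actual logic.

```python
# A minimal sketch of "learn what is normal, flag what is not" for one user.
# The feature and threshold are illustrative assumptions.
import statistics

def build_baseline(history_mb: list[float]) -> tuple[float, float]:
    """Learn a per-user baseline (mean and standard deviation) of daily
    outbound data volume from historical activity."""
    return statistics.mean(history_mb), statistics.stdev(history_mb)

def is_anomalous(today_mb: float, baseline: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    """Flag activity that deviates sharply from the user's own baseline."""
    mean, stdev = baseline
    if stdev == 0:
        return today_mb > mean  # degenerate case: any increase is unusual
    return (today_mb - mean) / stdev > z_threshold

if __name__ == "__main__":
    # 30 days of one user's typical outbound transfers, in MB (illustrative).
    history = [120, 95, 110, 130, 105, 90, 115] * 4 + [100, 125]
    baseline = build_baseline(history)
    print(is_anomalous(118, baseline))   # ordinary day -> False
    print(is_anomalous(2400, baseline))  # exfiltration-sized spike -> True
```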
How AI Can Reduce Alert Fatigue
Ruoting Sun, VP of Product at Secureframe, explained to Techopedia that AI security tools should excel at rapidly stitching together disparate contexts from various systems, helping security teams form a complete picture and quickly understand whether they're dealing with a real threat.
“By leveraging AI's capabilities in pattern matching, anomaly detection, and context collection/analysis, organizations can significantly reduce false positives and alert fatigue.
“Organizations should continuously train AI models on relevant data, improving their accuracy over time. As human experts address real threats and dismiss false positives, the AI tool will learn and refine its algorithms, further reducing future false positives.”
Sun added that organizations should also dedicate a team or role to monitoring and managing AI security tools to ensure their optimal performance. “By combining the strengths of AI and human proficiency, companies can significantly improve their overall security posture and alleviate the burden on security teams,” Sun said.
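The feedback loop Sun describes can be sketched in a few lines of Python: a classifier scores incoming alerts, and analysts' verdicts on real threats versus false positives are folded back into its training data at each retrain, so repeatedly dismissed benign patterns stop generating noise. The features, sample data, and choice of model below are illustrative assumptions.

```python
# A minimal sketch of an analyst-feedback loop; data and model are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class FeedbackTriageModel:
    """Scores alerts and retrains on analyst verdicts (1 = threat, 0 = false positive)."""

    def __init__(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        self.model = RandomForestClassifier(n_estimators=100, random_state=0)
        self.model.fit(self.X, self.y)

    def score(self, alert_features, threshold=0.5):
        """Return True if the alert looks worth an analyst's time."""
        prob_threat = self.model.predict_proba([alert_features])[0][1]
        return prob_threat >= threshold

    def incorporate_feedback(self, new_X, new_y):
        """Fold analyst verdicts back into the training set and retrain, so
        repeatedly dismissed benign patterns stop generating alerts."""
        self.X = np.vstack([self.X, new_X])
        self.y = np.concatenate([self.y, new_y])
        self.model.fit(self.X, self.y)

if __name__ == "__main__":
    # Each row: [severity, asset_criticality, corroborating_sources, off_hours]
    history = [[5, 4, 2, 1], [2, 1, 0, 0], [4, 5, 3, 1],
               [1, 2, 0, 1], [3, 1, 0, 0], [5, 5, 2, 0]]
    verdicts = [1, 0, 1, 0, 0, 1]
    triage = FeedbackTriageModel(history, verdicts)
    print(triage.score([2, 1, 0, 0]))  # benign-looking pattern -> likely False
    # After the next triage batch, analyst verdicts refine the model.
    triage.incorporate_feedback([[2, 1, 1, 0]], [0])
```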
Transparency and Understanding: Explainable AI (XAI)
Most of the AI tech on the market is proprietary, black-box technology, meaning that how an AI reaches its conclusions or processes information is often obscured. This poses a big question: Can organizations ensure transparency and gain a clear understanding of how AI security tools arrive at conclusions and recommendations, so they can make informed security decisions?
Sun from Secureframe explained that organizations can gain a clear understanding of how AI security tools arrive at conclusions and recommendations by using techniques like Explainable AI (XAI).
“XAI provides insights into the decision-making process of AI models, along with documentation detailing its algorithms, training data, and validation processes.
“User training and education are also essential for all parties, both internal and external, to understand AI principles and interpretations,” Sun said.
“Finally, third-party audits and reviews can validate performance and identify potential risks or biases of an AI tool, which helps promote transparency and trust.”
McStay from C2 agreed.
“You can keep your AI security tool transparent with the introduction of solutions like explainable AI.
“This solution provides details of the AI’s decision-making process, giving you a clear explanation behind actions,” McStay said.
“You could also use audit trails to investigate your AI's decision-making process. Using human-in-the-loop (HITL) guarantees AI is vetted by human overseers, which can ease the process of understanding AI decisions and keep processes transparent.”
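As a simple illustration of these transparency practices, the sketch below uses permutation importance (one model-agnostic technique from the XAI family) to show which alert features drive a classifier's verdicts, then writes the decision, its explanation, and a human-review field to an audit-trail entry. The model, features, and log format are assumptions for illustration only.

```python
# A minimal sketch of explainability plus an audit trail; all names are hypothetical.
import json
from datetime import datetime, timezone

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical alert features; label 1 = confirmed threat, 0 = false positive.
FEATURES = ["severity", "asset_criticality", "corroborating_sources", "off_hours"]
X = np.array([[5, 4, 2, 1], [2, 1, 0, 0], [4, 5, 3, 1],
              [1, 2, 0, 1], [3, 1, 0, 0], [5, 5, 2, 0]])
y = np.array([1, 0, 1, 0, 0, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explainability: which inputs actually drive the model's verdicts?
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
explanation = {name: round(float(score), 3)
               for name, score in zip(FEATURES, result.importances_mean)}

# Audit trail: record the decision and its explanation for human review.
new_alert = [5, 4, 2, 1]
audit_entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "model_version": "demo-0.1",                      # hypothetical identifier
    "alert_features": dict(zip(FEATURES, new_alert)),
    "verdict": int(model.predict([new_alert])[0]),
    "feature_importances": explanation,
    "reviewed_by": None,  # filled in by the human-in-the-loop analyst
}
print(json.dumps(audit_entry, indent=2))
```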
The Bottom Line
AI security is undeniably the hot new trend in cybersecurity, promising to plug talent gaps, eliminate human error, and revolutionize threat detection. While AI tools like Copilot for Security offer exciting possibilities for streamlining workflows and reducing busywork for security teams, it's important to remember they aren't magic bullets.
The human element remains irreplaceable for strategic planning, penetration testing, and interpreting the "why" behind AI's decisions.
AI security thrives in a collaborative environment, where it complements human expertise rather than replaces it. By carefully integrating AI tools and prioritizing transparency, organizations can leverage the power of automation to build a more robust and efficient security posture.