ChatGPT: Skyhawk Security’s New Weapon Against Cloud Threats

KEY TAKEAWAYS

Chen Burshan, CEO of cloud security vendor Skyhawk Security, explains how ChatGPT can help security teams detect and respond to cloud-based threats.

Looking for malicious activity in the cloud is a lot like looking for a needle in a haystack. Security professionals have to sift through hundreds of false positive alerts per day just to identify legitimate security incidents to investigate.

In fact, according to research conducted by cybersecurity vendor Orca Security, 59% of IT security professionals report receiving more than 500 public cloud security alerts per day. An analyst must then make a judgment call on whether to investigate further or ignore the alert.

All too often, this high volume of alerts leads to a scenario where defenders are so busy managing trivial or unimportant alerts that they can’t identify and respond to actual data breaches. For instance, 55% of security professionals admit to missing critical alerts on a weekly or daily basis.

In light of these challenges, a growing number of cybersecurity vendors are turning to generative AI to help security teams make sense of what’s going on in the cloud.

One such vendor is Skyhawk Security, a cloud security company valued at $180 million, which earlier this year announced it would use ChatGPT to detect threats.

Working Smarter to Find Threats with ChatGPT

Visibility and context are critical for security analysts to identify whether an alert or threat signal is the sign of a cyberattack or an innocuous false alarm. Yet, analysts often have too much or too little data to make a decision without further investigation.


Skyhawk Security’s answer to this predicament has been to integrate generative AI, via the ChatGPT API, into its cloud detection and response (CDR) platform through two features: Threat Detector and Security Advisor.

  • Threat Detector uses the ChatGPT API, which has been trained on millions of security signals taken from across the web, to analyze cloud events and help generate alerts faster.
  • Security Advisor provides a natural language summary of live alerts alongside recommendations on how to respond and remediate them.

In this instance, generative AI helps surface alerts much faster and gives users greater context on how to respond to incidents, so they can resolve data breaches in the shortest time possible.

It’s an automated approach to alert management that Skyhawk says has been highly effective: in 78% of test cases, the CDR platform generated alerts earlier when the ChatGPT API was part of its threat-scoring process.

Chen Burshan, CEO of Skyhawk Security, told Techopedia:

“Generative AI was a natural advancement for Skyhawk, as we are always looking to improve our threat detection and consider generative AI as a major opportunity to improve our detection and response for cloud engineers and SOC incident responders.”

Burshan added: “We use it like a force multiplier to the SOC, which helps overcome the shortage of cloud-skilled manpower.”

Using ChatGPT for Cloud Detection and Response

As part of its solution, Skyhawk uses an existing foundation of machine learning (ML) algorithms to monitor assets across the cloud.

The ML has been trained to establish a baseline of “peacetime” activity, or normal usage, so it can instantly identify and track malicious behavior indicators (MBIs), such as unauthorized storage access, and assign each MBI a threat score. Once the cumulative threat score crosses a certain threshold, an alert is generated.
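The scoring logic described above can be sketched as follows. This is an illustrative simplification, not Skyhawk’s proprietary ML model: the MBI names, weights, and threshold are all hypothetical.

```python
from collections import defaultdict

# Hypothetical MBI weights and alert threshold; illustrative values only.
MBI_WEIGHTS = {
    "unauthorized_storage_access": 40,
    "anomalous_api_call": 25,
    "privilege_escalation": 50,
}
ALERT_THRESHOLD = 60

def score_events(events):
    """Accumulate MBI scores per cloud identity; flag threshold crossings."""
    scores = defaultdict(int)
    alerts = []
    for identity, mbi in events:
        scores[identity] += MBI_WEIGHTS.get(mbi, 0)
        if scores[identity] >= ALERT_THRESHOLD and identity not in alerts:
            alerts.append(identity)
    return dict(scores), alerts

# Example: one identity accumulates enough suspicious behavior to alert.
events = [
    ("role/app-server", "anomalous_api_call"),
    ("role/app-server", "unauthorized_storage_access"),
    ("user/dev-intern", "anomalous_api_call"),
]
scores, alerts = score_events(events)
```

The key design point is that no single indicator triggers an alert; it is the accumulation of suspicious behavior by one identity that crosses the threshold.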

When the alert is created, Skyhawk’s existing ML solution can then create an attack sequence, presenting the user with a graphical storyline of the event, which summarizes what happened.
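The attack-sequence idea, stripped of the graphical layer, amounts to ordering correlated events chronologically into a readable storyline. A minimal sketch, with entirely hypothetical event data:

```python
from datetime import datetime

def build_storyline(events):
    """Order correlated cloud events by time into a human-readable sequence."""
    ordered = sorted(events, key=lambda e: e["time"])
    return [
        f"{e['time'].strftime('%H:%M')} {e['actor']}: {e['action']}"
        for e in ordered
    ]

# Hypothetical correlated events from one incident, received out of order.
events = [
    {"time": datetime(2023, 5, 1, 14, 7), "actor": "user/dev-intern",
     "action": "assumed role role/app-server"},
    {"time": datetime(2023, 5, 1, 14, 2), "actor": "user/dev-intern",
     "action": "logged in from unrecognized IP"},
    {"time": datetime(2023, 5, 1, 14, 9), "actor": "role/app-server",
     "action": "listed S3 buckets"},
]
storyline = build_storyline(events)
```

Even this flat rendering shows why a storyline helps: the analyst sees the login, the role assumption, and the storage access as one narrative rather than three unrelated alerts.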

Then Skyhawk uses its ChatGPT-trained threat detector to augment and enrich the data provided by its existing ML-driven threat scoring mechanism with additional parameters to help users verify the threat scores assigned to a given event.
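The enrichment step can be sketched as building a structured prompt from the ML-generated alert and handing it to an LLM for a second opinion on the threat score. The prompt wording and alert fields below are assumptions for illustration; the actual API call is omitted so the sketch runs without credentials.

```python
import json

def build_verification_prompt(alert):
    """Turn an ML-generated alert into a prompt asking an LLM to
    sanity-check the assigned threat score. Hypothetical structure."""
    return (
        "You are assisting a cloud SOC analyst. Given this alert, "
        "state whether the threat score seems justified and why:\n"
        + json.dumps(alert, indent=2)
    )

# Hypothetical alert emitted by the ML threat-scoring stage.
alert = {
    "identity": "role/app-server",
    "mbis": ["anomalous_api_call", "unauthorized_storage_access"],
    "ml_threat_score": 65,
}
prompt = build_verification_prompt(alert)
```

In a real integration, the prompt would be sent to a chat-completion endpoint and the model’s assessment attached to the alert, giving the analyst an extra parameter for verifying the ML score.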

This means that security administrators can have more confidence in identifying and prioritizing which alerts to respond to.

Generative AI’s Limitations in Cybersecurity

Generative AI can be useful for security practitioners, but organizations need to be mindful of its limitations to ensure the best results.

Burshan explained:

“While Gen AI is extremely powerful, it has to be used wisely to make sure it doesn’t introduce errors, doesn’t create privacy issues, and many more aspects that require attention.”

In this sense, for SOC teams, generative AI is a tool that augments and streamlines human investigations into security events rather than a solution designed to automate threat resolution and response entirely.

At this stage, generative AI is most useful when it’s providing a natural language explanation of impenetrable alerts and data and giving users insights into how they can respond effectively.

As Sunil Potti, VP and GM of Google Cloud Security, explained in a blog post after the launch of Google’s security LLM in April 2023, “recent advances in artificial intelligence (AI), particularly large language models (LLMs), accelerate our ability to help the people who are responsible for keeping their organizations safe.”

Potti added:

“These new models not only give people a more natural and creative way to understand and manage security, they give people access to AI-powered expertise to go beyond what they could do alone.”

Knowledge is Power

In a world of fast-moving cyberthreats, knowledge is power. The more context security teams have to make decisions on how to respond to security events, the better they’ll be able to protect on-premises and cloud environments from threat actors.

By implementing generative AI, organizations can make it easier for analysts to decide which alerts to investigate and how to follow up instead of relying on them to make the right judgment calls hundreds of times a day.

Tim Keary
Technology Writer

Tim Keary is a technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, his work appeared on VentureBeat, Forbes Advisor, and other notable technology platforms, where he covered the latest trends and innovations in technology. He holds a Master’s degree in History from the University of Kent, where he learned of the value of breaking complex topics down into simple concepts. Outside of writing and conducting interviews, Tim produces music and trains in Mixed Martial Arts (MMA).