On 19 September, search provider Elastic announced the launch of Elastic AI Assistant for Observability. The solution uses generative AI and the Elasticsearch Relevance Engine (ESRE) to provide human site reliability engineers (SREs) with more context on application errors, log messages, and alerts while providing suggestions on code efficiency.
It’s designed to spare SREs from manually tracking and interpreting data as it moves across silos, helping to streamline and automate the resolution of performance issues.
Chief product officer at Elastic, Ken Exner, said in the announcement press release: “With the Elastic AI Assistant, SREs can quickly and easily turn what might look like machine gibberish into understandable problems that have actionable steps to resolution.
“Since the Elastic AI Assistant uses the Elasticsearch Relevance Engine on the user’s unique IT environment and proprietary data sets, the responses it generates are relevant and provide richer and more contextualized insight, helping to elevate the expertise of the entire SRE team as they look to drive problem resolution faster in IT environments that will only grow more complex over time.”
The Broader Implications: Generative AI in DevOps
The announcement comes just months after the release of Elastic AI Assistant for security operations teams. Cybersecurity professionals, SREs and DevOps engineers alike are expected to make sense of a wide range of alerts on potential incidents, deciding which need to be investigated further or can be safely ignored.
According to an Orca Security survey of 800 IT security professionals in five countries, 59% of respondents receive over 500 public cloud security alerts per day. This high volume not only results in critical alerts being missed but also increases employee churn, with 62% of IT pros saying that alert fatigue has contributed to turnover.
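Elastic hasn’t published the internals of its triage pipeline, but the usual first step in taming a 500-alert day is deterministic grouping before any AI gets involved. A minimal sketch of that idea (all names and fields hypothetical, not Elastic’s implementation):

```python
from collections import defaultdict

def group_alerts(alerts):
    """Group raw alerts by a simple fingerprint (service + error type)
    so a triage tool -- human or AI copilot -- sees one line per problem
    instead of hundreds of near-duplicates."""
    groups = defaultdict(list)
    for alert in alerts:
        fingerprint = (alert["service"], alert["error_type"])
        groups[fingerprint].append(alert)
    # Summarize: one entry per unique problem, with an occurrence count.
    return [
        {"service": svc, "error_type": err, "count": len(items)}
        for (svc, err), items in groups.items()
    ]

raw_alerts = [
    {"service": "checkout", "error_type": "TimeoutError"},
    {"service": "checkout", "error_type": "TimeoutError"},
    {"service": "auth", "error_type": "ConnectionReset"},
]
summary = group_alerts(raw_alerts)
```

Collapsing three raw alerts into two distinct problems is trivial at this scale, but at hundreds of alerts per day the same step is what makes the remaining signal small enough for a human (or a generative model) to reason about.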
The release of Elastic AI Assistant for Observability aims to address this by giving SREs a copilot that provides contextual support: not just what errors and messages mean, but also recommendations on how to remediate them.
Using an augmented intelligence-style approach, SREs can make their workload more manageable and reduce decision fatigue while mitigating performance issues before they cause downtime.
More broadly, the solution illustrates that generative AI can be applied to any scenario where an engineer needs to make sense of lots of data signals quickly, whether monitoring key systems, planning future capacity, or conducting an incident response process.
The Force Multiplier: Proprietary Data
The sophistication of insights provided by generative AI solutions in enterprise environments depends not just on the quality of the underlying AI and training data but also on whether the solution has access to an organization’s proprietary data. Ultimately, the more specialized the data is, the more granular the operational insights will be.
Grounding an assistant in specialized proprietary data lets it recommend solutions that are specific to the organization and surface efficiency insights that wouldn’t be available from a more general, high-level data set.
In this context, providing SREs with more insights and helping them contextualize this information puts them in the position to diagnose and respond to problems much faster.
Elastic vs. PaLM 2, Security Copilot
Of course, Elastic isn’t the only organization that’s looked to use generative AI to help human users combat alert fatigue.
This year, Google and Microsoft have launched their own virtual copilot solutions focused on helping security professionals by using chatbots to analyze and summarize threat signals and malicious activity.
The key differentiator of Elastic AI Assistant for Observability is that it is designed primarily to support SREs.
With the global generative AI market expected to grow from $43.87 billion in 2023 to $667.96 billion in 2030, we can expect to see more vendors experiment with LLM-driven solutions to offer new capabilities to SREs and DevOps engineers.
The biggest takeaway from Elastic AI Assistant for Observability’s launch is that generative AI can be used to support almost any professional who’s trying to interpret data signals from disparate sources at pace.
However, the key to getting meaningful results is providing these automated solutions with access to the proprietary data they need to accurately identify, streamline, or fix operational issues that are specific to the observed environment.
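The grounding step the article describes, with ESRE feeding the model context from the user’s own environment and data, follows the retrieval-augmented generation pattern. A toy sketch of that pattern using naive keyword-overlap retrieval (illustrative only; all names are hypothetical, and ESRE’s actual relevance scoring is semantic and far more sophisticated):

```python
def retrieve_context(query, documents, top_k=2):
    """Rank proprietary documents by naive keyword overlap with the
    query and return the top_k most relevant, ready to prepend to an
    LLM prompt. Real systems use vector/semantic relevance, not this."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

# Hypothetical proprietary runbooks -- the "unique IT environment" data.
runbooks = [
    "checkout service timeout restart the payment gateway pod",
    "auth connection reset rotate the TLS certificate",
    "general onboarding guide for new engineers",
]

context = retrieve_context("checkout timeout spike in payments", runbooks)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: how do I fix this?"
```

The point of the sketch is the shape of the pipeline, not the scoring: an organization that retrieves from its own runbooks gets answers about its own payment gateway, while one retrieving from generic documentation gets generic advice.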
After all, if you collect generalist data, you’ll generate generic insights.