This Is How Confidential Computing Will Drive Generative AI Adoption

KEY TAKEAWAYS

Organizations experimenting with generative AI in the enterprise may want to consider using confidential computing to mitigate the risk of data leakage.

Generative AI adoption is on the rise. Yet, many organizations are wary of using the technology to generate insights due to concerns over data privacy.

One of the most notorious examples occurred back in May 2023, when Samsung decided to restrict the use of ChatGPT after an employee shared sensitive internal data with the platform.

Samsung isn’t the only company that has taken action to limit generative AI use internally; giants such as JPMorgan, Apple, and Goldman Sachs have all decided to ban tools like ChatGPT over concerns that data leakage could lead to data protection violations.

However, emerging technologies like confidential computing have the potential to increase confidence in the privacy of generative AI solutions by enabling organizations to generate insights with large language models (LLMs) without exposing sensitive data to unauthorized third parties.

What Is Confidential Computing?

Confidential computing is an approach in which an organization runs computational workloads inside a hardware-based CPU enclave called a Trusted Execution Environment (TEE). The TEE provides an isolated, encrypted environment where data and code remain encrypted even while they are being processed, that is, while in use.

In many enterprise environments, organizations encrypt data in transit and at rest. However, that data must still be decrypted in memory before an application can process it, and decrypting it in this way leaves it exposed to unauthorized third parties, including cloud service providers.
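To make that gap concrete, here is a minimal Python sketch (using the open-source cryptography library) of conventional encryption at rest: the data is safe while stored or in transit, but it has to be decrypted into ordinary host memory, visible to the operating system, hypervisor, or cloud operator, before an application can actually work with it. The record contents are invented purely for illustration.

from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

# Data encrypted "at rest": safe while it sits in storage or moves over the wire.
record = b"customer_id=4821,balance=10250.33"
encrypted_at_rest = cipher.encrypt(record)

# To actually use the data (e.g., read the balance), the application must
# decrypt it, at which point the plaintext sits in ordinary memory where the
# host OS, hypervisor, or cloud operator could observe it.
plaintext = cipher.decrypt(encrypted_at_rest)
balance = float(plaintext.split(b"balance=")[1].decode())
print(f"Processed balance: {balance}")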

Confidential computing addresses this limitation by allowing processing to take place within a secure TEE, so that trusted applications can access data and code that cannot be viewed, altered, or removed by unauthorized entities.

While the confidential computing market is in its infancy, the technology is growing fast. MarketsandMarkets estimates that the market will grow from $5.3 billion in 2023 to $59.4 billion by 2028, and vendors including Fortanix, Microsoft, Google Cloud, IBM, Nvidia, and Intel are experimenting with the technology’s capabilities.

Increasing Confidence in Generative AI

The main value confidential computing offers organizations using generative AI is its ability to shield both the data being processed and how it is processed, an approach often described as confidential AI.

Within a TEE, AI model training, fine-tuning, and inference tasks can all take place inside a secure perimeter, ensuring that personally identifiable information (PII), customer data, intellectual property, and regulated data remain protected from cloud providers and other third parties.

As a result, confidential computing enables data-driven organizations to protect and refine AI training data on-premises, in the cloud, and at the network’s edge, with minimal risk of external exposure.

Rishabh Poddar, CEO and co-founder of confidential computing provider Opaque Systems, told Techopedia: “Confidential computing can give companies security and peace of mind when adopting generative AI.”

To minimize the likelihood of data breaches when using such new tools, confidential computing ensures data remains encrypted end-to-end during model training, fine-tuning, and inference, thus guaranteeing that privacy is preserved.
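As a rough illustration of that end-to-end flow, the hypothetical Python sketch below shows a prompt that is encrypted on the client, travels only as ciphertext, and is decrypted solely inside the enclave-hosted model service. The run_model_inside_tee() helper and the pre-shared key are placeholders for illustration, not any vendor’s actual API; in practice, the key would be released to the enclave only after attestation.

from cryptography.fernet import Fernet

# Key provisioned to the enclave; simplified here to a pre-shared symmetric key
# rather than one released after an attestation handshake.
enclave_key = Fernet.generate_key()
client_cipher = Fernet(enclave_key)


def run_model_inside_tee(encrypted_prompt: bytes) -> bytes:
    # Hypothetical stand-in for an enclave-hosted LLM service: the prompt is
    # only ever decrypted inside the TEE's isolated memory.
    tee_cipher = Fernet(enclave_key)
    prompt = tee_cipher.decrypt(encrypted_prompt)
    completion = b"[model output for] " + prompt  # placeholder for the actual model call
    return tee_cipher.encrypt(completion)


# Client side: the sensitive prompt never leaves the client unencrypted.
encrypted_prompt = client_cipher.encrypt(b"Summarize patient record 4821")
encrypted_reply = run_model_inside_tee(encrypted_prompt)
print(client_cipher.decrypt(encrypted_reply).decode())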

This level of privacy during AI inference tasks is particularly important for organizations in regulated industries, such as financial institutions, healthcare providers, and public sector departments, that are subject to strict data protection regulations.

Verifying Compliance with Confidential Computing

In addition to preventing data leakage, confidential computing can also be used to verify the authenticity of the data used to train an AI solution.

Ayal Yogev, co-founder and CEO of confidential computing vendor Anjuna, explained:

On top of making sure the data stays private and secure within the models, the main benefit to LLM integrity comes from the attestation part of confidential computing. Confidential computing can help validate that the models themselves, as well as the training data, have not been tampered with.

More specifically, confidential computing solutions provide organizations with proof of processing, which can offer evidence of model authenticity by showing when and where data was generated. This gives organizations the ability to ensure that models are used only with authorized data and by authorized users.
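The sketch below illustrates the underlying verification logic in Python: a hash, or “measurement,” of the model and its training data is signed by a key the verifier trusts, so tampering with either artifact changes the measurement and fails the check. Real TEE attestation relies on hardware-rooted keys and vendor-specific report formats; the measure() helper and the Ed25519 key here are simplified stand-ins, not an actual attestation protocol.

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def measure(model_bytes: bytes, training_data: bytes) -> bytes:
    # Combine digests of the model and its training data into one measurement.
    digest = hashlib.sha256()
    digest.update(hashlib.sha256(model_bytes).digest())
    digest.update(hashlib.sha256(training_data).digest())
    return digest.digest()


# Inside the enclave: sign the measurement with an attestation key
# (a simplified stand-in for a hardware-rooted key).
attestation_key = ed25519.Ed25519PrivateKey.generate()
model, data = b"model-weights-v1", b"approved-training-set"
report = measure(model, data)
signature = attestation_key.sign(report)

# Outside the enclave: the relying party checks the signature and compares
# the reported measurement against the artifacts it expects.
expected = measure(b"model-weights-v1", b"approved-training-set")
try:
    attestation_key.public_key().verify(signature, report)
    print("Attested" if report == expected else "Measurement mismatch")
except InvalidSignature:
    print("Attestation report signature is invalid")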

For organizations subject to data protection requirements under frameworks including the GDPR, CPRA, and HIPAA, this kind of due diligence on model use is becoming increasingly important in driving adoption of the technology.

The Bottom Line

Organizations that want to experiment with generative AI need assurances that neither the models they train nor the information they submit to LLMs is exposed to unauthorized users.

Ultimately, confidential computing provides a solution for assuring the integrity and security of models under the protection of in-use encryption so that organizations can experiment with generative AI at the network’s edge without putting PII or intellectual property at risk.

Tim Keary

Since January 2017, Tim Keary has been a freelance technology writer and reporter covering enterprise technology and cybersecurity.