Protecting Agentic AI Workloads with Confidential Computing

January 20, 2026

By Mike Bursell, Executive Director, Confidential Computing Consortium


TL;DR

Left unprotected, Agentic AI allows unauthorised and malicious people and systems with access to the machines on which Agents run to tamper with the Agents, their execution and their data.  Confidential Computing isolates workloads such as Agents, protecting them.  It also provides other capabilities that can underpin Agentic AI security.

Introduction

The growth in generative AI has recently led to sufficient capabilities for a new set of AI applications: Agentic AI.  One way to characterise generative AI is by its ability to generate information – video, audio, text or numeric data – in response to a query from one or more human actors.  Agentic AI, on the other hand, is designed to operate (semi-)autonomously, performing multiple tasks, possibly including branching and creating new Agents, in order to fulfil a request.  Agentic AI instances may query other systems, including humans, non-AI applications, generative AI and other Agentic AI entities.

Confidential Computing is defined by the Confidential Computing Consortium (CCC) as “protection of data in use by performing computation in a hardware-based, attested Trusted Execution Environment”.

This article considers some of the key security requirements for Agentic AI and how Confidential Computing may be used to meet them.  It is intended to encourage interest in the subject and prompt technical conversations between practitioners in these and related fields.

The security problem

Agentic AI entities (“Agents”) will often be operating in environments that are not owned or operated by the owner of the Agent itself.  Even where the environment is owned by the company owning the Agent (such as a private cloud or data centre), the people who run the infrastructure are likely to have different responsibilities and authorisations from those associated with or delegated to the Agent.  A system admin is not likely to have the same authority as the CFO, and therefore not the same authority as the CFO’s Agent, for example.  The problem is that when you run any application – including an Agent – on a machine which you do not completely control, that application is at risk from people and applications with sufficient permissions, who can read or change data within the application, or even the application itself.  This is simply a function of how standard computing works, including cloud computing and virtualisation, whether with containers or virtual machines: with standard computing, if you have control over the infrastructure, then you have control over everything running on it.  In this model, every Agent with any significant capabilities or access to sensitive data would need to run on separate servers, owned, controlled and operated by the Agent’s owner.

This causes a significant problem for Agents.  Most Agents, by their very nature, need two specific things: an identity, and a way to authorise or approve actions.  The latter may well be associated with the identity, but need not be.  The standard way to provide an identity within computing is with a unique identifier such as a UUID, and the standard way to provide capabilities for authorisation is with a public-private cryptographic key pair, where the public part is published and the private part is kept confidential.  Both of these are at risk, and fundamentally insecure, for Agents running on standard computing infrastructure.
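As a concrete illustration, here is a minimal sketch of that identity-plus-key pattern in Python, using the widely deployed cryptography library.  The AgentIdentity class, its field names and the approve_action method are hypothetical, invented purely for illustration; real Agent frameworks will structure this differently.

```python
# Minimal, illustrative sketch: a UUID for identity and an Ed25519
# key pair for authorising actions.  All names here are hypothetical.
import uuid
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class AgentIdentity:
    def __init__(self):
        self.agent_id = uuid.uuid4()                      # public identifier
        self._private_key = Ed25519PrivateKey.generate()  # must stay confidential
        self.public_key = self._private_key.public_key()  # safe to publish

    def approve_action(self, action: bytes) -> bytes:
        """Sign an action so third parties can verify this Agent approved it."""
        return self._private_key.sign(action)

agent = AgentIdentity()
action = b"book flight LHR->SFO, budget 900 USD"
signature = agent.approve_action(action)
# Anyone holding the public key can check the approval; verify() raises
# InvalidSignature if either the action or the signature has been altered.
agent.public_key.verify(signature, action)
```

The point of the sketch is what it cannot defend against: on standard infrastructure, anyone with sufficient privileges on the host can read agent_id or _private_key straight out of the Agent’s memory, or overwrite them.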

In a world where you can have no assurance that the Agent you think you are talking to is actually the correct one – because someone may have changed its ID – you can have no trust in that Agent.  Equally, what if somebody steals the private key from your Agent?  In this case, the thief gains all the capabilities you delegated to your Agent, which could include anything from access to private files to the ability to charge unlimited transactions to your or your company’s credit card.

Isolation requirements

In order to operate safely and as expected, Agents need to be isolated from the infrastructure on which they are running, breaking the standard model of computing where whoever controls the infrastructure controls the workloads.  This isolation needs to be enforced in at least two ways: their identities need to be integrity protected, and their capabilities must be confidentiality protected.  In fact, there are typically other assurances required: protection of the integrity of the Agent itself (to stop someone changing the “mission” of the Agent) and protection of the confidentiality and integrity of most, if not all, of the data held by the Agent (if I have used the Agent to book flights, for example, I want to know that the itinerary it returns to me is correct and that no unauthorised parties can see it).
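To make the flight-booking example concrete, here is a sketch of what integrity and confidentiality protection of an Agent’s returned data can look like.  The key handling is deliberately simplified and the names are invented for illustration; crucially, in a real deployment the keys themselves need the Confidential Computing protections discussed in the next section, or the whole scheme collapses.

```python
# Illustrative only: sign the itinerary for integrity, encrypt it for
# confidentiality.  Key management is simplified for the sketch.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

agent_signing_key = Ed25519PrivateKey.generate()  # held by the Agent
shared_secret = Fernet.generate_key()             # shared with the owner only
channel = Fernet(shared_secret)

itinerary = b"LHR -> SFO, 2026-02-03, seat 14A"

signature = agent_signing_key.sign(itinerary)        # integrity: detect tampering
ciphertext = channel.encrypt(signature + itinerary)  # confidentiality

# Owner side: decrypt, split and verify before trusting the result.
plaintext = channel.decrypt(ciphertext)
sig, data = plaintext[:64], plaintext[64:]        # Ed25519 signatures are 64 bytes
agent_signing_key.public_key().verify(sig, data)  # raises if altered in transit
print(data.decode())
```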

These requirements are actually very similar to those for standard applications in highly regulated industries where data privacy is a concern, such as healthcare, finance, telecommunications, pharmaceutical research and government.  In these contexts, protecting both the integrity and the confidentiality of data is a key requirement, often enforced by regulations.  Where Agentic AI overlaps with these sectors, we can expect to see these regulations being applied directly.  It is also likely that legislation and regulations will be created to apply specifically to Agents, simply because they will be looking after and manipulating sensitive personal and business data.

Confidential Computing to protect Agentic AI

Confidential Computing is a set of chip-based technologies – whether on CPUs, GPUs or beyond – that are widely available both in the cloud and in server-grade hardware, to organisations wishing to build private clouds and data centres and even to individual consumers.  It provides exactly the protections required – integrity and confidentiality of data and applications – using hardware-based isolation, rooted in silicon.

Workloads, including Agents, are protected in use – while they are executing – when run using Confidential Computing: the memory they are using is protected from tampering and viewing by all other entities with access to the machine, including administrators, the kernel and the hypervisor.  Additionally, Confidential Computing provides attestation: measurements of applications and data can be checked by third parties to verify that these protections are in place and that the workloads are as expected.  It also provides the underpinning technologies required to allow identity to be created and managed.
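A heavily simplified sketch of that attestation flow appears below.  Real attestation evidence is vendor-specific (SGX quotes, or SEV-SNP and TDX reports, for example) and is normally checked through the hardware vendor’s certificate chain, often by a dedicated verification service; the stand-in hardware key and report format here are assumptions made purely for illustration.

```python
# Heavily simplified attestation sketch: the verifier checks that (a) the
# evidence was signed by hardware it trusts and (b) the reported workload
# measurement matches a known-good reference value.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Reference value: hash of the Agent binary the owner expects to be running.
expected_measurement = hashlib.sha256(b"agent-binary-v1.4").digest()

# Inside the TEE: the hardware measures the workload and signs the result.
# (A real attestation key is rooted in the silicon; we generate one here.)
hw_key = Ed25519PrivateKey.generate()
report = hashlib.sha256(b"agent-binary-v1.4").digest()
report_sig = hw_key.sign(report)

# Verifier side (the Agent's owner, or another Agent): trust the hardware
# public key, then compare the measurement against the reference value.
hw_public_key = hw_key.public_key()

def verify_attestation(report: bytes, sig: bytes) -> bool:
    try:
        hw_public_key.verify(sig, report)    # evidence genuinely from the TEE
    except InvalidSignature:
        return False
    return report == expected_measurement    # workload is the one expected

assert verify_attestation(report, report_sig)
```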

This is a perfect fit for Agentic AI.  Confidential Computing provides solutions to the problems described above with protections that are available now, allowing owners to trust their Agents, and allowing those interacting with them to be sure that the Agents have not been compromised and that their data has not been exfiltrated.  There are also opportunities for commercial providers of Agentic AI environments to build and sell services that owners of Agents can prove are safe for their Agents: owners do not need to trust these commercial providers, only the Confidential Computing infrastructure.

Conclusion

Confidential Computing allows Agentic AI to flourish without requiring infrastructure that is itself trusted: Agents from multiple owners can execute and interact on the same infrastructure.  Confidential Computing’s remote attestation also allows identity to be established and proved both to owners of Agents and to other Agents and systems.

The Confidential Computing Consortium

The Confidential Computing Consortium is part of the Linux Foundation and the industry body dedicated to defining and accelerating the adoption of confidential computing.  Members include businesses, research organisations and not-for-profits across the ecosystem who work on technical and outreach projects to further the Consortium’s goals.
