
The Confidential Computing Consortium (CCC) has recently submitted formal responses to two major government consultations on AI security: the US National Institute of Standards and Technology (NIST) Request for Information on the secure development and deployment of AI agent systems (NIST-2025-0035), and the UK Government’s Department for Science, Innovation and Technology (DSIT) Call for Information on Secure AI Infrastructure. Taken together, these responses make a consistent and compelling case: as AI systems become central to national security, public services, and economic competitiveness, hardware-enforced trust must become a foundational layer of AI infrastructure.
## A Shared Threat Landscape
Both responses begin from the same premise: AI agent systems face a category of risk that conventional cybersecurity tools were not designed to address. The threats are not merely traditional data breaches; they target the unique characteristics of AI itself.
Key risks highlighted across both submissions include:
- Model weight theft, where proprietary model weights can be exfiltrated through API abuse or direct memory dumps by malicious insiders or compromised infrastructure
- The infrastructure trust gap, where standard cloud security protects against external attackers but leaves model weights and inference data accessible to the cloud provider’s hypervisor or privileged administrators
- Memory scraping and cold boot attacks, which can extract sensitive context, credentials, or cryptographic material from unprotected RAM
- Memory poisoning, where adversarial content injected into an agent’s long-term memory is triggered later, with the temporal gap between injection and execution making detection very difficult
- MCP-specific threats (highlighted in the NIST response), including shadow servers, tool poisoning, and confusion attacks that undermine the integrity of agent-to-tool communication
- “Confused deputy” attacks in multi-agent systems, where a compromised agent manipulates another into sharing sensitive data without adequate authentication
## Why Confidential Computing Is the Answer
The central recommendation of both responses is that protecting AI systems requires moving beyond perimeter-based controls toward architectures rooted in hardware-enforced trust; specifically, attested, hardware-based Trusted Execution Environments (TEEs).
Confidential Computing addresses several of these risks directly:
- Data-in-use protection encrypts agent memory and model weights during processing, ensuring that even cloud providers and privileged infrastructure operators cannot access sensitive workloads
- Remote attestation cryptographically verifies that the correct, unmodified agent code is running on a genuine, trusted platform before any secrets are released, providing technical guarantees rather than mere contractual assurances
- Cryptographically assured workload identity gives each agent an ephemeral identity rooted in hardware attestation, replacing static API keys with dynamic, verifiable credentials
- Key Broker Services release decryption keys and credentials only after successful attestation, meaning that if the environment doesn’t match an approved policy, keys are simply not released
- Confidential Inference (highlighted in the UK response) keeps user prompts encrypted in transit, decrypting them only inside an attested TEE, preventing cloud operators or intermediaries from accessing prompt contents
The UK response also draws attention to the need to extend these protections to accelerators such as GPUs, which in multi-tenant environments represent a significant attack vector, and to future-proof the transport layer against “Store Now, Decrypt Later” attacks using Post-Quantum Cryptography (PQC).
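The defence against "Store Now, Decrypt Later" is typically a hybrid construction: the session key is derived from both a classical and a post-quantum shared secret, so recorded traffic stays safe unless both schemes are broken. The sketch below shows only the key-combination step, with SHA-256 standing in for a proper KDF and the input secrets assumed to come from real key-exchange mechanisms:

```python
import hashlib


def combine_secrets(classical_ss: bytes, pq_ss: bytes, context: bytes) -> bytes:
    """Derive a session key that depends on BOTH shared secrets.

    classical_ss: secret from a classical exchange (e.g. ECDH) -- assumed input
    pq_ss:        secret from a post-quantum KEM              -- assumed input
    context:      protocol/session binding string
    """
    # SHA-256 is a stand-in for a real KDF such as HKDF; the property that
    # matters is that breaking only one of the two inputs is not enough.
    return hashlib.sha256(context + classical_ss + pq_ss).digest()


k1 = combine_secrets(b"c" * 32, b"q" * 32, b"session-1")
k2 = combine_secrets(b"c" * 32, b"q" * 32, b"session-1")
assert k1 == k2 and len(k1) == 32  # both sides derive the same 32-byte key
```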
## Looking Ahead: Agentic Zero Trust and Standardisation
As AI agents become more capable and autonomous, potentially holding wallet keys, signing transactions, and communicating with other agents, the CCC’s responses call for a shift toward what we describe as Agentic Zero Trust: a model where every inter-agent interaction is cryptographically authenticated, and where an agent’s identity is bound to its code measurement rather than a pre-shared secret.
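Binding identity to a code measurement rather than a pre-shared secret can be sketched as follows. This is a conceptual illustration only: the HMAC stands in for a hardware-rooted attestation signature, and every name (`measure`, `issue_credential`, `verify_peer`) is hypothetical. The verifying agent accepts a peer only if the credential is bound to a fresh nonce and the measured code is on its approved list.

```python
import hashlib
import hmac


def measure(code_image: bytes) -> bytes:
    """Stand-in for the TEE's measurement of the agent's code."""
    return hashlib.sha256(code_image).digest()


def issue_credential(attestation_key: bytes, measurement: bytes, nonce: bytes) -> bytes:
    # An attestation service binds the measurement to a fresh nonce.
    # HMAC stands in for a hardware-rooted signature here.
    return hmac.new(attestation_key, measurement + nonce, hashlib.sha256).digest()


def verify_peer(attestation_key: bytes, credential: bytes,
                claimed_measurement: bytes, nonce: bytes,
                approved: set[bytes]) -> bool:
    """Accept a peer agent only if its credential is bound to approved code."""
    expected = hmac.new(attestation_key, claimed_measurement + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, credential) and claimed_measurement in approved


ak = b"attestation-service-key"      # assumption for illustration
m = measure(b"trusted-agent-code")
cred = issue_credential(ak, m, b"nonce-1")
assert verify_peer(ak, cred, m, b"nonce-1", {m})       # fresh, approved: accept
assert not verify_peer(ak, cred, m, b"nonce-2", {m})   # replayed credential: reject
```

Because the credential is derived from the code measurement and a per-interaction nonce, a compromised agent cannot impersonate an approved one with a stolen static key, which is the core of the "confused deputy" mitigation.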
Both responses also call on governments to take an active role in standardisation. The NIST response urges the US to define clear “Confidential AI” assurance levels so that AI providers can credibly demonstrate they are technically unable to access user data. The UK response similarly highlights the need to standardise attestation reports across hardware vendors – AMD, Intel, Arm, and NVIDIA – to enable a unified root of trust across the UK AI sector.
On the supply chain side, the NIST response raises a specific concern: MCP authentication is currently optional by design, and package signing is inconsistently required, creating supply-chain risk every time an agent starts. Both responses make clear that governance assurances are not a substitute for cryptographic guarantees.
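One way to close that gap is to refuse startup unless every loaded package matches a signed manifest. The sketch below is hypothetical: HMAC stands in for a real publisher signature scheme, and the package names and key are invented for illustration.

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-signing-key"  # assumption: stand-in for a real keypair


def sign_manifest(manifest: dict[str, bytes]) -> bytes:
    """Sign a manifest mapping package names to their content hashes."""
    digest = hashlib.sha256()
    for name in sorted(manifest):  # canonical ordering before signing
        digest.update(name.encode() + manifest[name])
    return hmac.new(SIGNING_KEY, digest.digest(), hashlib.sha256).digest()


def verify_at_startup(packages: dict[str, bytes], signature: bytes) -> bool:
    """Refuse to start if any package hash deviates from the signed manifest."""
    return hmac.compare_digest(sign_manifest(packages), signature)


manifest = {"search-tool": hashlib.sha256(b"search-tool v1").digest()}
sig = sign_manifest(manifest)
assert verify_at_startup(manifest, sig)                  # untampered: start
tampered = {"search-tool": hashlib.sha256(b"malicious").digest()}
assert not verify_at_startup(tampered, sig)              # tampered: refuse
```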
## Read the Full Responses
These are just highlights from two detailed submissions that together cover threat modelling, technical controls, patching challenges for stateful agents in TEEs, monitoring constraints imposed by Confidential Computing, and much more.
Read the CCC’s full response to NIST-2025-0035 →
Read the CCC’s full response to the UK Government’s Secure AI Infrastructure Call for Information →







