CCC Outlook for 2026: A Message from Executive Director Mike Bursell


Introduction

2026 feels like an important year for Confidential Computing – one of Gartner’s top strategic technologies for the year.  A number of trends and developments are converging, suggesting major opportunities for the industry.  These include:

  • Availability of hardware – CPUs and GPUs are now well-established in hyperscalers and data centres
  • Visibility – the industry seems finally to be paying attention to the capabilities that Confidential Computing provides
  • Growing interest from Regulators around data-in-use protection
  • AI – realisation that AI needs protection
  • Digital Sovereignty – growing concerns about protecting data, applications and AI/ML models from interference from non-local actors, including governments
  • Distributed trust models, including Web3

We are also seeing, as a Consortium, increased interest from the demand-side rather than the supply-side.  Of course, defining “demand-side” can be quite tricky: to a chip vendor, a hyperscaler is demand-side, whereas to a hyperscaler, the term may be better applied to a bank, which, in turn, considers demand to rest with its business customers, who themselves have consumer customers!  Most important, from the CCC’s perspective, is that there is a developing “pull” for Confidential Computing, and we must position ourselves to service and encourage this.

In December, the Governing Board agreed a budget which aims to balance revenue against spending in 2026 – over the past few years, we’ve been spending into our reserves, which had grown quite large, in part because of reduced spending over the Covid years.  One of the impacts is on events, which the Outreach committee had already identified as an area of high spend but where the ability to track return-on-investment was low.  As a result, we will carefully target which events we sponsor and get involved with this year, in particular considering how best to address the trends noted above and drive demand-side interest.

In another move to address and develop demand-side interest in Confidential Computing, the Governing Board has agreed to constitute a new Special Interest Group around Regulatory and Standards bodies.  This will concentrate on non-technical contacts and conversations with these bodies, leveraging expertise and links within Member organisations to influence work where Confidential Computing could and should be explicitly noted, recommended or even mandated.

Focus Areas

I expect to see three main areas of focus in the work that the Consortium undertakes during 2026.  In all three cases, there is a need for general evangelisation of Confidential Computing as a relevant technology and also for engagement with appropriate bodies and organisations.  I’m also sure there will be others that I’ve failed to identify, or whose importance has not yet registered.

Regulators

Government-backed regulatory bodies provide important checks and balances across many sectors and many jurisdictions.  They also often track emerging requirements and provide guidance on best practices that are expected to become mandated in the future.  An increasing realisation of the importance of protecting citizens’ and customers’ data in all states – in transit, at rest and in use – allows the CCC to position itself as a trusted advisor to bodies considering how best to provide guidance and, ultimately, regulations around using Confidential Computing as a technology to improve the protection of data, with its unique combination of performance, confidentiality and integrity.  

Given the growth in regulations around AI and digital sovereignty, the other two areas identified for focus, we can also expect to see overlap with activities in these contexts.

AI and Agentic AI

The last year or two have seen a growing realisation of how important security is for AI, with proof of provenance often as important as the confidentiality and integrity of the systems that organisations are building and hosting – not to mention those with which they are interacting.  The past few months, however, have seen the promise of Agentic AI becoming a major force in our day-to-day lives, with a rapid ramping up of technical work around how such agents will work.  All Agentic AI requires identity and, like human identity, this needs to be protected.  Confidential Computing provides opportunities to safeguard Agentic AI identity cryptographically, isolating the agent from its environment and attackers.

Digital Sovereignty

As the global political climate has evolved, governments have realised that their own, their businesses’ and their citizens’ applications, data and, ultimately, livelihoods are intimately bound up with the interests of the organisations hosting and storing that information and those applications.  As a result, there has been a push to bring the hosting and processing of that information under the control of organisations that are locally managed or governed.  This is not just about protecting data, but also key intellectual property, including AI/ML models.  Given the existing geographic distribution and deployment of computing resources, moving all processing within national boundaries is often challenging and may not even be sufficient, depending on the entities operating the computing resources.  Confidential Computing offers technical controls that allow for much greater assurance and transparency around digital sovereignty by isolating the processing of data and applications from the operating environment in which it takes place.

Attestation

While confidentiality and integrity are the properties most users initially associate with Confidential Computing, attestation is often where the long-term value is realised.  The Consortium already does a great deal of technical work on attestation, including engaging with standards bodies like the IETF on protocols and primitives.  We also have a number of open source projects which focus on or revolve around attestation.

There continues to be a need for work around business models for attestation verification services (AVSs).  This includes consideration of revenue and charging models, policy management and devolution, trust transfer and also what types of bodies should be running an AVS in the first place: not-for-profits, silicon vendors, CSPs, ISVs, banks, governments, regulators or organisations themselves.  We can expect to see more conversation around these topics as we go through 2026.

Members

The beginning of 2026 sees the CCC with a healthy set of members across multiple geographic areas, of various sizes and in different industries and sectors.  As Confidential Computing grows through the year, we need not only to meet the varying needs of existing members, but also to show and grow the benefits of membership to attract new members, so that we can work to improve industry knowledge and adoption of Confidential Computing.  This means looking at new sectors (e.g. AI and Web3), crafting new messaging and materials (e.g. for regulators and governments) and adapting our messaging for those on the demand-side who need to find out more about the technologies in ways that suit them.

This all requires engagement by existing members, and I plan to find ways for members, both new and established, to engage in our activities in ways that are aligned with their interests and priorities, amplifying their efforts through our communal work.

Conclusion

2026 comes with many opportunities for Confidential Computing, and for the CCC to consolidate and grow our place in existing and new industries as a trusted and maturing technology.  The number of companies already using Confidential Computing is greater than most people realise, as evidenced by IDC’s report Unlocking the Future of Data Security: Confidential Computing as a Strategic Imperative (available on our White Papers & Reports page).  We at the Confidential Computing Consortium need to spread the news, while continuing to make the technologies as attractive and easy to use as possible and providing the primitives, protocols and open source projects that ease and encourage adoption.  I look forward to working with you and your colleagues as we tackle these tasks over the next twelve months.

New Study Finds Confidential Computing Emerging as a Strategic Imperative for Secure AI and Data Collaboration


Research commissioned by the Confidential Computing Consortium highlights accelerating adoption driven by AI innovation, compliance standards, and data sovereignty

Summary

  • The Confidential Computing Consortium (CCC), a project community at the Linux Foundation, announced new IDC research, “Unlocking the Future of Data Security: Confidential Computing as a Strategic Imperative.”
  • The global survey of 600+ IT leaders across 15 industries finds that 75% of organizations are adopting Confidential Computing, signaling its shift from niche to mainstream.
  • By adding protection of data in-use, Confidential Computing is emerging as a core enabler of secure, data-centric innovation, delivering measurable gains in data integrity, confidentiality and compliance.
  • Despite strong momentum, skills gaps, validation challenges and interoperability barriers persist, highlighting the need for open standards and industry collaboration led by the CCC.

SAN FRANCISCO, Dec 3, 2025 – The Confidential Computing Consortium (CCC), a project community at the Linux Foundation dedicated to defining and accelerating the adoption of Confidential Computing, today announced findings from a new survey conducted by IDC. Based on insights from more than 600 global IT leaders across 15 industries, the study, “Unlocking the Future of Data Security: Confidential Computing as a Strategic Imperative,” reveals that Confidential Computing has become a foundational enabler of modern data-centric innovation, but implementation complexities are hindering widespread adoption.

“Confidential Computing has grown from a niche concept into a vital strategy for data security and trusted AI innovation,” said Nelly Porter, governing board chair, Confidential Computing Consortium. “As international security and compliance regulations tighten, organizations must invest in education and interoperability to meet heightened data confidentiality, integrity and availability standards – and enable secure AI adoption across sensitive environments.”

Major benefits push Confidential Computing into the mainstream

Awareness and adoption of Confidential Computing continue to grow, expanding its footprint into more industries and applications. According to IDC, “this momentum reflects a broader shift toward securing data in use, driven by the need to mitigate urgent threats and enable secure collaboration in environments where sensitive data is routinely handled.” The study finds:

  • 75% of organizations are adopting Confidential Computing; 57% have started piloting/testing, joining the 18% of organizations already in production
  • 88% of respondents report improved data integrity as the primary benefit of Confidential Computing, followed by confidentiality with proven technical assurances (73%) and better regulatory compliance (68%)
  • Confidential Computing enables top business outcomes, including accelerated innovation, enhanced regulatory compliance, increased cost efficiency and more. Confidential Computing combined with AI-driven analytics accelerates innovation by enabling secure model training, inference and AI agents on sensitive data.
  • Confidential Computing stands out as a practical and scalable alternative, especially when compared with more complex or resource-intensive methods. It also applies to standard computing workloads without requiring applications or algorithms to be rewritten.

Adoption drivers shift gears

As security, compliance, and innovation imperatives converge, Confidential Computing adoption is being fueled both as a response to external regulations and as an enabler of internal business transformation goals. The study finds:

  • Regulatory frameworks like the Digital Operational Resilience Act (DORA) are driving adoption as 77% of organizations are more likely to consider Confidential Computing due to DORA’s specific requirement to protect data in-use.
  • Workload security/external threats (56%), Personally Identifiable Information (PII) protection (51%), and compliance (50%) are the top drivers for adoption, but new use cases – especially in AI and cloud – are expanding relevance, with organizations leveraging Confidential Computing to train AI models, run inference, and deploy AI agents on regulated datasets without compromising privacy
  • Public cloud users are the most likely to implement Confidential Computing technology (71%), followed by hybrid/distributed cloud users (45%), with the acceleration in these environments driven by the need for scalable security and compliance with evolving regulatory requirements.

Geographic and industry differences highlight early leaders and emerging priorities in Confidential Computing

  • Canada and the United States reported the highest percentage of Confidential Computing services in full production, at 26% and 24%, respectively, followed by China and the United Kingdom, both at 20%
  • Greater protection from outside attackers is the highest-priority use case in the United States and the United Kingdom, whereas Canada, France, Germany and China prioritized protection of personally identifiable information, reflecting more stringent privacy regulations
  • The financial services industry has the highest percentage of full-production deployments (37%), followed by healthcare (29%) and government (21%)
  • Healthcare respondents place a significantly higher priority on privacy-preserving data collaborations with multiple parties (78%) than financial services (61%) or government (26%), reflecting the medical sector’s need to safeguard highly regulated data and enable AI-powered diagnostics through privacy-preserving collaborations.

Addressing barriers and accelerating readiness

Despite strong momentum, the IDC study identifies several adoption challenges, including attestation validation (84%), the misconception of Confidential Computing as a niche technology (77%), and a skills gap (75%), which must be addressed through industry collaboration, education and standardization.

To address these challenges and realize the full benefits of Confidential Computing, IDC recommends that organizations: 

  • Start with pilot initiatives to demonstrate measurable value
  • Adopt open standards and vendor-agnostic frameworks 
  • Invest in third-party attestation and interoperability testing 
  • Engage in industry-led initiatives such as those led by the CCC to align on technical assurance and trust frameworks

The next era of Confidential Computing unlocks new possibilities in identity, AI, multi-party collaboration and privacy-preserving analytics that were previously out of reach. For more information, read the full white paper here.

About Confidential Computing Consortium

The Confidential Computing Consortium (CCC) is a community focused on projects securing data in use and accelerating the adoption of Confidential Computing through open collaboration. CCC brings together hardware vendors, cloud providers, and software developers to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards. Learn more at www.confidentialcomputing.io

Welcoming Confident Security to the Confidential Computing Consortium


The Confidential Computing Consortium (CCC) is pleased to welcome Confident Security as a new Start-Up Member.

Confident Security is dedicated to making AI truly private, developing technologies and practices that protect data and models in use without compromising performance or accessibility. The company’s mission closely aligns with the CCC’s goal of fostering open collaboration and standards to enable secure computation across industries.

Advancing Confidential AI Through Open Collaboration

By joining the CCC, Confident Security aims to help shape and accelerate the development of Confidential AI standards, ensuring privacy, integrity, and trust in next-generation machine learning systems. The company is particularly focused on frameworks that safeguard sensitive data used in AI training and inference while maintaining openness and interoperability.

In parallel, Confident Security has been expanding its open source contributions, sharing tools that support secure, privacy-preserving communication and computation. Recent releases include:

  • ohttp: privacy-preserving HTTP relay implementation
  • bhttp: binary HTTP protocol support
  • go-nvtrust: Go bindings for NVIDIA Trust extensions
  • twoway: bidirectional secure communication library

Most recently, Confident Security launched its largest open source project to date – OpenPCC, an open framework for privacy-preserving encryption and AI data security. This release was accompanied by an Axios feature and a comprehensive whitepaper outlining the architecture and technical foundations behind the project. OpenPCC represents a major milestone in the company’s vision to make secure, confidential computation accessible to all.

These projects demonstrate Confident Security’s commitment to advancing open, secure innovation and complement the CCC’s mission to drive adoption of confidential computing technologies.

Strengthening a Shared Mission

“It’s our mission to make AI truly private and part of making that happen are standards and education,” said a spokesperson from Confident Security. “For that reason, we’re very excited to join CCC and to contribute and collaborate with all the members to increase adoption and use of Confidential Computing technologies.”

As a recent addition to the CCC, Confident Security aligns itself with a global collective of technology pioneers, researchers, and innovators who are collaboratively striving to establish data protection and trusted execution as fundamental pillars for confidential computing. 

Welcome Acompany to the Confidential Computing Consortium


We’re pleased to welcome Acompany as the newest General Member of the Confidential Computing Consortium (CCC)!

Acompany provides Confidential Computing as a strategic security foundation, powering secure data collaboration and advancing trusted AI. Its technology supports use cases ranging from data clean rooms for a Fortune Global 500 telecom company (KDDI) to optimized manufacturing processes and mission-critical national security initiatives.

Expanding the Global Market for Confidential Computing

Acompany joins the Consortium with a clear vision: to accelerate the global adoption of Confidential Computing through community collaboration and open innovation.

“At Acompany, our mission is ‘Trust. Data. AI.’ We are delighted to join the Confidential Computing Consortium and work with industry leaders to advance secure and trusted AI. Just as HTTPS became the default for the web, Confidential Computing will become the default for AI—and we are proud to help shape that future.”  — Ryosuke Takahashi, CEO, Acompany Co., Ltd.

The company brings proven experience to the community. Its solutions already power secure data clean rooms for KDDI and support ongoing Confidential Computing research in collaboration with Intel Labs. Acompany’s participation will strengthen collective efforts to make Confidential Computing the foundation of secure data processing and privacy-preserving AI worldwide.

Community Collaboration in Action

Acompany is also engaging with CCC-hosted projects, including the Gramine framework. The team has actively participated in GitHub discussions and leveraged Gramine in their own research initiatives, helping to expand the practical applications of Confidential Computing technologies. In addition, Acompany contributes to the Consortium’s global outreach by supporting the Japanese translation of CCC’s White Papers & Reports, helping to broaden access to the Consortium’s insights and advance the global understanding and adoption of Confidential Computing.

Designing AI Data Safeguards Together: A Look Back at CCC’s San Francisco Workshop


Last week in San Francisco, our community came together for a day that reminded us why collaborative learning and shared experimentation are so vital in the confidential computing ecosystem.

Attendees represented a wide range of perspectives – hyperscale cloud service providers, startups, think tanks, and industries ranging from pharmaceuticals to finance – all gathered to discuss Confidential Computing. The day was filled with lively technical exchanges and even laughter over afternoon bacon (yes, bacon is a snack); it was the kind of workshop that makes innovation feel personal.

A Lineup That Inspired Collaboration

We were honored to hear from a remarkable roster of speakers representing organizations at the heart of secure and privacy-preserving computing, including:

  • Britt Law
  • Duality
  • Google
  • Meta / WhatsApp
  • NVIDIA
  • Oblivious
  • ServiceNow with Opaque
  • TikTok
  • Tinfoil

Each talk brought a unique perspective, from real-world deployments delivering measurable business value to bold experiments shaping the future of data protection. The diversity of voices reflected the Consortium’s strength: bringing together researchers, builders, and adopters to turn ideas into impact. The versatility of Confidential Computing was evident from the wide range of solutions and use cases presented.

From Inspiration to Imagination

The day wrapped up with our “Shark Tank”-style challenge, where four teams competed to design new use cases for Confidential Computing. The creativity on display was impressive, but one concept stood out – a secure, verifiable proof of humanity – a vision that perfectly captured the balance of trust, technology, and imagination our community strives for.

Community at the Core

Behind every successful event is a network of people who make it happen. This workshop was no exception. We’re deeply grateful to Laura Martinez (NVIDIA), Mateus Guzzo (TikTok) and Mike Ferron-Jones (Intel) for their incredible leadership in bringing everything together. Their effort ensured that even the smallest logistical details (and photo moments) went smoothly.

Looking Ahead

As we look to future workshops, we’ll keep building spaces like this one: open, hands-on, and human-centered. Because progress happens when we learn together, challenge ideas together, and celebrate the journey as much as the technology itself.

(Photos courtesy of Mateus Guzzo)

Welcoming FuriosaAI to the Confidential Computing Consortium


The Confidential Computing Consortium (CCC) is pleased to welcome FuriosaAI as our newest startup member!

Furiosa is a semiconductor company pioneering a new type of AI chip for data centers and enterprise customers. With a mission to make AI computing sustainable and accessible to everyone, Furiosa offers a full hardware and software stack that enables powerful AI at scale.  Its proprietary Tensor Contraction Processor (TCP) architecture delivers world-class performance for advanced AI models, along with breakthrough energy efficiency compared to GPUs.

Furiosa’s flagship inference chip, RNGD (pronounced “renegade”), accelerates large language models and agentic AI workloads in any data center, including ones with power, cooling, and space constraints that make it difficult or impossible to deploy advanced GPUs. Currently sampling with Fortune 500 customers worldwide, RNGD is designed to power the next generation of AI applications with both high performance and significantly lower operating expenses.

Why Furiosa Joined CCC

As AI workloads scale, protecting data becomes increasingly critical. Furiosa’s energy-efficient chips enable businesses to run their models on-prem, so they can maintain complete control of their data and tooling. By joining the CCC, Furiosa is committed to collaborating with peers across the ecosystem to build a more secure and trustworthy AI infrastructure.

Furiosa hopes to contribute its expertise in hardware-accelerated inference while learning from the community’s efforts to standardize and advance confidential computing practices. The company is particularly interested in trusted execution environments and data security in AI workloads, and looks forward to identifying projects where its AI compute acceleration technology can add meaningful value.

In Their Own Words

“At Furiosa, we believe the future of AI depends on both performance and trust. By joining the Confidential Computing Consortium, we’re excited to collaborate with industry leaders to ensure AI innovation happens securely, sustainably, and at scale.”
Hanjoon Kim, Chief Technology Officer, FuriosaAI

We’re thrilled to have Furiosa join our community and look forward to the collaboration ahead. Welcome to the CCC!

Welcoming Phala to the Confidential Computing Consortium


We are pleased to welcome Phala as the newest General Member of the Confidential Computing Consortium (CCC)! We’re glad to have Phala on board and greatly appreciate their support for our growing community.

About Phala

Phala is a secure cloud platform that enables developers to run AI workloads inside hardware-protected Trusted Execution Environments (TEEs). With a strong commitment to open-source development, Phala provides confidential computing infrastructure that ensures privacy, verifiability, and scalability. Their mission is to make secure and trustworthy AI deployment practical and accessible for developers worldwide.

Why Phala Joined CCC

By joining the CCC, Phala is partnering with industry leaders to advance open standards for confidential computing. Phala brings unique expertise through real-world deployment of one of the largest TEE networks in operation today, contributing valuable experience to help accelerate adoption of confidential computing.

At the same time, Phala looks forward to learning from the broader CCC community and collaborating to strengthen interoperability across the ecosystem.

Contribution to CCC-Hosted Projects

Phala is also contributing directly to CCC-hosted projects. Its open-source project, dstack, is now part of the Linux Foundation under the CCC. dstack is a confidential computing framework that simplifies secure application deployment in TEEs, providing verifiable execution and zero-trust key management to developers.

In Their Own Words

“Confidential computing is essential to the future of secure and trustworthy AI. By joining the Confidential Computing Consortium, we are deepening our commitment to building open-source, hardware-backed infrastructure that empowers developers everywhere. We are excited to contribute our experience operating one of the largest TEE networks and to collaborate with the community on shaping the future of confidential computing.”
Marvin Tong, CEO, Phala Network

QLAD Joins the Confidential Computing Consortium


We’re pleased to welcome QLAD to the Confidential Computing Consortium (CCC), as the latest innovator helping define the next era of secure computing.

QLAD is a Kubernetes-native confidential computing platform that provides runtime protection by default, delivering pod-level Trusted Execution Environments (TEEs) and featuring encrypted Armored Containers™ for enhanced IP protection and post-quantum resilience. With seamless integration – no code rewrites or infrastructure changes required – QLAD enables scalable, production-ready confidentiality for modern workloads.

“At QLAD, we believe confidential computing should be simple. We’re building a platform that delivers drop-in protection for sensitive workloads, without code rewrites or infrastructure disruption. We’re proud to join the CCC community and contribute to the standards, tooling, and trust models that help organizations stay secure across clouds, edges, and collaborative environments.”
Jason Tuschen, CEO, QLAD

Confidential computing is undergoing a transformation, from experimental to essential. QLAD was founded to help accelerate that shift by making trusted execution practical and DevOps-friendly, especially for organizations deploying at scale across cloud, hybrid, and edge environments.

Why QLAD joined CCC

The CCC provides a powerful venue to drive industry alignment on standards, reference architectures, and transparent governance. QLAD sees the consortium as a collaborative platform to:

  • Champion workload-first adoption patterns (beyond VM- or node-level models)
  • Demystify confidential computing for developers and security teams
  • Share insights as it prepares to open-source components of its container security layer in late 2025

What QLAD brings to the community

QLAD engineers are already contributing to CCC-hosted initiatives, including the Confidential Containers (CoCo) project. Contributions to date include:

  • Added AWS SNP VLEK support to the Confidential Containers (CoCo) project across three repositories (trustee, guest-components, and azure-cvm-tooling)
  • Submitted eight pull requests (all merged) to cloud-api-adaptor, advancing workload orchestration in confidential environments
  • Engaged with members of U.S. Congress to raise awareness of Confidential Computing and Confidential Containers, helping ensure the technology receives attention and potential funding at the federal level

As QLAD prepares to open source additional components, it plans to work closely with the CCC Technical Advisory Council to align on contribution pathways and ensure long-term technical alignment.

What QLAD hopes to gain

In joining CCC, QLAD looks forward to:

  • Advancing attestation frameworks, policy enforcement models, and container standards
  • Collaborating with industry peers solving real-world deployment challenges
  • Participating in working groups that shape the future of confidential computing across AI, hybrid cloud, and zero-trust environments

We’re excited to welcome QLAD into the CCC community and look forward to their continued contributions to making confidential computing scalable, practical, and trusted by default.

Harmonizing Open-Source Remote Attestation: My LFX Mentorship Journey


By Harsh Vardhan Mahawar

This blog post encapsulates my experience and contributions during the Linux Foundation Mentorship Program under the Confidential Computing Consortium. The core objective of this mentorship was to advance the standardization of remote attestation procedures, a critical facet of establishing trust in dynamic and distributed computing environments. By focusing on the IETF’s Remote Attestation Procedures (RATS) architecture, we aimed to enhance interoperability and streamline the integration of various open-source verifier projects like Keylime, JANE, and Veraison.

Motivation: Why Standardization Matters

Open-source remote attestation tools often develop independently, resulting in inconsistencies in how they format and exchange attestation data. This fragmentation poses a challenge for interoperability across verifiers, relying parties, and attesters.

My mentorship focused on aligning these implementations with two crucial IETF drafts: the Conceptual Message Wrapper (CMW) for evidence and the EAT Attestation Results (EAR) format for appraisal results.

The goal was to standardize both evidence encoding and attestation result reporting, facilitating smoother integration between systems.

Laying the Foundation: Mapping to the RATS Architecture

Before diving into implementation, a fundamental understanding of the RATS architecture and its alignment with existing solutions was paramount. The RATS Working Group defines a standardized framework for remote attestation, enabling a Relying Party to determine the trustworthiness of an Attester based on the evidence that Attester produces.

Our initial phase involved a detailed mapping of prominent open-source remote attestation tools—Keylime, JANE, and Veraison—against the RATS architectural model. This exercise was not merely theoretical; it was an actionable analysis driven by key principles:

  • Granularity: Pinpointing specific components and their RATS functions, rather than broad role assignments.
  • Data Flow: Analyzing the journey of evidence, endorsements, and attestation results to align with RATS conveyance models.
  • Standardization Focus: Identifying areas where these projects could adopt RATS-recommended standards.
  • Actionable Insights: Providing clear directions for modifications to enhance RATS compliance.

This foundational work was crucial because it provided a clear roadmap, highlighting where standardization gaps existed and how our contributions could most effectively bridge them, fostering a more unified confidential computing ecosystem.

1. Keylime

Keylime is a comprehensive remote attestation solution for Linux systems, focusing on TPM-based attestation. It ensures cloud infrastructure trustworthiness by continuously collecting and verifying evidence.

2. JANE

Jane Attestation Engine (a fork and major rewrite of the former A10 Nokia Attestation Engine, NAE) is an experimental remote attestation framework designed to be technology-agnostic.

3. Veraison

Veraison is an attestation verification project under the Confidential Computing Consortium. It focuses on providing a flexible and extensible Verifier component for remote attestation, supporting multiple attestation token formats and providing APIs for evidence verification and endorsement provisioning.

Standardizing Evidence: The Conceptual Messages Wrapper (CMW)

A significant challenge in remote attestation is the diversity of evidence formats produced by different attestation technologies. This heterogeneity necessitates complex parsing and integration logic on the Relying Party’s side. The Conceptual Message Wrapper (CMW), as defined by IETF, offers a solution by providing a standardized collection data structure for attestation evidence.

My work involved implementing CMW within Keylime. The goal was to transition Keylime’s custom KeylimeQuote evidence format to the standardized CMW format, specifically targeting a new API version vX.X (version to be finalized). This involved:

  • Encapsulation: Wrapping disparate evidence components—such as TPM TPMS_ATTEST structures, TPMT_SIGNATURE values, PCRs, IMA measurement lists, measured boot logs, and Keylime-specific metadata (e.g., public key, boot time)—into a unified CMW structure.
  • Serialization: Ensuring proper base64url encoding and adhering to a defined JSON schema for the wrapped evidence.
  • Canonical Event Log (CEL) Integration: A crucial part was integrating the Canonical Event Log (CEL) format (from the Trusted Computing Group) for IMA and measured boot logs, further enhancing interoperability. This required careful parsing of raw log data and constructing CEL-compliant entries.
  • API Versioning: Implementing logic within the Keylime agent to serve CMW-formatted evidence for vX.X (version to be finalized) requests, while retaining support for legacy formats.

The motivation behind adopting CMW is clear: it significantly streamlines the implementation process for developers, allowing Relying Parties to remain agnostic to specific attestation technologies. This approach fosters extensibility, enabling easier support for new conceptual messages and attestation technologies without altering the core processing logic.
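
For illustration, here is a minimal Python sketch of the encapsulation and serialisation steps described above, assuming the JSON form of a CMW collection in which each item is a [media type, base64url-encoded value] pair. The member names and media types are placeholders for this post, not Keylime’s actual identifiers.

```python
import base64
import json


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as commonly used for CMW values."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def wrap_evidence_as_cmw(tpms_attest: bytes, tpmt_signature: bytes,
                         ima_cel_log: bytes, mb_cel_log: bytes) -> str:
    """Bundle disparate evidence items into a single JSON CMW collection.

    Each entry is a two-element CMW: [media type, base64url(value)].
    The member names and media types below are illustrative placeholders.
    """
    collection = {
        "tpms_attest": ["application/vnd.example.tpms-attest",
                        b64url(tpms_attest)],
        "tpmt_signature": ["application/vnd.example.tpmt-signature",
                           b64url(tpmt_signature)],
        "ima_log": ["application/vnd.example.cel+json", b64url(ima_cel_log)],
        "measured_boot_log": ["application/vnd.example.cel+json",
                              b64url(mb_cel_log)],
    }
    return json.dumps(collection)


# Example: building a hypothetical CMW-formatted evidence response.
evidence = wrap_evidence_as_cmw(b"\x00attest", b"\x00sig", b"cel-ima", b"cel-mb")
print(evidence)
```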

Standardizing Appraisal Results: EAT Attestation Results (EAR)

Beyond standardizing evidence, it is equally important to standardize the results of attestation. This is where the EAT Attestation Results (EAR) comes into play. EAR provides a flexible and extensible data model for conveying attestation results, allowing a verifier to summarize the trustworthiness of an Attester concisely and verifiably.

My contribution to EAT standardization focused on two main fronts:

  1. Developing a Python Library (python-ear): I developed a Python library (python-ear) that implements the EAT Attestation Results (EAR) data format, as specified in draft-fv-rats-ear. This library provides essential functionalities:
  • Claim Population: Defining and populating various EAR claims (e.g., instance_identity, hardware, executables, configuration) that represent appraisal outcomes.
  • Serialization/Deserialization: Encoding EAR claims as JSON Web Tokens (JWT) or Concise Binary Object Representation Web Tokens (CWT) and decoding them.
  • Signing and Verification: Supporting cryptographic signing of EAR claims with private keys and verification with public keys to ensure data integrity and authenticity.
  • Validation: Implementing validation logic to ensure EAR objects adhere to the specified schema.
  2. Keylime EAT Plugin: This work extends Keylime’s durable attestation framework by integrating EAT-based appraisal logic. The goal is to transform raw attestation evidence and policy data into structured AR4SI TrustVector claims, thereby enhancing the auditability and semantic richness of attestation outcomes. This critical step involved:
  • Evidence Validation: Leveraging Keylime’s existing functions to perform comprehensive validation of TPM quotes, IMA measurements, and measured boot logs.
  • Failure Mapping: Precisely mapping the various Failure events generated during Keylime’s internal validation processes to specific TrustClaim values within the EAT TrustVector. For instance, a quote validation failure indicating an invalid public key would map to an UNRECOGNIZED_INSTANCE claim.
  • State Management: A significant challenge was ensuring that the EAT appraisal logic could utilize Keylime’s validation functions without inadvertently altering the agent’s internal state, which could interfere with Keylime’s continuous attestation workflow. This necessitated careful refactoring and the introduction of flags to prevent state changes.
  • Submodule Status: Defining how the overall status of the EAT submodule (e.g., “affirming,” “warning,” “contraindicated”) is derived from the aggregated TrustClaim values.
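
As a simplified sketch of the failure-mapping and status-derivation logic described above: the event names and string tiers below are illustrative only – they are not Keylime’s actual Failure identifiers, and AR4SI defines the real integer TrustClaim encodings.

```python
from dataclasses import dataclass, field
from enum import Enum


class TrustTier(str, Enum):
    AFFIRMING = "affirming"
    WARNING = "warning"
    CONTRAINDICATED = "contraindicated"


# Order used to pick the "worst" tier seen so far.
_TIER_ORDER = [TrustTier.AFFIRMING, TrustTier.WARNING, TrustTier.CONTRAINDICATED]

# Illustrative mapping of validation failure events to a TrustVector category
# and tier; e.g. an invalid public key during quote validation is treated like
# an UNRECOGNIZED_INSTANCE outcome on the instance_identity claim.
FAILURE_MAP = {
    "quote.invalid_pubkey": ("instance_identity", TrustTier.CONTRAINDICATED),
    "ima.policy_violation": ("executables", TrustTier.CONTRAINDICATED),
    "mb.policy_mismatch": ("configuration", TrustTier.WARNING),
}


@dataclass
class TrustVector:
    claims: dict = field(default_factory=dict)


def appraise(failure_events: list) -> tuple:
    """Fold failure events into a TrustVector plus an overall submodule status."""
    vector = TrustVector()
    overall = TrustTier.AFFIRMING
    for event in failure_events:
        category, tier = FAILURE_MAP.get(event, ("configuration", TrustTier.WARNING))
        current = vector.claims.get(category, TrustTier.AFFIRMING)
        # Keep the worst tier seen for this category, and for the submodule overall.
        vector.claims[category] = max(current, tier, key=_TIER_ORDER.index)
        overall = max(overall, tier, key=_TIER_ORDER.index)
    return vector, overall


vector, status = appraise(["quote.invalid_pubkey"])
print(status.value)    # "contraindicated"
print(vector.claims)   # {'instance_identity': <TrustTier.CONTRAINDICATED: 'contraindicated'>}
```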

The implementation of EAT is vital for realizing the full potential of remote attestation. It provides a common language for trustworthiness, allowing Relying Parties to make automated, policy-driven decisions based on a consistent, verifiable attestation result, irrespective of the underlying hardware or software components being attested.
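
To make that common language concrete, here is a minimal sketch of encoding an EAR-style appraisal result as a signed JWT, using the PyJWT and cryptography packages with an ephemeral EC key. The claim names mirror the descriptions above rather than the exact registered keys in draft-fv-rats-ear, and the string tiers stand in for AR4SI’s integer TrustClaim values.

```python
import time

import jwt  # PyJWT, with the 'cryptography' extra installed
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Ephemeral P-256 key pair for the example; a real verifier would use a
# long-lived, published signing key.
private_key = ec.generate_private_key(ec.SECP256R1())
private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Simplified EAR-style claims; the registered claim keys and value encodings
# are defined in draft-fv-rats-ear and AR4SI.
ear_claims = {
    "iat": int(time.time()),
    "verifier_id": "example-verifier",
    "status": "affirming",
    "trustworthiness_vector": {
        "instance_identity": "affirming",
        "hardware": "affirming",
        "executables": "affirming",
        "configuration": "warning",
    },
}

# Sign as a JWT (ES256) and verify it again, as a Relying Party would.
token = jwt.encode(ear_claims, private_pem, algorithm="ES256")
decoded = jwt.decode(token, public_pem, algorithms=["ES256"])
assert decoded["status"] == "affirming"
print(token[:40], "...")
```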

Conclusion and Future Outlook

This LFX Mentorship has been an invaluable journey, providing a unique opportunity to contribute to the evolving landscape of confidential computing. By focusing on RATS architecture mapping, implementing the Conceptual Message Wrapper for evidence, and integrating Entity Attestation Tokens for appraisal results, we have made tangible steps towards enhancing interoperability, standardization, and the overall security posture of open-source remote attestation solutions.

The work on CMW and EAT is critical for fostering a more robust and scalable trusted and confidential computing ecosystem. It enables easier integration of diverse attestation technologies and provides a unified, machine-readable format for conveying trustworthiness. My gratitude goes to my mentors, Thore Sommer and Thomas Fossati, for their guidance, insights, and continuous support throughout this program.

While significant progress has been made, the journey towards a fully harmonized remote attestation ecosystem continues. Future efforts will involve full upstreaming of these changes into the respective projects and exploring broader adoption across the confidential computing landscape, further solidifying the foundations of trust in a dynamic digital world.

References

  1. IETF’s Remote Attestation Procedures (RATS) architecture
  2. Keylime
  3. JANE
  4. Veraison
  5. CMW (Conceptual Messages Wrapper)
  6. EAT (Entity Attestation Token)
  7. EAR (EAT Attestation Results)
  8. Canonical Event Log (CEL)
  9. python-ear library

Welcoming Tinfoil to the Confidential Computing Consortium


We’re thrilled to welcome Tinfoil as the newest start-up member of the Confidential Computing Consortium (CCC)!

Tinfoil is an open source platform delivering cryptographically verifiable privacy for AI workloads. Their mission is to make it safe to process sensitive data through powerful AI models—without compromising user privacy. By leveraging confidential computing technologies, including NVIDIA’s confidential computing-enabled GPUs, Tinfoil ensures that no one—not even Tinfoil or the cloud provider—can access private user data. The platform also safeguards AI model weights from unauthorized access and supports end-to-end supply chain security guarantees.

“We’re excited to collaborate with the community to make hardware-backed AI privacy the standard.” — Tanya Verma, CEO of Tinfoil

As a company deeply invested in confidential computing, Tinfoil is joining CCC to both learn from and contribute to the broader ecosystem. Their team is especially interested in collaborating with others working at the intersection of secure hardware and AI, and in helping shape future standards for confidential AI. Currently, they’re using Ubuntu Confidential VMs from Canonical and NVIDIA’s verification tools, with plans to contribute to these open source projects over time.

We’re excited to have Tinfoil join the CCC community and look forward to the insights and innovation they’ll bring as we work together to advance the future of trusted, verifiable computing.