We’re pleased to welcome Acompany as the newest General Member of the Confidential Computing Consortium (CCC)!
Acompany provides Confidential Computing as a strategic security foundation, powering secure data collaboration and advancing trusted AI. Its technology supports use cases ranging from data clean rooms for KDDI, a Fortune Global 500 telecom company, to optimized manufacturing processes and mission-critical national security initiatives.
Expanding the Global Market for Confidential Computing
Acompany joins the Consortium with a clear vision: to accelerate the global adoption of Confidential Computing through community collaboration and open innovation.
“At Acompany, our mission is ‘Trust. Data. AI.’ We are delighted to join the Confidential Computing Consortium and work with industry leaders to advance secure and trusted AI. Just as HTTPS became the default for the web, Confidential Computing will become the default for AI—and we are proud to help shape that future.” — Ryosuke Takahashi, CEO, Acompany Co., Ltd.
The company brings proven experience to the community. Its solutions already power secure data clean rooms for KDDI and support ongoing Confidential Computing research in collaboration with Intel Labs. Acompany’s participation will strengthen collective efforts to make Confidential Computing the foundation of secure data processing and privacy-preserving AI worldwide.
Community Collaboration in Action
Acompany is also engaging with CCC-hosted projects, including the Gramine framework. The team has actively participated in GitHub discussions and leveraged Gramine in their own research initiatives, helping to expand the practical applications of Confidential Computing technologies. In addition, Acompany contributes to the Consortium’s global outreach by supporting the Japanese translation of CCC’s White Papers & Reports, helping to broaden access to the Consortium’s insights and advance the global understanding and adoption of Confidential Computing.
Last week in San Francisco, our community came together for a day that reminded us why collaborative learning and shared experimentation are so vital in the confidential computing ecosystem.
Attendees brought a wide range of perspectives to the discussion of Confidential Computing: hyperscale cloud service providers, startups, think tanks, and industries ranging from pharmaceuticals to finance. The day was filled with lively technical exchanges and even laughter over afternoon bacon (yes, bacon is a snack); it was the kind of workshop that makes innovation feel personal.
A Lineup That Inspired Collaboration
We were honored to hear from a remarkable roster of speakers representing organizations at the heart of secure and privacy-preserving computing, including:
Britt Law
Duality
Google
Meta / WhatsApp
NVIDIA
Oblivious
ServiceNow with Opaque
TikTok
Tinfoil
Each talk brought a unique perspective, from real-world deployments delivering measurable business value to bold experiments shaping the future of data protection. The diversity of voices reflected the Consortium’s strength: bringing together researchers, builders, and adopters to turn ideas into impact. The versatility of Confidential Computing was evident from the wide range of solutions and use cases presented.
From Inspiration to Imagination
The day wrapped up with our “Shark Tank”-style challenge, where four teams competed to design new use cases for Confidential Computing. The creativity on display was impressive, but one concept stood out – a secure, verifiable proof of humanity – a vision that perfectly captured the balance of trust, technology, and imagination our community strives for.
Community at the Core
Behind every successful event is a network of people who make it happen. This workshop was no exception. We’re deeply grateful to Laura Martinez (NVIDIA), Mateus Guzzo (TikTok) and Mike Ferron-Jones (Intel) for their incredible leadership in bringing everything together. Their effort ensured that even the smallest logistical details (and photo moments) went smoothly.
Looking Ahead
As we look to future workshops, we’ll keep building spaces like this one: open, hands-on, and human-centered. Because progress happens when we learn together, challenge ideas together, and celebrate the journey as much as the technology itself.
Furiosa is a semiconductor company pioneering a new type of AI chip for data centers and enterprise customers. With a mission to make AI computing sustainable and accessible to everyone, Furiosa offers a full hardware and software stack that enables powerful AI at scale. Its proprietary Tensor Contraction Processor (TCP) architecture delivers world-class performance for advanced AI models, along with breakthrough energy efficiency compared to GPUs.
Furiosa’s flagship inference chip, RNGD (pronounced “renegade”), accelerates large language models and agentic AI workloads in any data center, including ones with power, cooling, and space constraints that make it difficult or impossible to deploy advanced GPUs. Currently sampling with Fortune 500 customers worldwide, RNGD is designed to power the next generation of AI applications with both high performance and significantly lower operating expenses.
Why Furiosa Joined CCC
As AI workloads scale, protecting data becomes increasingly critical. Furiosa’s energy-efficient chips enable businesses to run their models on-prem, so they can maintain complete control of their data and tooling. By joining the CCC, Furiosa is committed to collaborating with peers across the ecosystem to build a more secure and trustworthy AI infrastructure.
Furiosa hopes to contribute its expertise in hardware-accelerated inference while learning from the community’s efforts to standardize and advance confidential computing practices. The company is particularly interested in trusted execution environments and data security in AI workloads, and looks forward to identifying projects where its AI compute acceleration technology can add meaningful value.
In Their Own Words
“At Furiosa, we believe the future of AI depends on both performance and trust. By joining the Confidential Computing Consortium, we’re excited to collaborate with industry leaders to ensure AI innovation happens securely, sustainably, and at scale.” — Hanjoon Kim, Chief Technology Officer, FuriosaAI
We’re thrilled to have Furiosa join our community and look forward to the collaboration ahead. Welcome to the CCC!
We are pleased to welcome Phala as the newest General Member of the Confidential Computing Consortium (CCC)! We’re glad to have Phala on board and greatly appreciate their support for our growing community.
About Phala
Phala is a secure cloud platform that enables developers to run AI workloads inside hardware-protected Trusted Execution Environments (TEEs). With a strong commitment to open-source development, Phala provides confidential computing infrastructure that ensures privacy, verifiability, and scalability. Their mission is to make secure and trustworthy AI deployment practical and accessible for developers worldwide.
Why Phala Joined CCC
By joining the CCC, Phala is partnering with industry leaders to advance open standards for confidential computing. Phala brings unique expertise through real-world deployment of one of the largest TEE networks in operation today, contributing valuable experience to help accelerate adoption of confidential computing.
At the same time, Phala looks forward to learning from the broader CCC community and collaborating to strengthen interoperability across the ecosystem.
Contribution to CCC-Hosted Projects
Phala is also contributing directly to CCC-hosted projects. Its open-source project, dstack, is now part of the Linux Foundation under the CCC. dstack is a confidential computing framework that simplifies secure application deployment in TEEs, providing verifiable execution and zero-trust key management to developers.
In Their Own Words
“Confidential computing is essential to the future of secure and trustworthy AI. By joining the Confidential Computing Consortium, we are deepening our commitment to building open-source, hardware-backed infrastructure that empowers developers everywhere. We are excited to contribute our experience operating one of the largest TEE networks and to collaborate with the community on shaping the future of confidential computing.” — Marvin Tong, CEO, Phala Network
We’re pleased to welcome QLAD to the Confidential Computing Consortium (CCC), as the latest innovator helping define the next era of secure computing.
QLAD is a Kubernetes-native confidential computing platform that provides runtime protection by default, delivering pod-level Trusted Execution Environments (TEEs) and featuring encrypted Armored Containers™ for enhanced IP protection. With post-quantum resilience and seamless integration, no code rewrites or infrastructure changes required, QLAD enables scalable, production-ready confidentiality for modern workloads.
“At QLAD, we believe confidential computing should be simple. We’re building a platform that delivers drop-in protection for sensitive workloads, without code rewrites or infrastructure disruption. We’re proud to join the CCC community and contribute to the standards, tooling, and trust models that help organizations stay secure across clouds, edges, and collaborative environments.” — Jason Tuschen, CEO, QLAD
Confidential computing is undergoing a transformation, from experimental to essential. QLAD was founded to help accelerate that shift by making trusted execution practical and DevOps-friendly, especially for organizations deploying at scale across cloud, hybrid, and edge environments.
Why QLAD joined CCC
The CCC provides a powerful venue to drive industry alignment on standards, reference architectures, and transparent governance. QLAD sees the consortium as a collaborative platform to:
Champion workload-first adoption patterns (beyond VM- or node-level models)
Demystify confidential computing for developers and security teams
Share insights as it prepares to open-source components of its container security layer in late 2025
What QLAD brings to the community
QLAD engineers are already contributing to CCC-hosted initiatives, including the Confidential Containers (CoCo) project. Contributions to date include:
Added AWS SNP VLEK support to the Confidential Containers (CoCo) project across three repositories (trustee, guest-components, and azure-cvm-tooling)
Submitted eight pull requests (all merged) to cloud-api-adaptor, advancing workload orchestration in confidential environments
Engaged with members of U.S. Congress to raise awareness of Confidential Computing and Confidential Containers, helping ensure the technology receives attention and potential funding at the federal level
As QLAD prepares to open source additional components, it plans to work closely with the CCC Technical Advisory Council to align on contribution pathways and ensure long-term technical alignment.
What QLAD hopes to gain
In joining the CCC, QLAD looks forward to:
Advancing attestation frameworks, policy enforcement models, and container standards
Collaborating with industry peers solving real-world deployment challenges
Participating in working groups that shape the future of confidential computing across AI, hybrid cloud, and zero-trust environments
We’re excited to welcome QLAD into the CCC community and look forward to their continued contributions to making confidential computing scalable, practical, and trusted by default.
This blog post encapsulates my experience and contributions during the Linux Foundation Mentorship Program under the Confidential Computing Consortium. The core objective of this mentorship was to advance the standardization of remote attestation procedures, a critical facet of establishing trust in dynamic and distributed computing environments. By focusing on the IETF’s Remote Attestation Procedures (RATS) architecture, we aimed to enhance interoperability and streamline the integration of various open-source verifier projects like Keylime, JANE, and Veraison.
Motivation: Why Standardization Matters
Open-source remote attestation tools often develop independently, resulting in inconsistencies in how they format and exchange attestation data. This fragmentation poses a challenge for interoperability across verifiers, relying parties, and attesters.
My mentorship focused on aligning these implementations with two crucial IETF drafts: the Conceptual Message Wrapper (CMW) and the EAT Attestation Results (EAR, draft-fv-rats-ear). The goal was to standardize both evidence encoding and attestation result reporting, facilitating smoother integration between systems.
Laying the Foundation: Mapping to the RATS Architecture
Before diving into implementation, a fundamental understanding of the RATS architecture and its alignment with existing solutions was paramount. The RATS Working Group defines a standardized framework for remote attestation, enabling a Relying Party to determine the trustworthiness of an Attester based on evidence that the Attester produces.
Our initial phase involved a detailed mapping of prominent open-source remote attestation tools—Keylime, JANE, and Veraison—against the RATS architectural model. This exercise was not merely theoretical; it was an actionable analysis driven by key principles:
Granularity: Pinpointing specific components and their RATS functions, rather than broad role assignments.
Data Flow: Analyzing the journey of evidence, endorsements, and attestation results to align with RATS conveyance models.
Standardization Focus: Identifying areas where these projects could adopt RATS-recommended standards.
Actionable Insights: Providing clear directions for modifications to enhance RATS compliance.
This foundational work was crucial because it provided a clear roadmap, highlighting where standardization gaps existed and how our contributions could most effectively bridge them, fostering a more unified confidential computing ecosystem.
Keylime is a comprehensive remote attestation solution for Linux systems, focusing on TPM-based attestation. It ensures cloud infrastructure trustworthiness by continuously collecting and verifying evidence.
Jane Attestation Engine (JANE, a fork and major rewrite of the former A10 Nokia Attestation Engine, NAE) is an experimental remote attestation framework designed to be technology-agnostic.
Veraison is an attestation verification project under the Confidential Computing Consortium. It focuses on providing a flexible and extensible Verifier component for remote attestation, supporting multiple attestation token formats and providing APIs for evidence verification and endorsement provisioning.
A significant challenge in remote attestation is the diversity of evidence formats produced by different attestation technologies. This heterogeneity necessitates complex parsing and integration logic on the Relying Party’s side. The Conceptual Message Wrapper (CMW), as defined by IETF, offers a solution by providing a standardized collection data structure for attestation evidence.
My work involved implementing CMW within Keylime. The goal was to transition Keylime’s custom KeylimeQuote evidence format to the standardized CMW format, specifically targeting a new API version vX.X (version to be finalized). This involved:
Encapsulation: Wrapping disparate evidence components—such as TPM TPMS_ATTEST structures, TPMT_SIGNATURE values, PCRs, IMA measurement lists, measured boot logs, and Keylime-specific metadata (e.g., public key, boot time)—into a unified CMW structure.
Serialization: Ensuring proper base64url encoding and adhering to a defined JSON schema for the wrapped evidence.
Canonical Event Log (CEL) Integration: A crucial part was integrating the Canonical Event Log (CEL) format (from the Trusted Computing Group) for IMA and measured boot logs, further enhancing interoperability. This required careful parsing of raw log data and constructing CEL-compliant entries.
API Versioning: Implementing logic within the Keylime agent to serve CMW-formatted evidence for vX.X (version to be finalized) requests, while retaining support for legacy formats.
The motivation behind adopting CMW is clear: it significantly streamlines the implementation process for developers, allowing Relying Parties to remain agnostic to specific attestation technologies. This approach fosters extensibility, enabling easier support for new conceptual messages and attestation technologies without altering the core processing logic.
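As a rough illustration of the encapsulation and serialization steps above, the sketch below bundles separate evidence items into a CMW-style JSON collection of base64url-encoded records. The labels and media types are hypothetical placeholders, not Keylime's actual identifiers, and the structure is a simplification of the wire format defined in the IETF draft:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as used for CMW values."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_cmw_record(media_type: str, value: bytes) -> list:
    """A JSON-serialized CMW record: [media-type, base64url(value)]."""
    return [media_type, b64url(value)]

def wrap_evidence(quote: bytes, signature: bytes,
                  ima_log: bytes, mb_log: bytes) -> str:
    """Bundle disparate evidence items into a CMW collection (a JSON map
    whose values are CMW records), replacing a custom aggregate format.
    Media types here are illustrative placeholders."""
    collection = {
        "tpms_attest": make_cmw_record("application/vnd.example.tpms-attest", quote),
        "tpmt_signature": make_cmw_record("application/vnd.example.tpmt-signature", signature),
        "ima_log": make_cmw_record("application/vnd.example.cel", ima_log),
        "mb_log": make_cmw_record("application/vnd.example.cel", mb_log),
    }
    return json.dumps(collection)

wrapped = wrap_evidence(b"quote", b"sig", b"ima", b"boot")
```

Because every item carries its own media type, a Relying Party can dispatch each record to the right parser without knowing in advance which attestation technology produced it.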
Beyond standardizing evidence, it is equally important to standardize the results of attestation. This is where the EAT Attestation Results (EAR) comes into play. EAR provides a flexible and extensible data model for conveying attestation results, allowing a verifier to summarize the trustworthiness of an Attester concisely and verifiably.
My contribution to EAT standardization focused on two main fronts:
Developing a Python Library (python-ear): I developed python-ear, a Python library implementing the EAT Attestation Results (EAR) data format as specified in draft-fv-rats-ear. The library provides essential functionality:
Claim Population: Defining and populating various EAR claims (e.g., instance_identity, hardware, executables, configuration) that represent appraisal outcomes.
Serialization/Deserialization: Encoding EAR claims as JSON Web Tokens (JWT) or Concise Binary Object Representation Web Tokens (CWT) and decoding them.
Signing and Verification: Supporting cryptographic signing of EAR claims with private keys and verification with public keys to ensure data integrity and authenticity.
Validation: Implementing validation logic to ensure EAR objects adhere to the specified schema.
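To make the claim-population and JWT-encoding steps concrete, here is a minimal stdlib-only sketch. The claim names mirror the flavor of the EAR draft, but the exact schema and API belong to python-ear, and the HS256 signing shown here is a stand-in for the library's full JWS/COSE support:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding (JWT compact serialization)."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_ear_claims(trust_vector: dict, status: str) -> dict:
    """Populate a minimal EAR-style claims set; python-ear defines the
    authoritative claim names and schema."""
    return {
        "iat": int(time.time()),
        "ear.status": status,
        "ear.trustworthiness-vector": trust_vector,
    }

def sign_ear_jwt(claims: dict, key: bytes) -> str:
    """Encode claims as a compact HS256 JWT (header.payload.signature)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

claims = make_ear_claims(
    {"instance_identity": 2, "hardware": 2, "executables": 3, "configuration": 2},
    status="affirming",
)
token = sign_ear_jwt(claims, key=b"demo-secret")
print(token.count("."))  # a compact JWT has three dot-separated parts → prints 2
```

Verification reverses the process: recompute the HMAC over `header.payload` with the shared key (or check an asymmetric signature with the public key) before trusting any claim in the payload.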
Keylime EAT Plugin: This work extends Keylime’s durable attestation framework by integrating EAT-based appraisal logic. The goal is to transform raw attestation evidence and policy data into structured AR4SI TrustVector claims, thereby enhancing the auditability and semantic richness of attestation outcomes. This critical step involved:
Evidence Validation: Leveraging Keylime’s existing functions to perform comprehensive validation of TPM quotes, IMA measurements, and measured boot logs.
Failure Mapping: Precisely mapping the various Failure events generated during Keylime’s internal validation processes to specific TrustClaim values within the EAT TrustVector. For instance, a quote validation failure indicating an invalid public key would map to an UNRECOGNIZED_INSTANCE claim.
State Management: A significant challenge was ensuring that the EAT appraisal logic could utilize Keylime’s validation functions without inadvertently altering the agent’s internal state, which could interfere with Keylime’s continuous attestation workflow. This necessitated careful refactoring and the introduction of flags to prevent state changes.
Submodule Status: Defining how the overall status of the EAT submodule (e.g., “affirming,” “warning,” “contraindicated”) is derived from the aggregated TrustClaim values.
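A simplified sketch of the failure-mapping and status-derivation steps above, assuming hypothetical Keylime failure tags and illustrative AR4SI code points (the draft defines the authoritative value ranges):

```python
# Illustrative TrustClaim code points; AR4SI assigns the exact values.
TRUSTWORTHY_INSTANCE = 2      # affirming
UNSAFE_CONFIG = 32            # warning (illustrative)
UNRECOGNIZED_INSTANCE = 97    # contraindicated

# Hypothetical mapping from Keylime failure events to TrustVector claims.
FAILURE_TO_CLAIM = {
    "quote.pubkey_invalid": ("instance_identity", UNRECOGNIZED_INSTANCE),
    "ima.policy_violation": ("executables", 96),          # contraindicated
    "mb.unexpected_config": ("configuration", UNSAFE_CONFIG),
}

def appraise(failures: list[str]) -> tuple[dict, str]:
    """Map validation failures to a TrustVector and derive submodule status."""
    vector = {"instance_identity": TRUSTWORTHY_INSTANCE,
              "hardware": TRUSTWORTHY_INSTANCE,
              "executables": TRUSTWORTHY_INSTANCE,
              "configuration": TRUSTWORTHY_INSTANCE}
    for event in failures:
        claim, value = FAILURE_TO_CLAIM.get(event, ("configuration", UNSAFE_CONFIG))
        vector[claim] = max(vector[claim], value)  # within this sketch, higher is worse
    if any(v >= 96 for v in vector.values()):
        status = "contraindicated"
    elif any(v >= 32 for v in vector.values()):
        status = "warning"
    else:
        status = "affirming"
    return vector, status

vector, status = appraise(["quote.pubkey_invalid"])
print(status)  # → contraindicated
```

The key property is that appraisal is a pure function of the observed failures: it reads Keylime's validation outcomes without mutating agent state, which is exactly what the refactoring and state-guard flags described above were meant to guarantee.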
The implementation of EAT is vital for realizing the full potential of remote attestation. It provides a common language for trustworthiness, allowing Relying Parties to make automated, policy-driven decisions based on a consistent, verifiable attestation result, irrespective of the underlying hardware or software components being attested.
Conclusion and Future Outlook
This LFX Mentorship has been an invaluable journey, providing a unique opportunity to contribute to the evolving landscape of confidential computing. By focusing on RATS architecture mapping, implementing the Conceptual Message Wrapper for evidence, and integrating Entity Attestation Tokens for appraisal results, we have made tangible steps towards enhancing interoperability, standardization, and the overall security posture of open-source remote attestation solutions.
The work on CMW and EAT is critical for fostering a more robust and scalable trusted and confidential computing ecosystem. It enables easier integration of diverse attestation technologies and provides a unified, machine-readable format for conveying trustworthiness. My gratitude goes to my mentors, Thore Sommer and Thomas Fossati, for their guidance, insights, and continuous support throughout this program.
While significant progress has been made, the journey towards a fully harmonized remote attestation ecosystem continues. Future efforts will involve full upstreaming of these changes into the respective projects and exploring broader adoption across the confidential computing landscape, further solidifying the foundations of trust in a dynamic digital world.
We’re thrilled to welcome Tinfoil as the newest start-up member of the Confidential Computing Consortium (CCC)!
Tinfoil is an open source platform delivering cryptographically verifiable privacy for AI workloads. Their mission is to make it safe to process sensitive data through powerful AI models—without compromising user privacy. By leveraging confidential computing technologies, including NVIDIA’s confidential computing-enabled GPUs, Tinfoil ensures that no one—not even Tinfoil or the cloud provider—can access private user data. The platform also safeguards AI model weights from unauthorized access and supports end-to-end supply chain security guarantees.
“We’re excited to collaborate with the community to make hardware-backed AI privacy the standard.” — Tanya Verma, CEO of Tinfoil
As a company deeply invested in confidential computing, Tinfoil is joining CCC to both learn from and contribute to the broader ecosystem. Their team is especially interested in collaborating with others working at the intersection of secure hardware and AI, and in helping shape future standards for confidential AI. Currently, they’re using Ubuntu Confidential VMs from Canonical and NVIDIA’s verification tools, with plans to contribute to these open source projects over time.
We’re excited to have Tinfoil join the CCC community and look forward to the insights and innovation they’ll bring as we work together to advance the future of trusted, verifiable computing.
The talks cover community updates, technical discussions, and real-world use cases—offering valuable insights into the future of confidential computing.
Thank you to all our speakers, contributors, and attendees. Stay tuned for more updates from the CCC, and get involved today.
We’re thrilled to welcome Mainsail Industries as the newest start-up member of the Confidential Computing Consortium (CCC)! As pioneers in secure edge virtualization, Mainsail is joining a global community of leaders who are shaping the future of confidential computing—together.
About Mainsail Industries
Mainsail Industries is on a mission to deliver the world’s most secure edge virtualization platform and common computing environment—safeguarding critical infrastructure and the defense industrial base, while enabling organizations to modernize and achieve mission success.
At the heart of their innovation is Metalvisor, a secure, cloud-native virtualization platform purpose-built for the modern edge. Designed with simplicity, scalability, and security in mind, Metalvisor helps organizations extend the life of their most critical assets and meet the evolving demands of today’s mission-critical workloads.
What is Metalvisor?
Metalvisor is redefining what secure virtualization can look like. Unlike traditional hypervisors, Metalvisor is designed for modern workloads—Virtual Machines (VMs), MicroVMs, and Containers—while eliminating the operational complexity that often comes with secure infrastructure. It leverages cutting-edge technologies to streamline cluster management, support cloud-native patterns, and ensure security through Trusted Execution Environments (TEEs) and Trusted Workload Identity (TWI).
Metalvisor in Action:
Secure Edge Computing: Metalvisor brings cloud-native capabilities to the edge, optimizing size, weight, power, and cost (SWaP-C) for environments where security and performance are paramount.
Secure Containers: Simplifies virtualization for container-based workloads, blending the agility of containers with the protection of next-generation hypervisors.
Secure AI: Protects sensitive AI/ML workloads through TEEs and TWI, ensuring both data and model integrity via hardware-rooted trust.
Why Mainsail Joined the CCC
“Joining the Confidential Computing Consortium is an exciting milestone for Mainsail. As CTO, I’m inspired by the level of thought leadership and collaboration happening within the CCC. It’s rare to find a space where so many different organizations come together to shape the future of secure computing, and I believe this collective effort will have a lasting, global impact.” — Brad Sollar, CTO & Co-Founder
Mainsail sees the CCC as both a community of peers and a catalyst for impact. With deep experience in trusted workloads, confidential virtualization, and workload identity, the team is eager to share insights from building Metalvisor—and to learn from other contributors tackling similar challenges.
Mainsail is especially excited to contribute to the development of standards and best practices around Trusted Workload Identity—a key capability in delivering secure, scalable computing environments.
Contributing to the Ecosystem
Mainsail is actively contributing to the Trusted Workload Identity (TWI) Special Interest Group, collaborating with 21 other contributors to advance the trustworthiness and interoperability of workload identity solutions across platforms.
“Collaborating with 21 other contributors in the Trusted Workload Identity (TWI) SIG reaffirmed Metalvisor’s leadership in confidential computing. We’re proud to be shaping the future of this next-generation technology, bridging the gap between trusted execution environments and trusted workloads—a capability Metalvisor has delivered since day one.” — Eric Wolfe, Chief Engineer
Please join us in giving a warm welcome to the team at Mainsail Industries! We look forward to the expertise and innovation they’ll bring to the Confidential Computing Consortium.
Last month saw the annual gathering of engineers and experts from across the Arm ecosystem for the Linaro Connect 2025 conference, which this year took place in Lisbon. Read our earlier blog post for a preview and some background about this event.
In this blog post, we’ll reflect on proceedings from the Endorsement API Workshop, which was a full-day event that was co-located with the conference. The workshop assembled a diverse group of expert representatives, from across industry and academia, for an intensive day of focused collaboration. The goal was to address a growing challenge in confidential computing: the distribution of the endorsements and reference values that are so essential to the attestation process, without which we cannot establish trust in our confidential computing environments. It is a data management problem that spans the entire industry, from supply chains all the way to application deployment. How do we tame complexity and fragmentation? How do we scale?
The workshop combined a morning of live, hands-on prototyping, alongside an afternoon of presentations, proposals and discussions.
Key Take-Aways
It was a packed and energetic day, with all participants demonstrating their shared belief that there is a lot of work to do and genuine value to be gained for the industry. Here’s a selection of some of the stand-out topics and activities from the day:
A brainstorming conversation to elaborate more precise requirements
An exploration of some of the existing, vendor-specific solutions, and how those might inspire new common solutions
A survey of the standardisation landscape and the organisations involved
A presentation and discussion of the new CoSERV query language, which is designed to facilitate the transfer of endorsement data between producers and consumers in a uniform and scalable way
An update on the proof-of-concept implementation of CoSERV that is currently ongoing in the CCC’s Veraison project.
Read the Full Workshop Report
The workshop has its own repository on GitHub, where you can review the full agenda, along with the list of participants. The full recordings for the afternoon session are also available in the repository, as is the detailed written report. You can also access the report directly here.
Get Involved
The workshop was a chapter in an ongoing story, which you can help to shape. Here are some ways that you can stay informed as this work progresses, or become an active collaborator: