
Welcoming Phala to the Confidential Computing Consortium


We are pleased to welcome Phala as the newest General Member of the Confidential Computing Consortium (CCC)! We’re glad to have Phala on board and greatly appreciate their support for our growing community.

About Phala

Phala is a secure cloud platform that enables developers to run AI workloads inside hardware-protected Trusted Execution Environments (TEEs). With a strong commitment to open-source development, Phala provides confidential computing infrastructure that ensures privacy, verifiability, and scalability. Their mission is to make secure and trustworthy AI deployment practical and accessible for developers worldwide.

Why Phala Joined CCC

By joining the CCC, Phala is partnering with industry leaders to advance open standards for confidential computing. Phala brings unique expertise through real-world deployment of one of the largest TEE networks in operation today, contributing valuable experience to help accelerate adoption of confidential computing.

At the same time, Phala looks forward to learning from the broader CCC community and collaborating to strengthen interoperability across the ecosystem.

Contribution to CCC-Hosted Projects

Phala is also contributing directly to CCC-hosted projects. Its open-source project, dstack, is now part of the Linux Foundation under the CCC. dstack is a confidential computing framework that simplifies secure application deployment in TEEs, providing verifiable execution and zero-trust key management to developers.

In Their Own Words

“Confidential computing is essential to the future of secure and trustworthy AI. By joining the Confidential Computing Consortium, we are deepening our commitment to building open-source, hardware-backed infrastructure that empowers developers everywhere. We are excited to contribute our experience operating one of the largest TEE networks and to collaborate with the community on shaping the future of confidential computing.”
Marvin Tong, CEO, Phala Network

QLAD Joins the Confidential Computing Consortium


We’re pleased to welcome QLAD to the Confidential Computing Consortium (CCC), as the latest innovator helping define the next era of secure computing.

QLAD is a Kubernetes-native confidential computing platform that provides runtime protection by default, delivering pod-level Trusted Execution Environments (TEEs) and encrypted Armored Containers™ for enhanced IP protection and post-quantum resilience. With seamless integration that requires no code rewrites or infrastructure changes, QLAD enables scalable, production-ready confidentiality for modern workloads.

“At QLAD, we believe confidential computing should be simple. We’re building a platform that delivers drop-in protection for sensitive workloads, without code rewrites or infrastructure disruption. We’re proud to join the CCC community and contribute to the standards, tooling, and trust models that help organizations stay secure across clouds, edges, and collaborative environments.”
Jason Tuschen, CEO, QLAD

Confidential computing is undergoing a transformation, from experimental to essential. QLAD was founded to help accelerate that shift by making trusted execution practical and DevOps-friendly, especially for organizations deploying at scale across cloud, hybrid, and edge environments.

Why QLAD joined CCC

The CCC provides a powerful venue to drive industry alignment on standards, reference architectures, and transparent governance. QLAD sees the consortium as a collaborative platform to:

  • Champion workload-first adoption patterns (beyond VM- or node-level models)
  • Demystify confidential computing for developers and security teams
  • Share insights as it prepares to open-source components of its container security layer in late 2025

What QLAD brings to the community
QLAD engineers are already contributing to CCC-hosted initiatives, including the Confidential Containers (CoCo) project. Contributions to date include:

  • Added AWS SNP VLEK support to the Confidential Containers (CoCo) project across three repositories (trustee, guest-components, and azure-cvm-tooling)
  • Submitted eight pull requests (all merged) to cloud-api-adaptor, advancing workload orchestration in confidential environments
  • Engaged with members of U.S. Congress to raise awareness of Confidential Computing and Confidential Containers, helping ensure the technology receives attention and potential funding at the federal level

As QLAD prepares to open source additional components, it plans to work closely with the CCC Technical Advisory Council to align on contribution pathways and ensure long-term technical alignment.

What QLAD hopes to gain
In joining CCC, QLAD looks forward to:

  • Advancing attestation frameworks, policy enforcement models, and container standards
  • Collaborating with industry peers solving real-world deployment challenges
  • Participating in working groups that shape the future of confidential computing across AI, hybrid cloud, and zero-trust environments

We’re excited to welcome QLAD into the CCC community and look forward to their continued contributions to making confidential computing scalable, practical, and trusted by default.

Harmonizing Open-Source Remote Attestation: My LFX Mentorship Journey


By Harsh Vardhan Mahawar

This blog post summarizes my experience and contributions during the Linux Foundation Mentorship Program under the Confidential Computing Consortium. The core objective of this mentorship was to advance the standardization of remote attestation procedures, a critical facet of establishing trust in dynamic and distributed computing environments. By focusing on the IETF’s Remote Attestation Procedures (RATS) architecture, we aimed to enhance interoperability and streamline the integration of various open-source verifier projects like Keylime, JANE, and Veraison.

Motivation: Why Standardization Matters

Open-source remote attestation tools often develop independently, resulting in inconsistencies in how they format and exchange attestation data. This fragmentation poses a challenge for interoperability across verifiers, relying parties, and attesters.

My mentorship focused on aligning these implementations with two crucial IETF drafts: the Conceptual Messages Wrapper (CMW) for evidence and the EAT Attestation Results (EAR) format for appraisal results.

The goal was to standardize both evidence encoding and attestation result reporting, facilitating smoother integration between systems.

Laying the Foundation: Mapping to the RATS Architecture

Before diving into implementation, a fundamental understanding of the RATS architecture and its alignment with existing solutions was paramount. The RATS Working Group defines a standardized framework for remote attestation, enabling a Relying Party to determine the trustworthiness of an Attester based on evidence produced by that Attester.

Our initial phase involved a detailed mapping of prominent open-source remote attestation tools—Keylime, JANE, and Veraison—against the RATS architectural model. This exercise was not merely theoretical; it was an actionable analysis driven by key principles:

  • Granularity: Pinpointing specific components and their RATS functions, rather than broad role assignments.
  • Data Flow: Analyzing the journey of evidence, endorsements, and attestation results to align with RATS conveyance models.
  • Standardization Focus: Identifying areas where these projects could adopt RATS-recommended standards.
  • Actionable Insights: Providing clear directions for modifications to enhance RATS compliance.

This foundational work was crucial because it provided a clear roadmap, highlighting where standardization gaps existed and how our contributions could most effectively bridge them, fostering a more unified confidential computing ecosystem.

1. Keylime

Keylime is a comprehensive remote attestation solution for Linux systems, focusing on TPM-based attestation. It ensures cloud infrastructure trustworthiness by continuously collecting and verifying evidence.

2. JANE

The Jane Attestation Engine (a fork and major rewrite of the former A10 Nokia Attestation Engine, NAE) is an experimental remote attestation framework designed to be technology-agnostic.

3. Veraison

Veraison is an attestation verification project under the Confidential Computing Consortium. It focuses on providing a flexible and extensible Verifier component for remote attestation, supporting multiple attestation token formats and providing APIs for evidence verification and endorsement provisioning.
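
To make the mapping exercise concrete, the sketch below records a simplified component-to-role view of the three projects described above. The component names and role assignments are approximations for illustration only; the actual analysis was far more granular, tracking individual functions and data flows.

```python
# Simplified, illustrative mapping of project components onto RATS roles.
# Component names are approximate; the real analysis was per-function.
RATS_ROLE_MAP = {
    "Keylime": {
        "agent": "Attester (collects TPM quotes, IMA and measured boot logs)",
        "verifier": "Verifier (continuously appraises evidence against policy)",
        "registrar": "Endorsement intake (registers agent EK/AIK material)",
        "tenant": "Relying Party-facing tooling (consumes attestation outcomes)",
    },
    "JANE": {
        "attestation engine": "Verifier (technology-agnostic evidence appraisal)",
        "protocol modules": "Evidence conveyance towards Attesters",
    },
    "Veraison": {
        "verification services": "Verifier (token appraisal APIs)",
        "provisioning services": "Endorsement and Reference Value intake",
    },
}

def print_mapping(project: str) -> None:
    """List the RATS functions identified for one project's components."""
    for component, role in RATS_ROLE_MAP[project].items():
        print(f"{project} / {component}: {role}")

for name in RATS_ROLE_MAP:
    print_mapping(name)
```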

Standardizing Evidence: The Conceptual Messages Wrapper (CMW)

A significant challenge in remote attestation is the diversity of evidence formats produced by different attestation technologies. This heterogeneity necessitates complex parsing and integration logic on the Relying Party’s side. The Conceptual Message Wrapper (CMW), as defined by IETF, offers a solution by providing a standardized collection data structure for attestation evidence.

My work involved implementing CMW within Keylime. The goal was to transition Keylime’s custom KeylimeQuote evidence format to the standardized CMW format, specifically targeting a new API version vX.X (version to be finalized). This involved:

  • Encapsulation: Wrapping disparate evidence components—such as TPM TPMS_ATTEST structures, TPMT_SIGNATURE values, PCRs, IMA measurement lists, measured boot logs, and Keylime-specific metadata (e.g., public key, boot time)—into a unified CMW structure.
  • Serialization: Ensuring proper base64url encoding and adhering to a defined JSON schema for the wrapped evidence.
  • Canonical Event Log (CEL) Integration: A crucial part was integrating the Canonical Event Log (CEL) format (from the Trusted Computing Group) for IMA and measured boot logs, further enhancing interoperability. This required careful parsing of raw log data and constructing CEL-compliant entries.
  • API Versioning: Implementing logic within the Keylime agent to serve CMW-formatted evidence for vX.X (version to be finalized) requests, while retaining support for legacy formats.

The motivation behind adopting CMW is clear: it significantly streamlines the implementation process for developers, allowing Relying Parties to remain agnostic to specific attestation technologies. This approach fosters extensibility, enabling easier support for new conceptual messages and attestation technologies without altering the core processing logic.
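
As a rough illustration of what this wrapping looks like, the sketch below builds a JSON CMW collection in the style described by the IETF draft, pairing each evidence item with a content type and a base64url-encoded payload. The labels, media-type strings, and helper names are placeholders chosen for illustration, not the identifiers adopted in Keylime’s new API.

```python
# Minimal sketch of bundling Keylime evidence into a JSON CMW collection.
import base64
import json

def b64url(data: bytes) -> str:
    """Unpadded base64url, as CMW uses for binary payloads in JSON."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def wrap_keylime_evidence(tpms_attest: bytes, tpmt_signature: bytes,
                          ima_cel: bytes, boot_cel: bytes,
                          metadata: dict) -> str:
    """Bundle disparate evidence components into one JSON CMW collection.

    Each entry is a CMW record pairing a content type with the wrapped,
    base64url-encoded data. All media types and labels here are
    illustrative placeholders.
    """
    collection = {
        "tpm_quote": ["application/vnd.example.tpms-attest", b64url(tpms_attest)],
        "tpm_signature": ["application/vnd.example.tpmt-signature", b64url(tpmt_signature)],
        "ima_log": ["application/vnd.example.cel+json", b64url(ima_cel)],
        "measured_boot_log": ["application/vnd.example.cel+json", b64url(boot_cel)],
        "keylime_metadata": ["application/json", b64url(json.dumps(metadata).encode())],
    }
    return json.dumps(collection)

# The agent would serve something of this shape for a new-API-version request.
print(wrap_keylime_evidence(b"\x00attest", b"\x00sig", b"{}", b"{}",
                            {"public_key": "<agent-public-key>", "boot_time": 0}))
```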

Standardizing Appraisal Results: EAT Attestation Results (EAR)

Beyond standardizing evidence, it is equally important to standardize the results of attestation. This is where the EAT Attestation Results (EAR) format comes into play. EAR provides a flexible and extensible data model for conveying attestation results, allowing a Verifier to summarize the trustworthiness of an Attester concisely and verifiably.

My contribution to EAT standardization focused on two main fronts:

  1. Developing a Python Library (python-ear): I developed a Python library that implements the EAT Attestation Results (EAR) data format, as specified in draft-fv-rats-ear. This library provides essential functionalities:
  • Claim Population: Defining and populating various EAR claims (e.g., instance_identity, hardware, executables, configuration) that represent appraisal outcomes.
  • Serialization/Deserialization: Encoding EAR claims as JSON Web Tokens (JWT) or CBOR Web Tokens (CWT) and decoding them.
  • Signing and Verification: Supporting cryptographic signing of EAR claims with private keys and verification with public keys to ensure data integrity and authenticity.
  • Validation: Implementing validation logic to ensure EAR objects adhere to the specified schema.
  2. Keylime EAT Plugin: This work extends Keylime’s durable attestation framework by integrating EAT-based appraisal logic. The goal is to transform raw attestation evidence and policy data into structured AR4SI TrustVector claims, thereby enhancing the auditability and semantic richness of attestation outcomes. This critical step involved:
  • Evidence Validation: Leveraging Keylime’s existing functions to perform comprehensive validation of TPM quotes, IMA measurements, and measured boot logs.
  • Failure Mapping: Precisely mapping the various Failure events generated during Keylime’s internal validation processes to specific TrustClaim values within the EAT TrustVector. For instance, a quote validation failure indicating an invalid public key would map to an UNRECOGNIZED_INSTANCE claim.
  • State Management: A significant challenge was ensuring that the EAT appraisal logic could utilize Keylime’s validation functions without inadvertently altering the agent’s internal state, which could interfere with Keylime’s continuous attestation workflow. This necessitated careful refactoring and the introduction of flags to prevent state changes.
  • Submodule Status: Defining how the overall status of the EAT submodule (e.g., “affirming,” “warning,” “contraindicated”) is derived from the aggregated TrustClaim values.

The implementation of EAT is vital for realizing the full potential of remote attestation. It provides a common language for trustworthiness, allowing Relying Parties to make automated, policy-driven decisions based on a consistent, verifiable attestation result, irrespective of the underlying hardware or software components being attested.
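
To give a feel for the end result, here is a minimal sketch that maps a couple of hypothetical Keylime failure events onto a trustworthiness vector, assembles an EAR-style claims-set, and signs it as a JWT using PyJWT. The claim names follow the EAR draft, but the profile tag, failure identifiers, AR4SI code points, and the affirming/contraindicated threshold are simplified placeholders rather than the exact values used by python-ear or the Keylime plugin.

```python
# Illustrative sketch: failure events -> TrustVector -> signed EAR-style JWT.
import time

import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical failure-event identifiers mapped to (trust claim, code point).
# The code points are placeholders; registered values come from AR4SI.
FAILURE_TO_CLAIM = {
    "quote_validation.invalid_public_key": ("instance-identity", 97),  # cf. UNRECOGNIZED_INSTANCE
    "ima.policy_violation": ("executables", 96),
    "measured_boot.unexpected_pcr": ("hardware", 96),
}

def build_ear_claims(failures: list) -> dict:
    """Aggregate failures into a TrustVector and derive the submodule status."""
    vector = {"instance-identity": 2, "hardware": 2, "executables": 2, "configuration": 2}
    for failure in failures:
        claim, code = FAILURE_TO_CLAIM.get(failure, ("configuration", 96))
        vector[claim] = code
    # Simplified two-way decision; a real plugin also distinguishes "warning".
    status = "affirming" if all(v < 32 for v in vector.values()) else "contraindicated"
    return {
        "eat_profile": "tag:github.com,2023:veraison/ear",  # placeholder profile tag
        "iat": int(time.time()),
        "ear.verifier-id": {"developer": "https://keylime.dev", "build": "demo"},
        "submods": {
            "keylime-tpm": {
                "ear.status": status,
                "ear.trustworthiness-vector": vector,
            }
        },
    }

# Sign the claims-set as an ES256 JWT that a Relying Party can verify later.
signing_key = ec.generate_private_key(ec.SECP256R1())
pem = signing_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
print(jwt.encode(build_ear_claims(["ima.policy_violation"]), pem, algorithm="ES256"))
```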

Conclusion and Future Outlook

This LFX Mentorship has been an invaluable journey, providing a unique opportunity to contribute to the evolving landscape of confidential computing. By focusing on RATS architecture mapping, implementing the Conceptual Message Wrapper for evidence, and integrating Entity Attestation Tokens for appraisal results, we have made tangible steps towards enhancing interoperability, standardization, and the overall security posture of open-source remote attestation solutions.

The work on CMW and EAT is critical for fostering a more robust and scalable trusted and confidential computing ecosystem. It enables easier integration of diverse attestation technologies and provides a unified, machine-readable format for conveying trustworthiness. My gratitude goes to my mentors, Thore Sommer and Thomas Fossati, for their guidance, insights, and continuous support throughout this program.

While significant progress has been made, the journey towards a fully harmonized remote attestation ecosystem continues. Future efforts will involve full upstreaming of these changes into the respective projects and exploring broader adoption across the confidential computing landscape, further solidifying the foundations of trust in a dynamic digital world.

References

  1. IETF’s Remote Attestation Procedures (RATS) architecture
  2. Keylime
  3. JANE
  4. Veraison
  5. CMW (Conceptual Messages Wrapper)
  6. EAT (Entity Attestation Token)
  7. EAR (EAT Attestation Results)
  8. Canonical Event Log (CEL)
  9. python-ear library

Welcoming Tinfoil to the Confidential Computing Consortium


We’re thrilled to welcome Tinfoil as the newest start-up member of the Confidential Computing Consortium (CCC)!

Tinfoil is an open source platform delivering cryptographically verifiable privacy for AI workloads. Their mission is to make it safe to process sensitive data through powerful AI models—without compromising user privacy. By leveraging confidential computing technologies, including NVIDIA’s confidential computing-enabled GPUs, Tinfoil ensures that no one—not even Tinfoil or the cloud provider—can access private user data. The platform also safeguards AI model weights from unauthorized access and supports end-to-end supply chain security guarantees.

“We’re excited to collaborate with the community to make hardware-backed AI privacy the standard.” — Tanya Verma, CEO of Tinfoil

As a company deeply invested in confidential computing, Tinfoil is joining CCC to both learn from and contribute to the broader ecosystem. Their team is especially interested in collaborating with others working at the intersection of secure hardware and AI, and in helping shape future standards for confidential AI. Currently, they’re using Ubuntu Confidential VMs from Canonical and NVIDIA’s verification tools, with plans to contribute to these open source projects over time.

We’re excited to have Tinfoil join the CCC community and look forward to the insights and innovation they’ll bring as we work together to advance the future of trusted, verifiable computing.

Now Available – Recordings from CCC’s Mini Summit at OSS NA 2025


The recordings from the Confidential Computing Consortium Mini Summit at Open Source Summit North America 2025 are now available.

All sessions have been uploaded to the CCC YouTube channel, featuring a range of insightful talks from across the confidential computing ecosystem.

If you missed the summit or want to revisit any of the sessions, catch up with the recordings below:

Introduction – Mike Bursell

Confidential Computing for Scaling Inference Workloads – Julian Stephen

Scaling Trust for Autonomous Intelligence with NVIDIA – Karthik Mandakolathur

Trustless Attestation Verification in Distributed Confidential Computing – Donghang Lu

Wrap Up – Mike Bursell

The talks cover community updates, technical discussions, and real-world use cases—offering valuable insights into the future of confidential computing.

Thank you to all our speakers, contributors, and attendees. Stay tuned for more updates from the CCC and get involved today.

Welcome Mainsail Industries as a New Confidential Computing Consortium Start-up Member!


We’re thrilled to welcome Mainsail Industries as the newest start-up member of the Confidential Computing Consortium (CCC)! As pioneers in secure edge virtualization, Mainsail is joining a global community of leaders who are shaping the future of confidential computing—together.

About Mainsail Industries

Mainsail Industries is on a mission to deliver the world’s most secure edge virtualization platform and common computing environment—safeguarding critical infrastructure and the defense industrial base, while enabling organizations to modernize and achieve mission success.

At the heart of their innovation is Metalvisor, a secure, cloud-native virtualization platform purpose-built for the modern edge. Designed with simplicity, scalability, and security in mind, Metalvisor helps organizations extend the life of their most critical assets and meet the evolving demands of today’s mission-critical workloads.

What is Metalvisor?

Metalvisor is redefining what secure virtualization can look like. Unlike traditional hypervisors, Metalvisor is designed for modern workloads—Virtual Machines (VMs), MicroVMs, and Containers—while eliminating the operational complexity that often comes with secure infrastructure. It leverages cutting-edge technologies to streamline cluster management, support cloud-native patterns, and ensure security through Trusted Execution Environments (TEEs) and Trusted Workload Identity (TWI).

Metalvisor in Action:

  • Secure Edge Computing: Metalvisor brings cloud-native capabilities to the edge, optimizing size, weight, power, and cost (SWaP-C) for environments where security and performance are paramount.
  • Secure Containers: Simplifies virtualization for container-based workloads, blending the agility of containers with the protection of next-generation hypervisors.
  • Secure AI: Protects sensitive AI/ML workloads through TEEs and TWI, ensuring both data and model integrity via hardware-rooted trust.

Why Mainsail Joined the CCC

“Joining the Confidential Computing Consortium is an exciting milestone for Mainsail. As CTO, I’m inspired by the level of thought leadership and collaboration happening within the CCC. It’s rare to find a space where so many different organizations come together to shape the future of secure computing, and I believe this collective effort will have a lasting, global impact.”
— Brad Sollar, CTO & Co-Founder

Mainsail sees the CCC as both a community of peers and a catalyst for impact. With deep experience in trusted workloads, confidential virtualization, and workload identity, the team is eager to share insights from building Metalvisor—and to learn from other contributors tackling similar challenges.

Mainsail is especially excited to contribute to the development of standards and best practices around Trusted Workload Identity—a key capability in delivering secure, scalable computing environments.

Contributing to the Ecosystem

Mainsail is actively contributing to the Trusted Workload Identity (TWI) Special Interest Group, collaborating with 21 other contributors to advance the trustworthiness and interoperability of workload identity solutions across platforms.

“Collaborating with 21 other contributors in the Trusted Workload Identity (TWI) SIG reaffirmed Metalvisor’s leadership in confidential computing. We’re proud to be shaping the future of this next-generation technology, bridging the gap between trusted execution environments and trusted workloads—a capability Metalvisor has delivered since day one.”
— Eric Wolfe, Chief Engineer

Please join us in giving a warm welcome to the team at Mainsail Industries! We look forward to the expertise and innovation they’ll bring to the Confidential Computing Consortium.

Reporting on the Endorsement API Workshop at Linaro Connect 2025


Last month saw the annual gathering of engineers and experts from across the Arm ecosystem for the Linaro Connect 2025 conference, which this year took place in Lisbon. Read our earlier blog post for a preview and some background about this event.

As promised, confidential computing was an important theme at this year’s conference. Highlights included a keynote from Mike Bursell and a presentation from Fujitsu on how Confidential AI workloads will be powered by their FUJITSU-MONAKA processor, based on Arm’s Confidential Compute Architecture (CCA).

In this blog post, we’ll reflect on proceedings from the Endorsement API Workshop, which was a full-day event that was co-located with the conference. The workshop assembled a diverse group of expert representatives, from across industry and academia, for an intensive day of focused collaboration. The goal was to address a growing challenge in confidential computing: the distribution of the endorsements and reference values that are so essential to the attestation process, without which we cannot establish trust in our confidential computing environments. It is a data management problem that spans the entire industry, from supply chains all the way to application deployment. How do we tame complexity and fragmentation? How do we scale?

The workshop combined a morning of live, hands-on prototyping with an afternoon of presentations, proposals, and discussions.

Key Take-Aways

It was a packed and energetic day, with all participants demonstrating their shared belief that there is a lot of work to do and genuine value to be gained for the industry. Here’s a selection of some of the stand-out topics and activities from the day:

  • A brainstorming conversation to elaborate more precise requirements
  • An exploration of some of the existing, vendor-specific solutions, and how those might inspire new common solutions
  • A survey of the standardisation landscape and the organisations involved
  • An innovative proposal to use Manufacturer Usage Description (MUD) files as a resource for the discovery of endorsement artifacts and services
  • A presentation and discussion of the new CoSERV query language, which is designed to facilitate the transfer of endorsement data between producers and consumers in a uniform and scalable way
  • An update on the proof-of-concept implementation of CoSERV that is currently ongoing in the CCC’s Veraison project.

Read the Full Workshop Report

The workshop has its own repository on GitHub, where you can review the full agenda, along with the list of participants. The full recordings for the afternoon session are also available in the repository, as is the detailed written report. You can also access the report directly here.

Get Involved

The workshop was a chapter in an ongoing story, which you can help to shape. Here are some ways that you can stay informed as this work progresses, or become an active collaborator:

  • Follow the IETF RATS Working Group through its meetings and mailing list
  • Follow the CCC Attestation SIG and join its regular public meetings or its Slack community
  • Follow the Veraison project through its regular meetings or its Zulip chat community

Let’s keep working together, openly, to make the attestation-based secure ecosystem a success.

EQTY Lab Joins the Confidential Computing Consortium to Reinvent Trust in AI


EQTY Lab, a pioneering startup dedicated to securing the future of artificial intelligence, is joining the Confidential Computing Consortium (CCC) as a Startup Member. Known for its innovative work in cryptographic AI governance, EQTY Lab has developed technologies that bring integrity, transparency, and accountability to high-stakes AI deployments across sectors like the public sector, life sciences, and media.

The CCC is excited to welcome EQTY Lab into its growing community of leaders advancing confidential computing. By joining the consortium, EQTY Lab deepens its commitment to building systems that protect sensitive data and enable trust throughout the AI lifecycle. Their flagship solution, the AI Integrity Suite, uses confidential computing and verifiable compute to provide cryptographic proofs of AI operations, making agentic training and inference both secure and auditable.

“At EQTY Lab, we believe the future of AI depends on creating systems that can be trusted with sensitive data and mission-critical decisions,” said Jonathan Dotan, CEO of EQTY Lab. “Joining the Confidential Computing Consortium represents a significant step in our mission to build verifiable AI systems that operate with both privacy and accountability that can now begin on the processor itself.”

EQTY Lab’s recent launch of a Verifiable Compute solution marks a milestone in confidential AI. The platform uses hardware-based cryptographic notaries, leveraging CCC technologies like VirTEE on AMD SEV and exploring future adoption of COCONUT-SVSM. This ensures a tamper-proof record of every data object and code executed during AI workloads.

By participating in CCC, EQTY Lab aims to integrate deeper with open source projects and contribute to developing next-generation specifications for secure AI. Their work spans from implementing Intel’s TDX and Tiber solutions to contributing to Linux Foundation efforts like SPDX and SLSA, aligning secure enclave attestations with modern SBOM standards.

EQTY Lab joins a vibrant community of innovators within the CCC, committed to ensuring that confidential computing becomes the foundation of secure, trustworthy, and privacy-preserving technologies.

Confidential Computing Consortium Resources:

Follow us on X or LinkedIn

Shaping the Future of Attestation: Linaro to Host Endorsement API Workshop at Linaro Connect 2025


This year’s Linaro Connect conference in Lisbon promises to be a landmark event for the confidential computing community. With multiple talks, workshops, and roundtables focused on trusted execution environments, attestation, and supply chain trust, confidential computing has emerged as an important theme of the 2025 conference.

Among the highlights: a keynote address from Mike Bursell, Executive Director of the Confidential Computing Consortium, who will share his insights on how industry-wide collaboration and open source are essential for the long-term success of this technology as it becomes mainstream.
Mike’s keynote is especially timely and relevant in the context of this year’s conference, where no fewer than 10 technical sessions are listed in the confidential computing track, from organisations including Arm, Linaro, Fujitsu and Huawei.

And it doesn’t end there.

On Tuesday May 13th (the day before the main conference), Linaro is hosting a full-day workshop on the topic of Endorsement APIs. This workshop brings together engineers, researchers, standards bodies, and open source contributors to tackle one of the most pressing challenges in remote attestation: how to securely and efficiently distribute Endorsements and Reference Values across the diverse ecosystem of confidential computing platforms and applications.

Why Endorsement APIs Matter

In the Remote Attestation Procedures (RATS) architecture, Endorsements and Reference Values are essential artefacts for attestation evidence appraisal. They can originate from various sources throughout the supply chain, including silicon manufacturers, hardware integrators, firmware providers, and software providers. Their distribution is influenced by technical, commercial, and even geopolitical factors. The potential consumers of these artefacts, referred to as “Verifiers” in RATS terms, include cloud-hosted verification services, local verifiers bundled with relying parties, constrained nodes, and endpoint devices. This acute diversity creates challenges for software integration and poses fragmentation risks. Aligning on data formats and APIs will help address these challenges and maximise software component reuse for data transactions between endpoints.
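
As a purely hypothetical illustration of the problem space, the sketch below models the kind of query/response exchange between a Verifier and an endorsement service that a standardised API would need to pin down. None of the type or field names come from CoSERV or any published specification; they exist only to make the producer and consumer roles tangible.

```python
# Hypothetical data model for a Verifier fetching endorsement artefacts.
from dataclasses import dataclass, field

@dataclass
class EndorsementQuery:
    environment_class: str      # identifier for the attesting platform's make/model
    artifact_types: list        # which artefacts the Verifier needs (reference values, endorsements, ...)

@dataclass
class ReferenceValue:
    measurement_id: str         # which measured component this digest covers
    digest_alg: str
    digest: str

@dataclass
class EndorsementResponse:
    issuer: str                 # supply-chain actor vouching for the values
    reference_values: list = field(default_factory=list)
    signature: bytes = b""      # detached signature over the payload

def fetch_endorsements(service_url: str, query: EndorsementQuery) -> EndorsementResponse:
    """Placeholder transport: a real design must settle the wire format,
    authentication, caching and freshness semantics, which are exactly the
    questions the workshop set out to explore."""
    raise NotImplementedError("standardised endorsement APIs are work in progress")
```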

A Space for Open Collaboration

Sharing its venue with the main Linaro Connect conference — the Corinthia Hotel in Lisbon — the workshop will combine hackathon-style prototyping sessions in the morning with interactive presentations and roundtables in the afternoon.
Confirmed participants include representatives from:

  • Arm
  • Intel
  • Microsoft Azure
  • Fujitsu
  • Oracle
  • IBM Research
  • NIST
  • Fraunhofer SIT
  • Alibaba
  • CanaryBit
  • and several university research groups

Activities on the day will include:

  • Gathering requirements from stakeholders
  • Surveying existing services and tools
  • Examining the interaction models between producers and consumers
  • Designing standardised APIs for retrieving endorsement artefacts from the supply chain
  • Hands-on prototyping

And most importantly, this is a space where implementers and spec authors can come together to turn ideas into prototypes, and prototypes into common solutions.

What is Linaro Connect?

If you’re new to the event, Linaro Connect is the premier open engineering forum for Arm software ecosystems. It brings together maintainers of open source projects, engineers from major silicon vendors, and contributors to key standards and security initiatives — all under one roof.

Whether you’re working on Linux kernel internals, UEFI, Trusted Firmware, or emerging attestation stacks, Linaro Connect is the place to share ideas, get feedback, and shape the direction of trusted computing.

You can view the full schedule for this year’s conference here.

Stay Tuned

We’ll publish a follow-up blog after the workshop, summarizing key outcomes, emerging standards proposals, and concrete next steps. Whether you’re building a verifier, defining a token format, or just starting to explore confidential computing, this is a conversation you’ll want to follow.

See you in Lisbon.

Showcasing ManaTEE at FOSDEM 2025


This post was originally shared on developers.tiktok.com

By Dayeol Lee, Research Scientist at TikTok Privacy Innovation Lab, and Mateus Guzzo, Open Source Advocate

At FOSDEM 2025, Dayeol Lee, a Research Scientist at TikTok’s Privacy Innovation Lab, introduced ManaTEE, an open-source framework designed to facilitate privacy-preserving data analytics for public research. The framework integrates Privacy-Enhancing Techniques (PETs), including confidential computing, to safeguard data privacy without compromising usability. It offers an interactive interface through JupyterLab, providing an intuitive experience for researchers and data scientists. ManaTEE leverages Trusted Execution Environments (TEEs) to ensure both data confidentiality and execution integrity, fostering trust between data owners and analysts. Additionally, it provides proof of execution through attestation, enabling researchers to demonstrate the reproducibility and integrity of their results. The framework simplifies deployment by leveraging cloud-based confidential computing backends, making secure and private data analytics accessible and scalable for diverse use cases.

The video recording of Dayeol Lee’s presentation is available for viewing here.

ManaTEE was originally developed by TikTok as a privacy solution for secure data collaboration and has been donated to the Linux Foundation’s Confidential Computing Consortium. ManaTEE is also the core privacy-preserving technology powering TikTok Research Tools, such as the TikTok Virtual Compute Environment (VCE). The framework is designed to meet the increasing need for secure data collaboration, addressing critical challenges in data privacy and security.

Private data for public interest

Private data is highly valuable to businesses, which can extract significant insights from it. However, many overlook the value of private data for the public interest. Personal or proprietary data can be combined to provide insights into various public research domains such as public health, public safety, and education. For example, medical data could be combined with personal dietary data to offer insights into how personal habits impact health.

Data analytics for public interest often requires the combination of numerous datasets to ensure accurate insights and conclusions. Sometimes these datasets come from different sources. There are several challenges to fully combining these datasets. Multiple data providers may have conflicting interests and enforce different privacy policies and compliances. Moreover, data may be distributed across many platforms, including on-premise clusters, clouds, and data warehouses, making it hard to ensure all computations on the data are accountable and transparent.

What is ManaTEE?

To fully enable privacy-preserving data analytics for public interest, we need a standardized approach that provides strong privacy protection with technical enforcement, as well as accountability and transparency. Moreover, we need a framework that is easy to deploy and use.

We find that existing technical solutions such as differential privacy and trusted execution environments offer strong building blocks for achieving these goals. We believe that a well-designed system can combine these existing techniques into a standardized approach to private data analytics.
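
For readers unfamiliar with the first of these techniques, here is a tiny, self-contained example of the kind of output protection differential privacy provides: a count query released with calibrated Laplace noise, so that the presence or absence of any single record has only a bounded effect on the published result. The epsilon value and data are arbitrary demo choices.

```python
# Toy example of the Laplace mechanism for an epsilon-DP count query.
import random

def dp_count(records, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A count query has sensitivity 1, so Laplace(0, 1/epsilon) noise suffices;
    the difference of two Exponential(epsilon) draws has exactly that
    distribution.
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return len(records) + noise

# e.g. releasing how many study participants matched a sensitive criterion
print(dp_count(range(1042), epsilon=0.5))
```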

We decided to design and build ManaTEE, a framework that allows data owners to securely share their data for public research, with technically enforced privacy, accountability, and transparency guarantees. With the framework, researchers can gain accurate insights from private or proprietary datasets.

ManaTEE community release


The first community release of ManaTEE includes easy deployment options, a comprehensive demo tutorial, and an extensible framework ready for contributions. Future plans for ManaTEE involve expanding backend support to multi-cloud and on-prem solutions, integrating privacy-compliant data pipelines, enhancing output privacy protections, and supporting confidential GPUs for AI workloads.

For those interested in exploring ManaTEE further, the project is available on GitHub, and the community is encouraged to contribute to its development. The open governance model under the Confidential Computing Consortium aims to foster a vibrant ecosystem of contributors to enhance the project with new features, improved security, and more use cases.