
Does Confidential Computing work with Containers? 


By Dan Middleton, Intel Senior Principal Engineer and Chair, CCC Technical Advisory Council

The term container can be ambiguous. Here are 3 different representations of what people might mean by a container.

I’m often posed with questions about Confidential Computing and containers. Often, the question is something to the effect of, “Does Confidential Computing work with Containers?” or “Can I use Confidential Computing without redesigning my containers?” (Spoilers: Yes and Yes. But it also depends on your security and operational goals.)

The next question tends to be, “How much work will it be for me to get my containerized applications protected by Confidential Computing?” But there are a lot of variations to these questions, and it’s often not quite clear what the end goal is. Part of the confusion comes from “container” being a sort of colloquialism; it can mean a few different things depending on the context.

In Confidential Computing, we talk about the protection of data in use, in contrast with the protection of data at rest or in transit. So, if we apply the same metaphor to containers, we can see three different embodiments of what a container might mean.

In the first case, a container is simply a form of packaging, much like a Debian package or an RPM. You could think of it as a glorified zip file. It contains your application and its dependencies. This is really the only standardized definition of a container, from the OCI image spec. There aren't many packaging considerations relevant to Confidential Computing, so this part is pretty much a no-op.
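
To make the packaging view concrete, here is a small Python sketch (my addition, not from the article) that reads a local OCI image layout, the kind of directory a command like `skopeo copy docker://docker.io/library/nginx:latest oci:./nginx-oci:latest` produces, and prints the entry point. The `./nginx-oci` path and the single-architecture assumption are illustrative.

```python
import json
from pathlib import Path

def read_blob(layout: Path, digest: str) -> dict:
    """Resolve a 'sha256:<hex>' digest to its blob file in the layout and parse it as JSON."""
    algo, value = digest.split(":", 1)
    return json.loads((layout / "blobs" / algo / value).read_text())

def print_entrypoint(layout_dir: str) -> None:
    layout = Path(layout_dir)
    index = json.loads((layout / "index.json").read_text())
    manifest = read_blob(layout, index["manifests"][0]["digest"])  # first (assumed only) manifest
    config = read_blob(layout, manifest["config"]["digest"])       # the image config blob
    runtime_cfg = config.get("config", {})
    print("Entrypoint:", runtime_cfg.get("Entrypoint"))
    print("Cmd:       ", runtime_cfg.get("Cmd"))
    print("Layers:    ", len(manifest.get("layers", [])))

if __name__ == "__main__":
    print_entrypoint("./nginx-oci")  # illustrative path; see the skopeo command above
```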

The next thing people might mean when they talk about a container is the containerized application at runtime. The container image includes an entry point, which is the process that will be launched. Now, that process is also pretty boring. It's just a normal Linux process. There's no special layer intermediating instructions, like a JVM or anything like that. The thing that makes it different is that the operating system hides the rest of the system from the process (namespaces) and can restrict its resources (cgroups). This is also referred to as sandboxing. So again, from a Confidential Computing perspective, there's nothing different that we would do for a container process than what we would do for any other process.
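
As a rough illustration of that sandboxing (my addition, not part of the article), the Python sketch below uses the util-linux `unshare` command for namespacing and the cgroup v2 filesystem for a resource limit. It assumes a Linux host with cgroup v2 and root privileges; the `demo-sandbox` group name and the 256M limit are arbitrary.

```python
import subprocess
from pathlib import Path

def run_sandboxed(cmd: list[str], mem_limit: str = "256M") -> None:
    """Run a command as an ordinary Linux process, but namespaced and resource-limited.

    Illustrative only: needs root on a cgroup v2 Linux host.
    """
    # 1. cgroups: cap memory for everything we place in this group.
    cg = Path("/sys/fs/cgroup/demo-sandbox")
    cg.mkdir(exist_ok=True)
    (cg / "memory.max").write_text(mem_limit)

    # 2. namespaces: give the child its own PID, mount, UTS, and network view.
    unshared = ["unshare", "--fork", "--pid", "--mount-proc", "--uts", "--net"] + cmd
    proc = subprocess.Popen(unshared)

    # 3. move the child into the cgroup so the memory limit applies.
    (cg / "cgroup.procs").write_text(str(proc.pid))
    proc.wait()

if __name__ == "__main__":
    # Inside the sandbox, `ps` sees only the processes in its own PID namespace.
    run_sandboxed(["ps", "-ef"])
```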

However, because the container image format and sandboxing have become so popular, an ecosystem has grown up around them to provide orchestration. Orchestration is another term that's used colloquially. When you want to launch a whole bunch of web applications spread across maybe a few different geographies, you don't want to do that same task 1,000 times manually. You want it to be automated. And so, I think 90% of the time, maybe 99% of the time, when people ask questions about containers and Confidential Computing, they're wondering whether Confidential Computing is compatible with their orchestration system.


Administrative users operate a control plane which starts and stops containers inside nodes (which are often virtual machines). A Pod is a Kubernetes abstraction which has no operating system meaning – it is one or more containers each of which is a process.

One of the most popular orchestration systems is Kubernetes (K8s for short). Now, there are many distributions of K8s under different names, and there are many orchestration systems that have nothing to do with K8s. But given its popularity, let’s use K8s as an example to understand security considerations.

For our purposes, we’ll consider two K8s abstractions: the Control Plane and Nodes. The Control Plane is a collection of services used to send commands out to start, monitor, and stop containers across a fleet of nodes. Conventionally, a node is a virtual machine, and your containerized applications can be referred to as pods. From an operating system perspective, a pod is not a distinct abstraction; it’s sufficient to think of a pod as one or more containers, or equivalently one or more Linux processes. So, we have this control plane, a few services that help manage the containers launched across a fleet of virtual machines.
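
To make that split concrete, here is a minimal sketch (assuming the official `kubernetes` Python client and a kubeconfig that points at a cluster) that asks the control plane's API server for the fleet of nodes and the pods scheduled onto them.

```python
from kubernetes import client, config

def show_fleet() -> None:
    """List the nodes (VMs) and the pods (processes) the control plane is managing."""
    config.load_kube_config()   # control-plane credentials from ~/.kube/config
    core = client.CoreV1Api()   # client for the API server, part of the control plane

    for node in core.list_node().items:
        print("node:", node.metadata.name)

    for pod in core.list_pod_for_all_namespaces().items:
        print(f"pod {pod.metadata.namespace}/{pod.metadata.name} "
              f"-> node {pod.spec.node_name} ({pod.status.phase})")

if __name__ == "__main__":
    show_fleet()
```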

Now we can finally get into the Confidential Computing-related security considerations. If we were talking about adversary capabilities, the Control Plane has remote code execution, which is about as dangerous as an attacker can be. But is the Control Plane an adversary? What is it that we really want to isolate here, and what is it that we trust? There are any number of possible permutations, but they really collapse down to about four different patterns.


Four isolation patterns that recognize different trust relationships with the control plane.

In the first pattern, we want to isolate our container, and we trust nothing else. In the second case, we may have multiple containers on the same node that need to work together, so we could think of our isolation unit as a pod, though more pragmatically it is a virtual machine. Now, in both of these cases, but especially the second, the control plane still has influence over the container and its environment, no matter how it’s isolated. To be clear, the control plane can’t directly snoop on the container in either case, but you may want to limit the amount of configuration you delegate to the control plane.

And so, in the third case, we put the whole control plane, which means each of the control plane services, inside a Confidential Computing environment. Maybe more importantly, we operate the control plane ourselves, removing the third-party administrator entirely. It’s commonly the case, though, that companies don’t want to operate all of the K8s infrastructure themselves, and that’s why there are managed K8s offerings from cloud service providers. And that brings us to our last case, where we decide that we trust the CSP, and we’re just going to sort of ignore the fact that the control plane has remote code execution inside what is otherwise our isolated VM for our pods or containers.


Process and VM Isolation examples with associated open source and commercial projects.

Let’s make this a little bit more concrete with some example open-source projects and commercial offerings. The only way to actually isolate a container, which means isolating a process, is with Intel® Software Guard Extensions (Intel® SGX) using an open-source project like Gramine or Occlum. So, if we come back to the question, “How much work do I have to do here?” there is at least a little bit of work because you’ll use these frameworks to repackage your application. You don’t have to rewrite your application, you don’t have to change its APIs, but you do need to use one of these projects to wrap your application in an enclave. This arguably gives you the most stringent protection because here you are only trusting your own application code and the Gramine or Occlum projects.

To the right, your next choice could be to isolate by pod. In practice, this means isolating at the granularity of a virtual machine (VM). Using an open-source project like CNCF Confidential Containers (CoCo) lets you take your existing containers and use the orchestration system to target Confidential Computing hardware. CoCo can also target Intel® SGX hardware using Occlum, but more commonly CoCo is used with VM isolation capabilities through Intel® Trust Domain Extensions (Intel® TDX), AMD Secure Encrypted Virtualization (SEV)*, Arm Confidential Computing Architecture (CCA)*, or eventually RISC-V CoVE*. And there’s a little bit of work here too. You don’t have to repackage your application, but you do need to use this enlightened orchestration system from CoCo (or a distribution like Red Hat OpenShift Sandboxed Containers*). These systems will launch each pod in a separate confidential virtual machine. They have taken pains to limit what the control plane can do and inspect, and there is a good barrier between the CVM and the control plane. However, it is a balancing act to limit the capabilities of the control plane when those capabilities are largely why you are using orchestration to begin with.
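
As a hedged sketch of what that looks like from the user's side: in CoCo-style deployments, targeting confidential hardware typically comes down to selecting a runtime class on the pod rather than changing the image. The example below uses the `kubernetes` Python client; the runtime class name `kata-qemu-tdx`, the image reference, and the namespace are illustrative assumptions, since the actual names depend on your installation.

```python
from kubernetes import client, config

def launch_confidential_pod(runtime_class: str = "kata-qemu-tdx") -> None:
    """Ask the orchestrator to place an unmodified container inside a confidential VM.

    The runtime class name is deployment-specific; "kata-qemu-tdx" is only an
    illustrative value from a hypothetical CoCo install.
    """
    config.load_kube_config()
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="cc-demo"),
        spec=client.V1PodSpec(
            runtime_class_name=runtime_class,  # routes the pod to the confidential runtime
            containers=[
                client.V1Container(
                    name="app",
                    image="registry.example.com/team/app:1.0",  # your existing image, unchanged
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    launch_confidential_pod()
```

The point to notice is that the application image itself is unchanged; only the scheduling request differs.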

Edgeless Systems Constellation* strikes a slightly different balance. If you don’t want to trust the control plane but you still want to use CSP infrastructure or some other untrusted data center, Constellation will run each control plane service in a confidential VM and then also launch your pods in confidential VMs. But operating K8s isn’t for everyone, so when it comes to how much work is involved, it depends on whether you already operate K8s. If you don’t normally operate K8s, this would be a significant increase in effort. There are no changes that you need to make to your applications, though, and if your company is already in the business of operating its own orchestration systems, then there’s arguably no added cost or effort here.

But for those organizations that rely on managed services from CSPs, you can make use of confidential instances in popular CSPs such as Azure Kubernetes Service (AKS)* and Google Kubernetes Engine (GKE)*. And this is generally very simple, like checkbox simple, but it comes with the caveat that you are trusting the CSP’s operation of your control plane. Google makes this explicit in some very nice documentation (https://cloud.google.com/kubernetes-engine/docs/concepts/control-plane-security).

Now, which one of these four is right for your organization depends on the things that we’ve just covered, but also a few other considerations. The user that chooses container isolation generally is one that has a security-sensitive workload where a compromise of the workload has real consequences. They might also have a multiparty workload where management of that workload by any one of those parties works against the common interests of that group.


Typically, those with security-sensitive or multiparty workloads will isolate at process granularity. VM isolation can be implemented differently based on whether the control plane is trusted or not.

Users of CNCF Confidential Containers probably don’t fully trust the CSP, or they want defense in depth against the data center operator, whether that’s a CSP or their own enterprise on-prem data center. More importantly, they probably only want to deploy sensitive information or a cryptographic secret if they can assess the security state of the system. This assessment is called Remote Attestation. Attestation is a fun topic and one of the most exciting parts of Confidential Computing, but it could be an article unto itself. So, to keep things brief, we’ll just stick with the idea that you can make an automated runtime decision about whether a system is trustworthy before deploying something sensitive.
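
To keep that idea concrete without tying it to any particular attestation protocol, here is a toy Python sketch of the "appraise, then release" pattern. Everything in it is a stand-in: a local HMAC key plays the role of the hardware root of trust, and the JSON claims stand in for a real attestation report.

```python
import hashlib
import hmac
import json
import os

# Toy stand-ins: in real Confidential Computing the "signature" comes from the CPU
# (e.g. a TDX or SEV-SNP report) and the verifier is an attestation service.
HW_KEY = os.urandom(32)  # pretend hardware root key (toy only)
TRUSTED_MEASUREMENT = hashlib.sha256(b"my-approved-workload").hexdigest()

def collect_evidence(measurement: str) -> dict:
    """Workload side: produce 'evidence' binding a measurement to the hardware key."""
    body = json.dumps({"measurement": measurement}).encode()
    return {"body": body, "sig": hmac.new(HW_KEY, body, hashlib.sha256).hexdigest()}

def appraise(evidence: dict) -> dict | None:
    """Verifier side: check the signature, then return the claims for policy checks."""
    ok = hmac.compare_digest(
        evidence["sig"], hmac.new(HW_KEY, evidence["body"], hashlib.sha256).hexdigest()
    )
    return json.loads(evidence["body"]) if ok else None

def release_secret(evidence: dict) -> bytes | None:
    """Relying party: deploy the secret only if the appraised claims match policy."""
    claims = appraise(evidence)
    if claims and claims["measurement"] == TRUSTED_MEASUREMENT:
        return b"database-credential"  # released only to the attested environment
    return None

if __name__ == "__main__":
    good = collect_evidence(TRUSTED_MEASUREMENT)
    bad = collect_evidence(hashlib.sha256(b"tampered-workload").hexdigest())
    print("trusted   ->", release_secret(good))
    print("untrusted ->", release_secret(bad))
```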

Now let’s look at the last two personas on the right of the diagram. Users of Constellation may not trust a CSP, or they may use a multi-cloud hosting strategy where it’s more advantageous for them to operate the K8s control plane themselves anyway. For users of CSP managed K8s, the CSP does not present a risk but the user certainly wants defense in depth protections against other tenants using that same shared infrastructure. In these latter two cases, Remote Attestations may also be desired, but used in more passive ways. For example, from an auditing perspective, logging the Attestation can show compliance that an application was run with protection of data in use.

In this article, we’ve covered more than a few considerations, but there is certainly more to understand about each of these four patterns before making an informed choice on security and operational grounds. I hope this arms you, though, with the next set of questions to pursue on the way to that informed choice.

[Edit 3/7: Clarified control plane influence per feedback from Benny Fuhry.]

Legal Disclaimers

Intel technologies may require enabled hardware, software or service activation.

No product or component can be absolutely secure. 

Your costs and results may vary. 

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  Other names and brands may be claimed as the property of others.

Welcome to the 2025 February Newsletter


February’s Issue:

  • From the Executive Director
  • NEW! Outreach: Job Board Page Now Live!
  • Upcoming Events
  • From the TAC: Trustworthy Workload Identity
  • Recent News

Hello Community Member, welcome to our latest newsletter, where we share some highlights from my February travels across Europe and exciting updates in Confidential Computing.

From the Executive Director (ED)

February has been Europe-heavy for me, which makes a change (and works for me as I’m based in the UK). There were three different conferences – FOSDEM in Brussels, State of Open Con in London, and the AI Security Summit in Paris. FOSDEM was (as usual!) packed and chaotic, but with the devrooms for Confidential Computing and Attestation both busy, and an extra pre-summit meeting around Attestation (there were just too many talks submitted to fit them all in the official conference), the amount of interest at the developer level is clearly picking up.

At State of Open Con, I presented a pun-heavy 15-minute session on PII (Personally Identifiable Information) and also appeared on a panel about Open Source Security, excellently moderated by Divya Mohan. Sal Kimmich, our outgoing Technical Community Architect, presented a session on Secure Isolation and Trust Boundaries: A Crash Course for Engineers. State of Open Con is now in its third year and continues to be one of the best open source conferences of the year.

The AI Security Summit was held the day before the huge international AI summit in Paris, and was notable for me in that the number of people who had actually heard of Confidential Computing was much higher than I’m used to.  I gave an introduction to remote attestation and why it’s so important, and found myself able to dive deeper into the technical side than I’m used to: with FOSDEM and this, it really feels like the message is getting out there.

The last thing I’d like to do is mention a new Premier member of the CCC: Shielded Technologies joined us this month. We look forward to working with them and with the various General and Associate members who have also recently joined.

Outreach: Job Board Page Now Live! 

We’re thrilled to announce that the CCC Job Board is now live! It features exciting career opportunities for professionals passionate about advancing secure computing technologies, with roles in research, development, and the implementation of cutting-edge confidential computing solutions. 

Check out the available positions and add your job postings to the board and connect with top talent! Visit the Job Board

Upcoming Events: 

From the TAC: Trustworthy Workload Identity 

The ability to identify a workload across the internet with cryptographic certainty is one of the key capabilities of Confidential Computing. However, much of the ecosystem still relies on less secure mechanisms, such as using filenames or other easily spoofable features, to identify code. Identifying workloads with Confidential Computing techniques offers significant benefits, but we still face ease-of-use challenges. A new community effort is emerging to improve both industry standards for Workload Identity and its ease of use. Like our other open source initiatives, these meetings and documents are publicly accessible. If you’d like to get involved, you can find the latest updates on meetings and discussions on the TAC mailing list.

Recent News

  • OC3 2025 Registrations are Open: The Open Confidential Computing Conference registrations are free and already open! Join us on March 27th, either online or on-site in Berlin, to learn all about the latest developments in confidential computing by thought leaders at Microsoft, Arm, NVIDIA and more!
    OC3
  • Intel Announces TEE-IO Support in Latest Xeon 6 Processors: On February 24, Intel launched the latest processors in the Intel Xeon 6 family and announced support for Trusted Execution Environment-IO (TEE-IO).  The Intel Xeon 6 processors with P-cores (formerly code-named “Granite Rapids”) include hardware support for Intel TDX Connect, Intel’s implementation of TEE-IO.  Intel TDX Connect will enhance the performance and flexibility of Confidential Computing use cases that include confidential operations on both the CPU and a PCIe-connected device such as GPU-accelerated confidential AI. Solutions based on Intel TDX Connect will require a capable CPU, an enabled host OS/hypervisor, and a TEE-IO capable device.  Intel is engaged throughout the ecosystem to accelerate enablement of complete solutions.

Image source: https://www.businesswire.com/news/home/20250224348229/en/


🎉 Happy New Year! Welcome to the 2025 January Newsletter


January’s Issue:

  • From the TAC:  Path Forward for Confidential Computing in 2025
  • Welcome 2025 Leadership
  • NEW! Confidential Computing Messaging Guide
  • Confidential Computing Summit Call for Papers
  • Recent News

From the TAC

Confidential Computing in 2024: Growth, Security, and Collaboration Pave the Way for an Exciting 2025

In 2024, working together on the TAC, we did more to make the world secure with Confidential Computing than any of us could have done as individuals or as individual companies.

As an open source organization, seeing our projects grow is nearest and dearest to our hearts. Long-standing projects like Gramine advanced with more and more adoption.

We grew our portfolio by 60% with new projects contributed by Intel, Samsung, SUSE, and TikTok. These new projects span:

  • Fundamental support for AI accelerators to directly enable AI Cleanroom capabilities.

As 2025 kicks off, we already have a new project in the pipeline. Stay tuned for some news on that coming soon.

Our projects, already security focused, improved their security posture by adopting best practices from the Open Source Security Foundation. All CCC projects have completed or initiated the OpenSSF Best Practices Badge. In 2025, we’ll help our projects get even more robust as we assess how Scorecards can identify additional improvements.

Special Interest Groups

We revamped our mentorship program, welcoming a new cohort of mentees who are actively contributing to CCC projects and gaining security expertise—thanks to our dedicated maintainers who generously mentor them.

Our Special Interest Groups (SIGs) made great strides:

  • The Kernel SIG is accelerating Confidential Computing feature upstreaming in the Linux Kernel.
  • The Attestation SIG fosters collaboration on attestation data standards and protocols, with impactful developments expected in 2025.
  • The Governance, Risk, and Compliance (GRC) SIG is driving awareness of critical concepts like Workload Identity, which the TAC will explore further this year.

Curious about Workload Identity? Come join us at a TAC or GRC SIG meeting or follow the discussion on our mailing lists.

Welcome to Our 2025 CCC Leaders

We’re excited to kick off 2025 by introducing our new leadership team:

Governing Board

Chair: Nelly Porter (Google)

Vice-Chair: Emily Fox (Red Hat)

TAC

Chair: Dan Middleton (Intel)

Vice-Chair Yash Mankad (Red Hat)

Outreach Committee

Chair: Rachel Wan (IBM)

Vice-Chair: Mike Ferron-Jones (Intel)

CCC Outreach

Driving Adoption and Engagement: Reflecting on 2024 and the Path Forward for Confidential Computing in 2025

Looking back at 2024, the Outreach Committee launched brand repositioning efforts, completed the Confidential Computing Messaging Guide, and shifted the focus from “What is Confidential Computing” to “Why Confidential Computing Matters.” The committee also proposed creative agency work, updates to the CCC website, and refinements to the logo and mascot. Additionally, we engaged IDC for a market analysis white paper to update market data and expand Confidential Computing coverage. We participated in key industry events, including FOSDEM, OC3, RSAC, CC Summit, and OSFF.

For 2025, our strategy focuses on enhancing CCC’s presence through creative agency work, market analysis, community outreach, events, and educational resources. These initiatives aim to strengthen our mission while increasing engagement and visibility across industries.

We invite the community to contribute by submitting use cases, sharing insights, and participating in upcoming events. Your involvement is vital to shaping the future of Confidential Computing and driving collective progress.

The Confidential Computing Summit Returns for Year 3!

Mark your calendar for June 17-18 in San Francisco as the Confidential Computing Consortium collaborates with Opaque Systems for the third annual Confidential Computing Summit.

Bringing together the brightest minds in confidential computing, secure AI, and privacy-preserving technologies, the Summit will explore the transformative potential of generative AI across industries like finance, healthcare, and manufacturing while learning how to keep sensitive data secure.

Snag the Early Bird rate now and watch last year’s inspiring keynotes.

Call for Speakers: Deadline February 17

Have a real-world use case or breakthrough to share? Submit your session proposal and join the conversation shaping the future of confidential computing and trustworthy AI. Submit here.

Recent News


The SIMI Group Joins the Confidential Computing Consortium to Advance Data Security and Public Health Innovation


The SIMI Group, Inc. (SIMI), a pioneer in health information exchange and analytics services since 1996, continues to push boundaries in public health and healthcare informatics. By addressing critical data gaps across public health agencies, healthcare systems, community organizations, payers, pharmaceutical companies, and researchers, SIMI delivers near real-time situation awareness while prioritizing privacy. Their expertise transforms complex data into actionable insights that drive community health and wellness.

The Confidential Computing Consortium (CCC) is excited to welcome The SIMI Group, Inc. (SIMI) as a startup member. By joining the CCC, SIMI reinforces its commitment to advancing data security and driving the global adoption of trusted execution environments (TEEs). This strategic collaboration with industry leaders like Microsoft and AMD positions SIMI to meet the rigorous privacy, security, and compliance standards of healthcare and public health, while building trust among the public and community partners.

“SIMI is excited to join the CCC and collaborate with Microsoft and AMD,” said Nilesh Dadi, Director of Trusted & Predictive Analytics at SIMI. “This partnership empowers us to support healthcare systems and public health by leveraging trusted execution environments. With this technology, we enable near real-time situation awareness of vaccinations, outbreaks, and medical emergencies in a transparent and privacy-protecting manner.”

SIMI’s leadership in public health innovation stems from firsthand experience with real-world challenges. “SIMI was boots-on-the-ground from the earliest days of the COVID-19 pandemic in the United States,” said Dan Desmond, President & Chief Innovation Officer at SIMI. “The world can no longer rely on faxes, massive group phone calls, and spreadsheets to manage medical and public health emergencies. We’re working with CCC collaborators to build on our progress with the Confidential Consortium Framework, moving toward an accountable and attestable zero-trust future.”

As a CCC member, SIMI is poised to drive the adoption of secure, privacy-first technologies, shaping the future of public health and healthcare informatics through collaboration and innovation.

Confidential Computing Consortium Resources:

MITRE Joins the Confidential Computing Consortium to Advance Cloud Security


We are thrilled to announce that MITRE has joined the Confidential Computing Consortium (CCC), further solidifying its commitment to advancing cybersecurity innovation. As a leader in providing technical expertise to the U.S. government, MITRE’s participation will play a pivotal role in shaping the future of secure cloud computing.

A New Era of Cloud Security

With the growing migration of IT resources to the cloud, securing sensitive data has become more critical than ever. Confidential Computing represents a groundbreaking advancement in cybersecurity by enabling encryption for “data in use” and supporting hardware-bound “enclave attestation.” These capabilities reduce the cyber threat surface, offering unparalleled protection for sensitive data processed in cloud environments.

MITRE’s cybersecurity engineers regularly address the most complex and critical challenges in information systems security as they partner with the Government. By leveraging Confidential Computing, MITRE seeks to enhance cloud security while addressing uncertainties and mitigating potential new risks introduced by emerging technologies.

Through its membership in the CCC, MITRE aims to stay at the forefront of:

  • Understanding Emerging Use Cases: Identifying practical applications of Confidential Computing across industries and government sectors.
  • Evaluating Implementation Methods: Exploring best practices for adopting Confidential Computing standards and technologies.
  • Assessing Value Propositions: Demonstrating the tangible benefits of Confidential Computing for cloud security and operational efficiency.
  • Analyzing Vulnerabilities: Investigating potential risks and threats associated with emerging products, standards, and cloud services.

Driving Collaboration and Innovation

MITRE’s expertise in cybersecurity will contribute significantly to the CCC’s mission of broadening the adoption of Confidential Computing. By collaborating with industry leaders, MITRE will help establish robust standards, develop practical solutions, and ensure secure implementation methods that meet the needs of both Government and private sectors.

As Confidential Computing continues to evolve, MITRE’s involvement will enable greater innovation and confidence in cloud security, benefiting the Government and the broader technology community. Together, we can address the challenges of tomorrow and build a more secure digital landscape.

Confidential Computing Consortium Resources:

Introducing the Messaging Guide for Confidential Computing


Adopting on-demand computing for sensitive and private workloads is critical in today’s interconnected and data-driven world. We need simple, fast, and reliable security mechanisms to protect data, code, and runtimes to achieve this. The Confidential Computing Consortium’s new Messaging Guide is a comprehensive resource that explores how Confidential Computing (CC) addresses these challenges and supports organizations in securing their workloads.

Confidential Computing capabilities protect against unauthorized access and data leaks, enabling organizations to collaborate securely and comply with regulatory requirements. By encrypting data at rest, in transit, and during processing, CC technology allows sensitive workloads to move to the cloud without requiring full trust in cloud providers, including administrators and hypervisors.

This white paper outlines the motivations, use cases, and solutions made possible by Confidential Computing.

Who Should Read This Guide?

This document is tailored to meet the needs of a diverse audience, including:

  • Organization leaders exploring use cases and services that can be enabled by Confidential Computing.
  • Organization leaders considering whether to use Confidential Computing for securing new or existing products, projects, services, and capabilities.
  • Regulators, standards bodies, and ecosystem members in data privacy and related fields.
  • The general public, mainstream media, and publications, to raise general awareness of Confidential Computing and its benefits.

Why This Matters

As the demand for secure data processing grows, Confidential Computing provides a critical solution to meet the challenges of modern cloud environments. Whether enabling secure AI applications, fostering inter-organizational data collaboration, or addressing compliance needs, CC empowers organizations to innovate without compromising security.

We invite you to explore the Messaging Guide and discover how Confidential Computing can transform your approach to secure computing. Together, we can build a future where privacy and security are foundational to all digital workloads.

Read the full report here.

ManaTEE: Transforming Private Data Analytics First Community Release


The Confidential Computing Consortium proudly announces the first community release of ManaTEE, an open-source framework for private data analytics. Originally developed by TikTok as a privacy solution for secure data collaboration, ManaTEE is now open-sourced and part of the Linux Foundation’s Confidential Computing Consortium.

Highlights of the Community Release:

  • Easy Deployment: Test ManaTEE locally with minikube, no cloud account needed.
  • Comprehensive Demo Tutorial: Step-by-step guidance to get started.
  • Extensible Framework: Refactored code with Bazel builds and a CI/CD pipeline, ready for contributions.

What’s Next?

ManaTEE is evolving with plans to:

  • Expand backend support to multi-cloud and on-prem solutions.
  • Integrate privacy-compliant data pipelines.
  • Enhance output privacy protections.
  • Support confidential GPUs for AI workloads.

A Technical Steering Committee will soon guide the project’s future.

Learn More

Explore the potential of ManaTEE and join the community effort. Read the full announcement: First Community Release of ManaTEE.

 

Applied Blockchain Joins the Confidential Computing Consortium as a General Member


We are excited to announce that Applied Blockchain has rejoined the Confidential Computing Consortium (CCC) as a General Member, reinforcing its longstanding commitment to advancing innovation in Confidential Computing and Trusted Execution Environment (TEE) technology. This move aligns with CCC’s mission to enhance trust and privacy in business applications and marks a continued dedication to tackling some of the most pressing challenges in digital privacy.

As one of the few organizations that are members of the Confidential Computing Consortium and the LF Decentralised Trust, Applied Blockchain stands out for its cross-domain expertise in privacy-preserving technology. This dual membership uniquely positions the company to foster collaboration and drive progress across both ecosystems, promoting secure, transparent, and trustworthy solutions for the future of technology.

Applied Blockchain’s renewed involvement comes directly from its groundbreaking work on the Silent Data platform. By integrating TEE technology with blockchain, Silent Data provides a robust solution for privacy-conscious companies.

“We are thrilled to rejoin the Confidential Computing Consortium as a General Member, reinforcing our commitment to advancing Trusted Execution Environment (TEE) technologies. Our continued work on Silent Data demonstrates how we can tackle privacy challenges, and we look forward to collaborating with CCC members to drive innovation, enhance trust, and protect sensitive data.”
— Adi Ben-Ari, Founder & CEO at Applied Blockchain

Applied Blockchain focuses on safeguarding consumer and business data in critical sectors such as banking, energy trading, and supply chains. With its renewed membership, the company is positioned to make significant strides in evolving privacy-enhancing technologies, helping organizations across industries protect sensitive data while driving trust and security in their operations.

We look forward to Applied Blockchain’s continued impact as they collaborate with CCC members and help shape the future of Confidential Computing.

Happy Holidays!🎄 Welcome to the 2024 December Newsletter


December’s Issue:

  1. Adieu, 2024. Outreach Year In Review Quick Snapshot
  2. Executive Director Year In Review
  3. TAC Year In Review
  4. CCC Mentorships are Open!
  5. Community News

Welcome to the December edition of our newsletter – your guide to awesome happenings in our CCC community. We’re excited to continue to connect with you and help drive innovation. Let’s go!

CCC Presence in 2024 & Looking Ahead

The CCC has grown tremendously with lots of activities this year. Thanks to all the CCC community members for their participation and collaboration. We could not do what we do without our members’ involvement. 

The CCC showed up at more than 20 events this year, delivering talks, demos, and networking opportunities. We’ve also published more than 47 blogs, white papers, and tech talks/webinars hosted on our platform. One of the biggest publications was The Case for Confidential Computing white paper. Our social media interaction has increased by more than 93%, marking an impressive milestone for our community.

Awesome job this year!

In the new year, we have many more activities taking shape. Our focus is to double down on impactful engagement with a more targeted approach. Our events will be reduced in quantity but more targeted to industry verticals, driving meaningful engagement. We’re working on engaging analysts for a white paper to assess the Confidential Computing market, and a refreshed branding and messaging guide will be introduced as we kick off the new year. Our Outreach Meetings are open to all; if you’re curious about our engagement or want to get involved, feel free to join us!

Executive Director Update

November was a busy month for the CCC, and we’ve completed a number of important tasks. The first is approval of a budget for 2025, and the second is the election of new chairs and vice-chairs for our various committees.

I’m delighted to welcome:

  • Governing Board
    • Chair: Nelly Porter (Google)
    • Vice-chair: Emily Fox (Red Hat)
    • General member representatives: Manu Fontaine (Hushmesh), Samuel Ortiz (Rivos Inc.), Mark Medum Bundgaard (Partisia)
  • TAC
    • Chair: Dan Middleton (Intel)
    • Co-Chair: Yash Mankad (Red Hat)
  • Outreach
    • Chair: Rachel Wan (IBM)
    • Vice-chair: Mike Ferron-Jones (Intel)

Thank you to everyone who participated in the elections both as candidates and voters.

We also attended, spoke, and exhibited at KubeCon NA. It was great to see a growing number of sessions involving Confidential Computing at the conference and to welcome representatives from various members to staff, share resources, and speak at our booth. The ability to make use of CCC booths at conferences we’re attending is one of the great benefits of membership in the consortium, particularly for smaller companies, and we always welcome representation.

Though things are calming down as December proceeds, there are still activities ongoing.  One of note is a Linux Foundation workshop in Brussels around the new European Union Cyber Resilience Act (CRA).  This is likely to have an impact on members, the CCC, and its projects, and I will be attending to find out more and ensure that we have as much information and input as possible.  Having read the (81-page!) report on the day it was released, I’m planning to produce a summary for members that will help provide a shorter and more readable description of the possible actions we and our members should take as this legislation moves into its implementation phase.

TAC Year In Review

We have for the last couple years organized our work around Projects, Ecosystem, and Community.

Community
Yash Mankad gave us an update on our mentorship program. A big shoutout to Sal for their hard work in facilitating these efforts! Yash also mentioned that for 2025, we aim to expand this program to help keep our project repositories up-to-date.

Fritz Alder gave us a rundown of the Tech Talks coordinated in 2024. The pipeline for 2025 is already growing, and Fritz is committed to organizing more talks, with a focus on academic contributions.

Ecosystem
Alec Fernandez provided insights into our ecosystem work. As security practitioners, we’ve been focusing on security and privacy compliance, standards, and research. One notable improvement is the addition of “data in use” to the Cloud Controls Matrix. 

Mark Novak has led the drafting of a collection of compliance guidelines that we plan to get out early in 2025 as one of our first sets of accomplishments.

Projects
Catherine Zhang updated us on the Linux Kernel SIG’s efforts to facilitate upstreaming CC features into the Linux Kernel. 

Mingshen Sun shared valuable lessons learned from the ManaTEE project. These insights will be instrumental in supporting future projects, particularly in areas like mentorship, hardware, and cloud credits.

We’d also like to celebrate significant progress in OpenSSF compliance across our projects, with COCONUT-SVSM achieving an exceptional 107% compliance score and earning the OpenSSF Passing Badge, SPDM-RS advancing to 97% compliance and nearing badge status, and the Certifier Framework reaching 84% compliance. As we look to 2025, our focus is on increasing compliance across all projects to 90% or higher and standardizing OpenSSF compliance into the onboarding process for new projects, ensuring a consistent commitment to security and excellence.

Mentorship Opportunities Now Open!

NEW! Several CCC projects are now accepting mentorship applications. These mentorships provide hands-on experience in key areas of confidential computing, perfect for developers eager to enhance their skills while contributing to meaningful open source projects.

These mentorships offer an excellent opportunity to develop expertise in confidential computing while contributing to industry-leading projects. We encourage interested participants to apply and join us in shaping the future of confidential computing! Please share these opportunities with your network!

Community News

  • Podcast: TEEs and Confidential Computing: Paving the Way for Onchain AI
  • ACSAC 2024 Cybersecurity Artifact Award: “Rapid Deployment of Confidential Cloud Applications with Gramine”
  • Using trusted execution environments for advertising use cases


Verified Confidential Computing: Bridging Security and Explainability


January 6, 2025

Author: Sal Kimmich

The rapid adoption of AI and data-driven technologies has revolutionized industries, but it has also exposed a critical tension: the need to balance robust security with explainability. Traditionally, these two priorities have been at odds. High-security systems often operate in opaque “black box” environments, while efforts to make AI systems transparent can expose vulnerabilities. 

Verified Computing bridges this gap, reconciling these conflicting needs. It enables organizations to achieve unparalleled data security while maintaining the transparency and accountability required for compliance and trust.

The Core Technologies That Make It Possible

1. Trusted Execution Environments (TEEs)

TEEs are hardware-based secure enclaves that isolate sensitive computations from the rest of the system. They protect data and processes even if the operating system or hypervisor is compromised. Examples include Intel® SGX, Intel® TDX and AMD SEV.

  • How They Work: TEEs operate as secure zones within a processor, where data and code are encrypted and inaccessible to external actors. For example, during a financial transaction, a TEE ensures that sensitive computations like risk assessments are performed without exposure to the broader system.
  • Why They Matter: They protect data “in use,” closing a crucial gap in the data lifecycle that encryption alone cannot address.

2. Remote Attestation

Remote attestation provides cryptographic proof that a TEE is genuine and operating as expected. This ensures trust in the environment, particularly in cloud or collaborative settings.

  • How It Works: A TEE generates an attestation report, including a cryptographic signature tied to the hardware. This report confirms the integrity of the software and hardware running within the enclave (source).
  • Why It Matters: Remote attestation reassures stakeholders that computations occur in a secure and uncompromised environment, a critical requirement in multi-tenant cloud infrastructures.
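
A toy sketch of that verification step (my addition, using the `cryptography` package): a locally generated EC key stands in for the hardware-bound attestation key, and the JSON report is illustrative. Real reports are verified against the vendor's certificate chain rather than a key generated on the spot.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# A locally generated key stands in for the hardware-bound attestation key (toy only).
hardware_key = ec.generate_private_key(ec.SECP384R1())
attestation_pub = hardware_key.public_key()

# An illustrative "report": real formats are defined by the TEE vendor.
report = b'{"measurement": "abc123", "tcb_status": "up_to_date"}'
signature = hardware_key.sign(report, ec.ECDSA(hashes.SHA384()))

def verify_report(report_bytes: bytes, sig: bytes) -> bool:
    """Return True only if the report is intact and signed by the attestation key."""
    try:
        attestation_pub.verify(sig, report_bytes, ec.ECDSA(hashes.SHA384()))
        return True
    except InvalidSignature:
        return False

print("genuine report: ", verify_report(report, signature))
print("tampered report:", verify_report(report.replace(b"up_to_date", b"out_of_date"), signature))
```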

3. Confidential Virtual Machines (VMs)

Confidential VMs extend TEE principles to virtualized environments, making secure computing scalable for complex workloads. Technologies like Intel® TDX allow organizations to isolate entire virtual machines.

  • How They Work: Confidential VMs use memory encryption to ensure that data remains secure during processing. Encryption keys are hardware-managed, inaccessible to the hypervisor or OS (source).
  • Why They Matter: They enable secure data processing in public clouds, even in shared infrastructures.

4. Verified Compute Frameworks

Verified Compute frameworks build on TEEs by introducing mechanisms for generating immutable logs and cryptographic proofs of computations. An example is EQTY Lab’s Verifiable Compute.

  • How They Work: These frameworks capture the details of computations (inputs, outputs, and environment integrity) in tamper-proof logs. These logs are cryptographically verifiable, ensuring transparency without compromising confidentiality.
  • Why They Matter: They allow organizations to meet regulatory requirements and provide explainable AI outputs while safeguarding proprietary algorithms and sensitive data.
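
The tamper-proof log described in the bullets above can be illustrated with a toy hash-chained log in Python; real verifiable-compute frameworks add signatures and attestation evidence on top, but the property that any later edit breaks the chain is the same.

```python
import hashlib
import json
import time

# Toy append-only, hash-chained computation log: each entry commits to the one before it,
# so editing any earlier entry invalidates every hash that follows.

def append_entry(log: list[dict], inputs: str, outputs: str, environment: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "ts": time.time(),
        "inputs": inputs,             # e.g. hash of the input dataset
        "outputs": outputs,           # e.g. hash of the computed result
        "environment": environment,   # e.g. TEE measurement from the attestation report
        "prev": prev_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify_chain(log: list[dict]) -> bool:
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "sha256:input1", "sha256:result1", "measurement:abc")
append_entry(log, "sha256:input2", "sha256:result2", "measurement:abc")
print("intact:", verify_chain(log))
log[0]["outputs"] = "sha256:forged"       # tamper with an earlier entry
print("after tampering:", verify_chain(log))
```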

5. Homomorphic Encryption and Secure Multi-Party Computation (SMPC)

In cases where external collaboration or ultra-sensitive data handling is needed, additional cryptographic techniques enhance confidentiality.

  • Homomorphic Encryption: Enables computations on encrypted data without decryption.
  • SMPC: Distributes computations across multiple parties, ensuring that no single party has access to the complete dataset.
  • Why They Matter: These techniques complement TEEs by enabling secure collaboration across untrusted parties.
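
As a toy illustration of the SMPC idea (my addition, with made-up numbers), additive secret sharing lets three parties learn a joint total without any single party seeing another's private value:

```python
import secrets

# Toy additive secret sharing: split each private value into random shares so that
# no single party learns anything, yet the parties can still compute the sum.
PRIME = 2**127 - 1   # all arithmetic is done modulo a public prime

def share(value: int, n_parties: int) -> list[int]:
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)   # shares sum to the value mod PRIME
    return shares

# Three hospitals each hold a private patient count (illustrative numbers).
private_values = [1200, 850, 430]
all_shares = [share(v, 3) for v in private_values]

# Party i receives one share of every hospital's value and adds them locally.
partial_sums = [sum(all_shares[h][i] for h in range(3)) % PRIME for i in range(3)]

# Only the recombined result is revealed; the individual counts never were.
print("joint total:", sum(partial_sums) % PRIME)   # -> 2480
```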

How Verified Computing Bridges Security and Explainability

Achieving Transparency Without Sacrificing Security

Traditionally, efforts to make AI systems explainable required exposing internal processes or sharing sensitive data—practices that risked data breaches or model theft. Verified confidential computing changes the game by:

  • Allowing computations to occur in TEEs or confidential VMs, ensuring data is secure at all times.
  • Using verified compute frameworks to provide cryptographic evidence of computation integrity, allowing external parties to trust results without accessing sensitive details.

For example, a healthcare provider running an AI diagnostic tool can securely process patient data in a TEE. The AI’s decisions can be explained to regulators or patients using cryptographic proofs, without exposing proprietary algorithms or patient information.

Supporting Regulatory Compliance

Regulations like the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) demand both robust security and transparent handling of sensitive data. Verified confidential computing offers a solution by generating immutable logs and proofs that demonstrate compliance. This reduces audit complexity and ensures adherence to privacy laws.

Building Trust in AI Systems

As AI plays a growing role in critical sectors, trust is paramount. Verified computing ensures that stakeholders can verify:

  1. The Security of the Data: Through TEEs and confidential VMs.
  2. The Integrity of Computations: Via cryptographic attestation and verifiable compute frameworks.
  3. The Explainability of Results: Through transparent logging and auditable records.

For instance, financial institutions can use verified computing to process loan applications, providing regulators with evidence of fairness and transparency without compromising customer data security.

Verified Compute

Verified Computing is more than a technological advancement—it is a paradigm shift. By integrating technologies like TEEs, remote attestation, confidential VMs, and verifiable compute frameworks, it resolves the long-standing tension between security and explainability. Organizations can now protect sensitive data, ensure compliance, and provide transparent, trustworthy AI systems.

As industries adopt this approach, verified computing will become the gold standard for secure and accountable digital transformation. Bridging these historically conflicting priorities paves the way for a future where trust in AI is not just an aspiration, but a guarantee. For more insights and resources, visit the Confidential Computing Consortium.