Building Trust Among the Untrusting: How Super Protocol Redefines AI Collaboration  


What if you could collaborate on AI projects, run complex models, fine-tune them, and even monetize both your models and data – all while retaining full control and ensuring confidentiality? It might sound impossible, especially when involving multiple participants you don’t need to trust – or even know.

In his article, “Web3 plus Confidential Computing,” Mike Bursell, Executive Director of the Confidential Computing Consortium, delves into this challenge: “It turns out that allowing the creation of trust relationships between mutually un-trusting parties is extremely complex, but one way that this can be done is what we will now address.”

Mike explores the synergy of Confidential Computing, blockchain, and smart contracts, showcasing Super Protocol as a real-world implementation of this vision. He explains: “Central to Super Protocol’s approach are two aspects: that it is open source, and that remote attestation is required to allow the client to have sufficient assurance of the system’s security. Smart contracts – themselves open source – enable resources from various actors to be combined into an offer placed on the blockchain, ready for execution by anyone with access and sufficient resources. What makes this approach Web3 is that none of these actors needs to be connected contractually.”

This approach enables “the network effect… building huge numbers of interconnected Web3 agents and applications, operating with the benefits of integrity and confidentiality offered by Confidential Computing, and backed up by remote attestation.” Unlike Web2 ecosystems, “often criticized for their fragility and lack of flexibility (not to mention the problems of securing them in the first place),” here is an opportunity to create “complex, flexible, and robust ecosystems where decentralized agents and applications can collaborate, with privacy controls designed in and clearly defined security assurances and policies.”

As Mike aptly puts it: “Technologies, when combined, sometimes yield fascinating – and commercially exciting – results.”

Explore the full article to dive into the basics, the synergy of these technologies, and the technical details of how Super Protocol is turning this vision into reality.

Read the Full Article

Guide to Confidential Computing Sessions at KubeCon + CloudNativeCon North America, Salt Lake City 2024


Ready to explore the forefront of Confidential Computing (CC) at KubeCon Salt Lake City? This guide highlights the key sessions and demos to get the most out of the KubeCon Schedule, from hands-on workshops and insightful talks to live demos at the Confidential Computing Consortium (CCC) booth. Here’s your roadmap to navigating CC at KubeCon:

Must-Attend Confidential Computing Sessions at KubeCon Salt Lake City

1. Confidential Containers 101: A Hands-On Workshop

  • When: Wednesday, 14:30 – 16:00
  • Where: Level 1, Grand Ballroom G
  • Presented by: Microsoft
    This in-depth workshop provides an introduction to Confidential Containers, with practical insights into container security and data privacy. Participants will learn best practices for deploying applications with Confidential Computing to address privacy and security in multi-tenant environments. Expect hands-on experience that is perfect for practitioners interested in integrating CC into their Kubernetes workloads.

2. From Silicon to Service: Ensuring Confidentiality in Serverless GPU Cloud Functions

  • When: Thursday, 11:00 – 11:35
  • Where: Level 1, Room 151 G
  • Presented by: NVIDIA
    Join NVIDIA’s session to discover how Confidential Computing powers secure serverless GPU cloud functions, ideal for supporting AI and machine learning operations with sensitive data. This talk will walk you through securing data from the silicon level up to cloud services, offering insights on GPU-optimized applications that maintain data confidentiality in the cloud. NVIDIA’s approach is essential for anyone interested in GPU-based Confidential Computing and scalable cloud AI functions.

3. Privacy in the Age of Big Compute

  • When: Friday, 16:00 – 16:35
  • Where: Level 1, Grand Ballroom A
  • Presented by: Confidential Computing Consortium
    Led by the CCC, this session dives into privacy management across massive compute environments, essential for industries with stringent data protection needs. Attendees will gain a perspective on the evolving landscape of privacy within cloud-native and confidential workloads, from regulatory challenges to innovative privacy solutions. This session is key for those looking to understand how Confidential Computing fits into large-scale compute architectures.

4. Confidential Compute Use Cases Mini Session

  • When: Wednesday, 18:00 – 18:30; Thursday, 14:30 – 16:30
  • Where: CCC Booth Q25
  • Presented by: Red Hat
    Red Hat’s mini-session offers a glimpse into real-world applications of Confidential Computing. Using case studies and practical examples, this session will highlight how organizations leverage CC for secure, private compute solutions. Perfect for those curious about real-world implementations, it’s a great chance to see how CC meets industry privacy and compliance needs.

5. Confidential Collaborative AI

  • When: Wednesday, 16:00 – 16:30
  • Where: CCC Booth Q25
  • Presented by: Ultraviolet
    This session explores how Confidential Computing enables secure, collaborative AI model sharing while safeguarding sensitive data. Ultraviolet will discuss how CC facilitates multi-organization AI collaboration without sacrificing data privacy. Attendees interested in secure, cross-partner AI projects will gain insight into CC’s applications in collaborative ML environments.

6. Protecting LLMs with Confidential Computing

  • When: Thursday, 16:30 – 17:00
  • Where: CCC Booth Q25
  • Presented by: Ultraviolet
    Ultraviolet’s talk addresses the growing need for securing large language models (LLMs) with Confidential Computing. As LLMs handle more sensitive data, securing these models from unauthorized access becomes crucial. This session is ideal for those working with AI models in regulated industries, providing strategies to ensure data protection without compromising model functionality.

CCC Booth Q25: Live Demos and Networking Opportunities

Stop by the Confidential Computing Consortium Booth Q25 for demos, mini-sessions, and networking opportunities with industry leaders. Here are some key events:

Remote Attestation with Veraison: Live Demo

  • When: Wednesday and Thursday, 10:45 – 12:45
  • Presented by: Linaro
    This live demo from Linaro showcases Veraison’s remote attestation capabilities, an essential process for verifying workload integrity within Confidential Computing environments. Attendees will witness how Veraison’s open-source solution enhances trust in CC workloads, making this a must-see demo for anyone focused on workload security.

Don’t Miss: CCC Power User Bingo Card

Get your CCC Power User Bingo Card at the CCC booth and complete activities as you participate in sessions and demos. Play along during KubeCrawl and become an expert in securing data in use through Confidential Computing!

Decentralized Data Governance in Multi-Cloud Environments with Confidential Computing


Author: Sal Kimmich

Introduction:

As enterprises increasingly adopt multi-cloud architectures, managing data governance across distributed systems has become more complex. With data privacy regulations like GDPR and CCPA requiring organizations to maintain strict control over sensitive information, ensuring compliance while leveraging the flexibility of multi-cloud systems presents a significant challenge.

Enter Confidential Computing: by using trusted execution environments (TEEs) and remote attestation across cloud platforms, organizations can ensure that sensitive data is processed in a secure and compliant manner. This blog will explore how decentralized data governance can be achieved in multi-cloud environments using confidential computing technologies.

Why Is Confidential Computing Essential for Multi-Cloud Data Security?

In a multi-cloud setup, organizations often distribute workloads across multiple cloud providers to meet their operational needs. However, this also increases the potential attack surface, as data flows through various infrastructures. Ensuring that data remains secure and compliant with regulations across these disparate environments is critical.

Confidential computing provides a solution by ensuring that sensitive data is processed in secure enclaves within TEEs, which isolate the data from unauthorized access. Using remote attestation, these TEEs can be verified, ensuring that the code executing within the enclave is trustworthy.
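As a mental model of that verification step, here is a minimal, self-contained sketch of the measure-sign-verify flow (all names are hypothetical, and an HMAC stands in for the hardware-rooted signature a real TEE would produce; this is an illustration of the idea, not any vendor's attestation API):

```python
import hashlib
import hmac
import os

# Stand-in for the attestation key fused into the hardware (hypothetical).
HW_KEY = b"hardware-rooted-signing-key"
# Allow-list of measurements the relying party trusts.
TRUSTED_MEASUREMENTS = {hashlib.sha256(b"approved-enclave-code-v1.2").hexdigest()}

def quote(enclave_code: bytes, nonce: bytes) -> dict:
    """What the TEE does: measure the loaded code and sign the
    measurement together with the verifier's freshness nonce."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    sig = hmac.new(HW_KEY, measurement.encode() + nonce, hashlib.sha256).hexdigest()
    return {"measurement": measurement, "nonce": nonce, "sig": sig}

def verify(evidence: dict, nonce: bytes) -> bool:
    """What the relying party does: check the signature, the nonce,
    and that the measurement is on the allow-list, before sending data."""
    expected = hmac.new(HW_KEY, evidence["measurement"].encode() + nonce,
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, evidence["sig"])
            and evidence["nonce"] == nonce
            and evidence["measurement"] in TRUSTED_MEASUREMENTS)

nonce = os.urandom(16)
assert verify(quote(b"approved-enclave-code-v1.2", nonce), nonce)
assert not verify(quote(b"tampered-code", nonce), nonce)
```

Real attestation schemes add certificate chains back to the silicon vendor and richer claims (security version numbers, debug flags), but the shape is the same: no secrets flow until the evidence checks out.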

This ability to isolate and verify processing environments makes confidential computing essential for ensuring data security and governance across multi-cloud deployments.

What Is Decentralized Data Governance and Why Does It Matter in the Cloud?

Decentralized data governance refers to the practice of managing data policies, access controls, and compliance requirements across multiple locations or platforms without relying on a single centralized authority. In a multi-cloud environment, this is particularly challenging, as each cloud provider may have different security standards, policies, and regulatory requirements.

By decentralizing data governance, organizations can ensure that each cloud provider adheres to specific security and compliance rules. Confidential computing enables this by allowing organizations to enforce strict access controls and data policies at the TEE level, ensuring that data governance is maintained consistently, regardless of where the data is processed.

This approach to governance is crucial for businesses that need to operate in multiple jurisdictions or across cloud infrastructures, ensuring that they meet all relevant regulatory requirements.

How Open Enclave SDK Powers Secure Data Governance in Multi-Cloud Environments

One of the key tools that enables secure data governance in a multi-cloud environment is the Open Enclave SDK. Developed under the Confidential Computing Consortium, the Open Enclave SDK provides a consistent abstraction for creating TEEs across different platforms, including Azure, AWS, and Google Cloud.

By using the Open Enclave SDK, developers can build applications that securely process data in TEEs across multiple cloud environments without having to rewrite code for each cloud provider. This ensures that data remains secure and compliant with governance policies, regardless of the cloud infrastructure being used.

Additionally, the Open Enclave SDK supports remote attestation, allowing organizations to verify that data is being processed in trusted environments across all cloud platforms.

How Remote Attestation Ensures Compliance Across Multi-Cloud Systems

As organizations move workloads across different cloud providers, ensuring that each platform complies with relevant data privacy laws is a key concern. Remote attestation provides a mechanism to verify the security and integrity of TEEs, ensuring that sensitive data is processed only within approved environments.

In the context of GDPR, for example, remote attestation can help ensure that personal data is processed only within TEEs that meet the necessary security and privacy requirements. This ability to verify compliance on the fly allows businesses to confidently use multi-cloud infrastructures while maintaining adherence to data protection regulations.
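That "verify, then process" pattern amounts to a policy gate in front of the data. A toy sketch, with entirely illustrative claim names and policy values (not any real regulator's checklist or attestation-service API):

```python
# Hypothetical deployment policy: which attested claims an environment
# must present before personal data is released to it.
POLICY = {
    "tee_type": {"SEV-SNP", "TDX"},          # accepted TEE technologies
    "debug_disabled": True,                   # no debug-mode enclaves
    "region": {"eu-west-1", "eu-central-1"},  # EU-only processing
}

def meets_policy(claims: dict) -> bool:
    """Evaluate attested claims (already verified cryptographically)
    against the deployment's data-governance policy."""
    return (claims.get("tee_type") in POLICY["tee_type"]
            and claims.get("debug_disabled") is POLICY["debug_disabled"]
            and claims.get("region") in POLICY["region"])

def process_personal_data(records: list, claims: dict) -> int:
    """Refuse to touch personal data unless the target environment's
    attestation result satisfies the policy."""
    if not meets_policy(claims):
        raise PermissionError("attestation claims do not satisfy data policy")
    return len(records)  # placeholder for the real workload

ok = {"tee_type": "TDX", "debug_disabled": True, "region": "eu-west-1"}
assert process_personal_data(["r1", "r2"], ok) == 2

bad = {"tee_type": "TDX", "debug_disabled": False, "region": "us-east-1"}
try:
    process_personal_data(["r1"], bad)
except PermissionError:
    pass  # data never leaves: the environment failed the gate
```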

Remote attestation helps organizations remain agile in the cloud while still upholding strict data sovereignty requirements, ensuring compliance with the CCPA, GDPR, and other global regulations.

Case Study: Confidential Computing in Real-World Data Sovereignty Challenges

A real-world example of decentralized data governance using confidential computing is the case of Italy’s Sovereign Private Cloud initiative. Italy’s government aimed to ensure that critical public sector workloads were processed within secure and private environments, adhering to the country’s strict data sovereignty laws.

By adopting confidential computing and remote attestation, Italy’s sovereign cloud enabled secure processing of sensitive public data across distributed environments. This approach ensured that even when data was processed outside of government infrastructure, it was handled securely in trusted execution environments, and compliance with Italian data protection laws was maintained.

To dive deeper into this solution, you can watch the session titled Sovereign Private Cloud: A Confidential Computing Solution for the Italian Public Administration from the Confidential Computing Summit 2024, where the implementation of the Sovereign Cloud is discussed in detail. The recording is available here.

This use case highlights how confidential computing can help address data sovereignty concerns, enabling organizations to operate securely across multiple cloud infrastructures without compromising compliance.

Achieving Decentralized Data Governance with Confidential Computing

As organizations continue to embrace multi-cloud strategies, managing data governance across distributed environments becomes more complex. Confidential computing offers a powerful solution by securing data in trusted execution environments and enabling remote attestation to verify compliance.

By leveraging tools like the Open Enclave SDK, businesses can maintain control over their data policies and ensure that sensitive information is processed in secure, compliant environments across all cloud platforms. As data sovereignty concerns grow, particularly in industries like healthcare and finance, confidential computing will play an increasingly important role in ensuring data governance and regulatory compliance across the multi-cloud landscape.


October Newsletter


October Recap: Highlights include KubeCon + CloudNativeCon NA, new CCC project tech talks, and top community blog posts.

In this month’s issue:

  1. Executive Director October Recap
  2. KubeCon & CloudNativeCon NA
  3. Tech Talks + New CCC Project
  4. Community Blog Highlights

Executive Director Update

October/November is voting time at the Confidential Computing Consortium, so if you are a member of the consortium, we welcome your application to stand as chair or vice chair of any of our three committees: Governing Board, Technical Advisory Committee, and Outreach Committee. It is with sadness that we say goodbye to Ron Perez, who has served as Chair of the Governing Board with great wisdom and patience, providing his experience to all and sundry. We wish him well and thank him for his work with the Consortium: I personally have benefited immensely from his counsel and advice during his tenure.

The CCC also appeared at OSS Japan again this year. Mark Medum Bundgaard of Partisia and I hosted a Birds of a Feather session on Privacy-Enhancing Technologies (PETs) and presented a session on Confidential Computing for AI, Multi-Party Collaboration and Web3: as always, I’m very happy to share my slides and discuss with anybody with an interest. Next month a number of members will be in Salt Lake City for KubeCon North America – if you can make it, we’d love to see you there.

Meet Us at KubeCon NA

Come Join Us For Some Fun!!

Stop by the CCC Booth (Q25) for various activities throughout the event.

We have prepared:

  • Privacy Jeopardy during KubeCrawl
  • CC Scavenger Hunt
  • Mini Sessions
  • Demos
  • Fun Swags

You can use our 20% discount code to register: KCNA24TYKAN20

Can’t wait to see you there!!

Register Here

Tech Talks

Our Tech Talk series continued strong with a presentation from Caroline Perez-Vargas on Microsoft’s new OpenHCL project. Since Caroline’s talk the project has been made available on GitHub with an open source license. There’s a natural next step for this project but I just can’t put my finger on it. 😉   Oh well, we’ll just have to see what they have in mind to expand the contributor base with Confidential Computing subject matter experts.

We also heard from Chandra Nelogal on Extending Confidentiality to Data Storage. Chandra introduced us to the intersections between Confidential Computing and Self-Encrypting Drives. As Confidential Computing-capable devices enter the market, some of us have focused on accelerators, but storage devices are an interesting and important category. We look forward to Chandra returning to take the conversation further.

TAC Tech Talk playlist 

CCC Welcomes New Open Source Project

We are excited to announce the addition of a new project to the Confidential Computing Consortium (CCC) portfolio: ManaTEE. This innovative platform creates secure data clean rooms, enabling privacy-compliant collaboration for industries like healthcare and finance. ManaTEE supports tools such as Jupyter Notebooks, providing a flexible environment for secure multi-party research and analysis.

Learn about ManaTEE Here

Let’s grow our community!
Share this with your network.

Confidential Computing Consortium Welcomes ManaTEE as a New Open-Source Project


The Confidential Computing Consortium is delighted to announce ManaTEE, a new open-source project designed to enable secure data collaboration without compromising the privacy of individual data. Published by TikTok in June 2024 as part of its ongoing privacy innovation efforts, ManaTEE began as a core TikTok use case. Now part of the Confidential Computing Consortium, it addresses the growing challenge of balancing privacy, usability, and accuracy in enterprise data collaboration.

The Challenge of Data Collaboration

While data collaboration is essential, designing and building a secure framework involves significant effort and numerous caveats. Existing solutions, such as differential privacy or commercial data clean rooms, often fail to provide a balance between privacy, accuracy, and usability, particularly when handling large-scale data.

Introducing ManaTEE: A Two-Stage Data Clean Room

ManaTEE introduces a two-stage data clean room model to provide an interactive interface for exploring data while protecting private data during processing. It combines different privacy-enhancing technologies (PETs) across two stages:

  • Programming Stage: Data consumers explore datasets using low-risk data, employing different PETs such as pseudonymization or differentially private synthetic data generation.
  • Secure Execution Stage: Workloads run in a trusted execution environment (TEE), which provides attestable integrity and confidentiality guarantees for the workload in the cloud.
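The two-stage flow can be illustrated with a toy sketch (this is not ManaTEE's actual API; the function names, the salted-hash pseudonymization, and the dataset are invented for illustration):

```python
import hashlib

def pseudonymize(records: list, salt: str) -> list:
    """Programming stage: replace direct identifiers with salted hashes so
    data consumers can develop and debug analyses against low-risk data."""
    return [
        {**r, "user_id": hashlib.sha256((salt + r["user_id"]).encode()).hexdigest()[:12]}
        for r in records
    ]

def average_age(records: list) -> float:
    """The analysis authored interactively in the programming stage."""
    return sum(r["age"] for r in records) / len(records)

real = [{"user_id": "alice", "age": 34}, {"user_id": "bob", "age": 28}]
safe = pseudonymize(real, salt="per-deployment-secret")

# Stage 1: the analyst only ever sees pseudonymized identifiers.
assert all(s["user_id"] != r["user_id"] for s, r in zip(safe, real))
dev_result = average_age(safe)

# Stage 2: the *same* vetted analysis runs on the real data inside a TEE,
# so accuracy is preserved without exposing the raw identifiers.
tee_result = average_age(real)
assert dev_result == tee_result == 31.0
```

The point of the design shows up in the last assertion: because identifiers are transformed but analytic fields are not, the result computed in the secure execution stage matches what the analyst saw during development.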

Key Benefits of ManaTEE

  • Cloud-Ready: ManaTEE can be easily deployed to existing cloud TEE backends such as Google Confidential Space. We plan to support other backends as well, eliminating the need to build the entire TEE infrastructure to set up the framework.
  • Flexible PET: Data providers can control the protection mechanisms at each stage to tailor to specific privacy requirements of the data.
  • Trusted Execution Environment: By leveraging TEEs, ManaTEE ensures a high level of confidence in data confidentiality and program integrity for both data providers and data consumers.
  • Accuracy and Utility: ManaTEE employs a two-stage design to ensure that result accuracy is not compromised for the sake of privacy.

Features of ManaTEE’s Data Clean Room

  • Interactive Programming: Integrated with Jupyter Notebook, allowing data consumers to work with Python and other popular languages.
  • Multiparty Collaboration: Enables collaboration with multiple data providers.
  • Flexibility: Adaptable to specific enterprise needs.

ManaTEE Use Cases

  • Trusted Research Environments (TREs): Secure data analysis for public health, economics, and more, while maintaining data privacy.
  • Advertising & Marketing: Lookalike segment analysis and private ad tracking without compromising user data.
  • Machine Learning: Enables private model training without exposing sensitive data or algorithms.

Open Collaboration and Governance

ManaTEE encourages open collaboration within its growing community. Currently led by its founding developers at TikTok, the project plans to expand its leadership through a Technical Steering Committee (TSC), with future milestones and growth plans discussed publicly and governed transparently.

The ManaTEE project welcomes anyone interested in confidential computing and private data collaboration to participate and contribute.

Conclusion

ManaTEE is a significant step forward in secure data collaboration, balancing privacy, usability, and accuracy. Organizations can safely collaborate on sensitive datasets by leveraging TEEs and a two-stage clean room approach.

To learn more, visit the Confidential Computing Consortium or explore ManaTEE on GitHub.

SETIT Solutions Joins the Confidential Computing Consortium as a Startup Tier Member


The Confidential Computing Consortium (CCC) is pleased to welcome SETIT Solutions as a Startup Tier member. SETIT Solutions is committed to advancing global standards in secure computing and will play a key role in supporting the Consortium’s mission.

Confidential Computing is essential for secure data management in today’s digital landscape. With the growing demand for robust data protection mechanisms, SETIT Solutions’ expertise and innovation will contribute to the Consortium’s efforts to develop technologies that protect sensitive information, even during processing.

SETIT Solutions shares the CCC’s vision of community-driven innovation. By joining the Consortium, they become part of a global network advancing confidential computing across industries, fostering greater trust in data protection.

Aligned with the CCC’s goal of securing data through open frameworks, SETIT Solutions will promote the broad adoption of confidential computing, helping organizations of all sizes confidently manage sensitive data.

As a startup specializing in transformative technologies, SETIT Solutions brings fresh perspectives and is committed to setting new standards for securing data across cloud, edge, and on-premises environments. They see confidential computing as key to driving innovation in AI, multi-party collaboration, and data privacy.

SETIT Solutions’ participation will support the Consortium’s efforts to enhance data security and trust across industries. Together with other CCC members, SETIT Solutions will help shape the future of Confidential Computing, contributing to a more secure and trustworthy digital world.

Stay tuned for updates on SETIT Solutions’ contributions to the CCC and their work within the broader confidential computing community.

Confidential Computing Consortium Resources

What Is Remote Attestation? Enhancing Data Governance with Confidential Computing


Author:  Sal Kimmich

Introduction

Imagine you’re working for a large healthcare provider. You have patient data that needs to be processed in the cloud, but you also want to make sure that this data isn’t accessed or tampered with by anyone, including the cloud provider itself. How can you trust that the cloud server is secure before sending sensitive information to it? That’s where remote attestation comes in. It’s like a virtual “security checkpoint” that ensures the environment where your data will be processed is trustworthy.

Now imagine you’re managing thousands of IoT devices in a smart city, such as street lights or traffic sensors, which are constantly sending data back to central systems. You need to know that these devices haven’t been compromised by hackers. Remote attestation (specifically, RATtestation) helps verify that these devices are secure and haven’t been tampered with, ensuring reliable and secure communication.

As organizations increasingly adopt cloud and distributed systems, securing sensitive data has become more critical than ever. Remote attestation, a core component of Confidential Computing, verifies the integrity of a data processing environment before sensitive workloads run there, ensuring that those workloads execute securely within Trusted Execution Environments (TEEs). This builds trust across multi-cloud environments – and the same need extends to the rapidly growing Internet of Things (IoT), where secure real-time operations are crucial.

Remote Attestation in Cloud and IoT: The Key to Secure Data Processing

Remote attestation operates differently in cloud and IoT environments, but its core function remains the same: verifying that a piece of code or application is running inside a secure trusted execution environment (TEE).

In cloud computing, remote attestation assures that sensitive workloads, such as financial transactions or healthcare data, are processed securely within TEEs. In IoT, where devices operate in often uncontrolled environments, remote attestation ensures that each device remains trustworthy and untampered with, allowing it to communicate securely with cloud services or other devices.

Confidential Computing for Cloud: In cloud computing, multi-cloud architectures require trust across several infrastructures. Remote attestation ensures that sensitive workloads run in verified TEEs, providing a secure way to meet strict compliance requirements such as GDPR and HIPAA.

Confidential Computing for IoT: Real-Time Security: In IoT environments, remote attestation ensures the continuous integrity of distributed devices. For example, connected medical devices or autonomous vehicles must maintain their trustworthiness during real-time operations. Remote attestation allows organizations to verify these devices dynamically, preventing compromised systems from accessing sensitive networks.

Several CCC projects actively contribute to remote attestation in cloud and IoT:

  • Gramine: Primarily focused on Intel® SGX, Gramine supports secure workload execution across multi-cloud infrastructures, providing compatibility for legacy applications that require trusted execution environments.
  • Veraison: This flexible framework verifies attestation evidence from TEEs across multiple architectures, validating the integrity of both cloud and IoT devices.
  • Keylime: Particularly useful in IoT environments, Keylime offers remote boot attestation and real-time integrity monitoring, ensuring that IoT devices maintain a secure status during operations.
  • SPDM Tools: Developed to secure TEE-I/O in both cloud and IoT, SPDM Tools verify that communications between devices remain secure within trusted execution environments.
  • Open Enclave SDK: This project abstracts hardware differences and provides a unified API for building secure enclave applications, supporting both cloud-based and IoT use cases.

For more information on all of these projects, see the links below and visit our CCC Project Portfolio

How Remote Attestation Ensures Compliance with Global Data Privacy Laws

In industries governed by stringent data privacy laws such as GDPR (General Data Protection Regulation) in Europe and HIPAA (Health Insurance Portability and Accountability Act) in the US, compliance is a top priority. Remote attestation plays a pivotal role in ensuring that sensitive data is processed securely, in compliance with global privacy regulations.

  1. GDPR Compliance: Remote attestation ensures that personal data is processed in verified, secure TEEs, preventing unauthorized access or tampering. This is particularly critical for organizations in Europe, where GDPR mandates stringent data protection and privacy standards. The ability to verify the integrity of the cloud infrastructure before processing data allows organizations to prove compliance during audits.
  2. HIPAA Compliance: In the healthcare sector, remote attestation is essential for ensuring that sensitive patient data is processed securely in environments that comply with HIPAA. By confirming the integrity of the TEE, healthcare providers can securely manage electronic health records (EHRs), ensuring that patient data remains protected during transmission and processing.

Remote attestation provides organizations with the assurance that sensitive data is handled within secure environments that comply with privacy laws. As multi-cloud and IoT networks grow, ensuring compliance with these laws through verified environments will become even more critical.

Real-Time Trust in IoT: The Importance of Continuous Attestation

The challenge in IoT environments lies in ensuring that every device continuously adheres to security standards. For example, Keylime enables real-time integrity monitoring, ensuring that compromised IoT devices can be detected and isolated immediately. This is especially crucial in industries like healthcare, where real-time decision-making is directly influenced by the security status of devices.
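Keylime's real protocol involves TPM quotes and IMA measurement logs; as a highly simplified model of the continuous-monitoring idea it describes (all names and values invented for illustration):

```python
import hashlib

# Enrolled "golden" measurement: the hash of the device state the
# operator approved at provisioning time (illustrative value).
GOLDEN = hashlib.sha256(b"firmware-v7 + config-v3").hexdigest()

def attest_once(device_state: bytes) -> bool:
    """One monitoring cycle: re-measure the device and compare
    against the enrolled golden value."""
    return hashlib.sha256(device_state).hexdigest() == GOLDEN

def monitor(states: list) -> tuple:
    """Poll the device each cycle; revoke it the moment its
    measurement drifts, so a compromised node is isolated early."""
    for tick, state in enumerate(states):
        if not attest_once(state):
            return ("revoked", tick)
    return ("trusted", len(states))

# A healthy device stays trusted across cycles...
assert monitor([b"firmware-v7 + config-v3"] * 3) == ("trusted", 3)
# ...but is flagged on the first cycle where its state changes.
assert monitor([b"firmware-v7 + config-v3",
                b"firmware-v7 + implant"]) == ("revoked", 1)
```

The essential property is the loop: attestation is not a one-time admission check but a recurring comparison against a known-good baseline, which is what makes immediate detection and isolation possible.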

The Future of Remote Attestation: From Cloud to Edge

Remote attestation is evolving to meet the demands of both cloud and IoT environments. As organizations adopt more complex multi-cloud infrastructures and IoT networks, the role of remote attestation will expand. Post-quantum cryptography and enhanced security measures such as multi-party attestation will improve the scalability of remote attestation in the future, making it more robust against emerging threats.

Conclusion: Building Trust with Remote Attestation

Remote attestation is a crucial tool for building trust in both cloud and IoT environments. Whether securing sensitive workloads in multi-cloud infrastructures or maintaining the integrity of millions of IoT devices, remote attestation ensures that data is processed in trusted, verified environments. CCC open-source projects such as Veraison, Gramine, Keylime, and SPDM Tools are leading the way in making remote attestation scalable and secure. As Confidential Computing continues to evolve, remote attestation will remain a cornerstone for ensuring security and trust across distributed systems.

Origin and Motivations Behind RATtestation

The RATtestation documentation emerged from the need to standardize remote attestation protocols across diverse Confidential Computing environments. It addresses the challenge of securely verifying the integrity of systems in distributed and multi-cloud architectures. RATtestation defines best practices for trust establishment, data protection, and secure communication, ensuring the integrity of Trusted Execution Environments (TEEs), and emphasizes the role of remote attestation in enabling secure collaboration while maintaining compliance with privacy regulations such as GDPR and HIPAA.

RATtestation Documentation 


September Newsletter: CC Mini Summit Recordings, Tech Talks, Secure AI Pipelines, and more


Hello Community!

Welcome to the 2024 September Newsletter

In Today’s Issue:

  1. Executive Director September Recap
  2. Recordings from the CC Mini Summit @ OSSEU
  3. TAC Tech Talks & Upcoming Discussions
  4. Community Blog Highlights

Welcome to the September edition of our newsletter – your guide to awesome happenings in our CCC community. Let’s go!

Executive Director Update

September saw us holding a Confidential Computing Mini-Summit, co-located with Open Source Summit Europe in Vienna. Despite torrential rain and major flooding in the preceding days, all of the speakers and panel members made it, and we had an interesting – and sometimes spirited! – set of discussions. I particularly enjoyed moderating a panel on attestation – see below for more on that topic. The speakers’ slide decks, as well as the video recordings from the Mini-Summit, will be available for you to watch.

I also popped over to Dublin for the Eyes Off Data Summit, where I appeared as a panel member in a session about the opportunities and challenges of Confidential Computing.

The main thing that I’m seeing at the moment in the community is a realization that while there’s still a lot of work to be done educating the wider world on the basics of Confidential Computing and TEEs, the really interesting work and the really exciting business opportunities are likely to revolve around attestation.  This is reflected in the conversations we’re having at conferences and the work that we’re doing in the CCC.  There are two main streams of work: the technical, where we’re looking at definitions, protocols and related areas; and business questions such as “who should run an attestation verification service?” and “what sorts of policies should we expect an attestation verification service to enforce?”.  Spanning these streams is the work by the Governance, Risk and Compliance (GRC) SIG, which also considers issues around regulation.

If any of this sounds interesting to you, or you’d like to be involved in any way in the work of the CCC, we’d love to hear from you.

Get in touch

CC Mini Summit Recordings & Slides

On Demand Content is Available NOW!

Enjoy the recordings from the Confidential Computing Mini Summit at OSS EU.

Watch the Recording

TAC Update

This month we had three really deep tech talks. A couple are more on the advanced end of the spectrum, but don’t let that scare you away from checking them out: they were all presented in really accessible formats. You’ll see the TAC Tech Talks playlist alongside our other playlists on the CCC YouTube channel:

TAC Tech Talk playlist 

Heading into October we’re in our final quarter to complete the goals we set for ourselves for the year. One of the big topics is getting Confidential Computing Features upstreamed into the Linux Kernel. The primary maintainers conference (The Linux Plumbers Conference) just concluded in late September so we’ll be getting some feedback from that in the TAC in October.

We’re also looking at starting some new work related to attestation verification. Feedback from another exercise showed us that there are still areas that need a common definition. Among them is the ability to identify which entities are inside and outside the Trusted Computing Base (TCB), also informally called the trust boundary. Entities like CSPs are pretty broad, and we want to be more granular so we can more accurately reflect who is and isn’t trusted for a given deployment – or at least what sorts of questions an adopter should think through.

Community Blog Highlights

Key Takeaways from the Confidential Computing Consortium Mini Summit at OSS EU

The Confidential Computing Consortium (CCC) recently participated in the Open Source Summit Europe (OSS EU), hosting a dedicated Confidential Computing Mini Summit. 

The event gathered some of the brightest minds in the industry to discuss the evolving landscape of Confidential Computing, its capabilities, and its impact across various industries. 

Check it out: all sessions from the summit are now available on the CCC YouTube channel for anyone who missed the event or wants to revisit the discussions.

Mini Summit Recap

The Mini Summit featured an impressive lineup of speakers and thought leaders, offering insights into the latest trends and innovations in Confidential Computing. Here’s a recap of the key sessions:

Opening Keynote – Confidential Computing: Enabling New Workloads and Use Cases

Mike Bursell, Executive Director of the CCC, opened the summit with a deep dive into Confidential Computing, showcasing how hardware-based Trusted Execution Environments (TEEs) now support new workloads. He highlighted its role in securing data with hardware-backed security and attestation, while exploring emerging applications in Generative AI, Web3, and multi-party computation.

Mike emphasized the transformative power of Confidential Computing, enabling secure workloads through the fusion of hardware security and cryptographic assurances. As Confidential Computing grows, remote attestation is becoming crucial, ensuring confidentiality and integrity in sensitive workloads across diverse environments.

Presentation here

Mini Summit Sessions

Cocos AI – Confidential Computing

  • Drasko Draskovic (CEO, Abstract Machines) and Dusan Borovcanin (Ultraviolet) shared, with a demo, how Cocos AI leverages Confidential Computing to create more secure AI environments.

Presentation here

TikTok’s Privacy Innovation – A Secure and Private Platform for Transparent Research Access with Privacy-Enhancing Technologies

  • Mingshen Sun (Research Scientist, TikTok) presented TikTok’s approach to privacy-enhancing technologies, showcasing a secure and private platform designed for transparent research access. The TikTok project is currently going through the process of being accepted as an open source project under the CCC.

Panel Session:  Attestation and Its Role in Confidential Computing

  • This panel, moderated by Mike Bursell, included expert perspectives from Paul Howard (Principal System Solutions Architect, Arm), Yuxuan Song (Ph.D. student, Inria Paris, and Sorbonne University), Ian Oliver (Cybersecurity Consultant), and Hannes Tschofenig (Professor, University of Applied Sciences Bonn-Rhein-Sieg). They explored how remote attestation serves as a key enabler for confidentiality and integrity, driving business value by assuring the trustworthiness of computing environments.  A wide-ranging – and at times quite lively! – discussion covered topics from IoT use cases to issues of transparency, from attestation models to approaches to integration.

Supporting Confidential Computing Across Europe’s Cloud-Edge Continuum

  • Francisco Picolini (Open Source Community Manager, OpenNebula Systems) highlighted efforts to extend Confidential Computing capabilities within a new European project spanning the cloud and edge computing spaces.

Presentation here

Hiding Attestation with Linux Keyring in Confidential Virtual Machines

  • Mikko Ylinen (Cloud Software Architect, Intel) presented an innovative approach to using Linux Keyring to enhance security in confidential virtual machines, offering new techniques for securing workloads.

Presentation here

Looking Ahead

The Confidential Computing Mini Summit at OSS EU provided attendees with a comprehensive view of Confidential Computing’s present and future potential. Discussions around Gen AI, Web3, and multi-party computation showed how Confidential Computing is set to play a pivotal role in shaping the future of technology by enabling more secure, trusted, and scalable computing environments.

Join the conversation with the CCC and its ecosystem of members for more on how Confidential Computing is transforming industries and unlocking new capabilities. The future of secure computation is just beginning, and there’s much more to discover.

Confidential Computing Consortium Resources

Confidential Computing for Secure AI Pipelines: Protecting the Full Model Lifecycle

By Sal Kimmich

As AI and machine learning continue to evolve, securing the entire lifecycle of AI models—from training to deployment—has become a critical priority for organizations handling sensitive data. The need for privacy and security is especially crucial in industries like healthcare, finance, and government, where AI models are often trained on data subject to GDPR, HIPAA, or CCPA regulations.

In this blog, we’ll explore how confidential computing enhances security across the entire AI model lifecycle, ensuring that sensitive data, models, and computations are protected at every stage. We’ll also examine the role of technologies like Intel SGX, ARM TrustZone, and trusted execution environments (TEEs) in achieving end-to-end security for AI workflows.

The AI Model Lifecycle: From Training to Deployment

The AI model lifecycle consists of several stages where sensitive data is exposed to potential risks:

  1. Data Collection and Preprocessing: This is the stage where data is gathered and prepared for model training. In regulated industries, this data often contains personally identifiable information (PII) or other sensitive details.
  2. Model Training: During training, AI models are fed data to learn patterns. This process is compute-intensive and often requires distributed systems or multi-cloud environments.
  3. Inference and Deployment: Once trained, AI models are deployed to make predictions on new data. At this stage, the model itself and the inference data need to remain secure.

Each stage presents unique security challenges. Data can be exposed during preprocessing, models can be stolen during training, and sensitive inputs or outputs can be compromised during inference. Securing all aspects of the AI pipeline is critical to maintaining data privacy and ensuring compliance with regulations like GDPR and HIPAA.
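To make the preprocessing risk concrete, here is a minimal Python sketch of PII being masked before data leaves the collection stage. In a confidential pipeline this transformation would run inside an enclave, so the raw values and the salt are never visible to the host. The field names and hashing scheme are purely illustrative.

```python
import hashlib

PII_FIELDS = {"name", "email"}  # illustrative PII field names

def anonymize(record: dict, salt: bytes) -> dict:
    """Replace PII values with salted one-way hashes; keep other fields.
    Inside a TEE, `salt` and the raw record would stay within the enclave."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256(salt + str(value).encode()).hexdigest()
            out[key] = digest[:16]  # truncated pseudonym
        else:
            out[key] = value
    return out

record = {"name": "Alice", "email": "alice@example.com", "age": 34}
safe = anonymize(record, salt=b"enclave-held secret")
assert safe["age"] == 34          # non-PII passes through
assert safe["name"] != "Alice"    # PII is pseudonymized
```

The same record anonymized with a different salt yields different pseudonyms, which is why keeping the salt inside the enclave matters: without it, the pseudonyms cannot be reversed or correlated across datasets.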

How Confidential Computing Protects AI at Each Stage

Confidential computing provides a solution to these challenges by using trusted execution environments (TEEs) to secure data, models, and computations throughout the AI pipeline.

  • Data Collection and Preprocessing: In this stage, TEEs ensure that sensitive data can be preprocessed in a secure enclave. Technologies like Intel SGX and ARM TrustZone create isolated environments where data can be cleaned, transformed, and anonymized without exposing it to unauthorized access.
  • Model Training: Confidential computing plays a critical role during AI model training, where TEEs are used to protect both the training data and the model itself. By running the training process within a secure enclave, organizations can ensure that no external party—whether malicious actors or cloud providers—can access or steal the model.
  • Inference and Deployment: After training, confidential computing ensures that the model remains protected during inference. Remote attestation allows organizations to verify that the AI model is running in a secure environment before it is deployed. This prevents data leakage during inference and ensures that the model’s predictions are based on trusted data inputs.
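The inference-stage guarantee above can be sketched as a client that refuses to release its inputs until the service's attestation report matches the measurement it expects. The report format and measurement below are simplified stand-ins for a real hardware-signed quote, and all names are illustrative.

```python
import hashlib

# Measurement of the model server build the client is willing to trust.
EXPECTED_MEASUREMENT = hashlib.sha256(b"model-server v2.1").hexdigest()

def fetch_attestation_report(server_build: bytes) -> dict:
    """Stand-in for requesting a signed quote from the server's TEE."""
    return {"measurement": hashlib.sha256(server_build).hexdigest()}

def send_inference_request(report: dict, features: list) -> str:
    """Release sensitive inputs only to a verified environment."""
    if report["measurement"] != EXPECTED_MEASUREMENT:
        raise PermissionError("attestation failed: refusing to send data")
    return f"sent {len(features)} features to attested model server"

good = fetch_attestation_report(b"model-server v2.1")
assert send_inference_request(good, [0.2, 0.7]).startswith("sent 2")

bad = fetch_attestation_report(b"backdoored server build")
try:
    send_inference_request(bad, [0.2, 0.7])
except PermissionError:
    pass  # sensitive inputs never leave the client
```

Gating the data release on appraisal, rather than on a server's self-description, is the essential pattern: the decision to trust is made by the data owner, before any sensitive bytes move.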

Intel SGX and ARM TrustZone: Securing AI Workflows

Intel SGX and ARM TrustZone are two leading technologies that enable confidential computing in AI pipelines by securing sensitive workloads at every stage.

  • Intel SGX: Intel SGX provides hardware-based security by creating secure enclaves that isolate data and code during processing. In AI workflows, Intel SGX is used to protect data during preprocessing and model training, ensuring that sensitive data and AI models remain secure even in multi-cloud environments.
  • ARM TrustZone: ARM TrustZone enables secure computation on mobile and IoT devices, providing isolated execution environments for sensitive AI models. ARM TrustZone is particularly useful in edge computing, where AI models are deployed close to data sources, and confidentiality is critical.

Both Intel SGX and ARM TrustZone provide the infrastructure needed to implement confidential AI pipelines, from data collection and training to inference and deployment.
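One way these technologies keep models confidential at rest is sealing: encrypting data under a key derived from the enclave's own identity, so only an identical enclave on the same device can unseal it. The sketch below simulates that binding with an HMAC-based key derivation and a toy keystream cipher; a real SGX enclave obtains its sealing key from the hardware rather than computing it in software, and real deployments use authenticated encryption.

```python
import hashlib
import hmac

def derive_seal_key(measurement: bytes, device_secret: bytes) -> bytes:
    """Simulated sealing key, bound to both the enclave measurement and a
    device-unique secret (in SGX, hardware performs this derivation)."""
    return hmac.new(device_secret, measurement, hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher for illustration only; not real cryptography."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

device_secret = b"per-device secret (illustrative)"
key_v1 = derive_seal_key(b"enclave-measurement-v1", device_secret)
sealed = xor_stream(key_v1, b"model weights")

# The same enclave identity recovers the plaintext...
assert xor_stream(key_v1, sealed) == b"model weights"
# ...while a modified enclave derives a different key and cannot.
key_other = derive_seal_key(b"tampered-enclave", device_secret)
assert xor_stream(key_other, sealed) != b"model weights"
```

The design consequence is worth noting: because the key depends on the measurement, even a legitimate enclave upgrade changes the derived key, so sealed model weights must be migrated deliberately across versions.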

Real-World Use Case: Confidential AI in Healthcare

A prime example of how confidential computing secures AI pipelines is in the healthcare industry, where AI models are often used to analyze sensitive patient data. By using confidential computing, healthcare organizations can ensure that patient records are protected during model training, and predictions are made without exposing sensitive data to unauthorized access.

In this case, confidential computing helps healthcare providers comply with regulations like HIPAA, while still benefiting from the insights generated by AI models.

Confidential Computing and AI Regulations: Ensuring Compliance with GDPR and HIPAA

As AI becomes more embedded in regulated industries, maintaining compliance with data privacy laws like GDPR and HIPAA is essential. Confidential computing ensures that sensitive data and AI models are protected at every stage of the AI lifecycle, reducing the risk of data breaches or unauthorized access.

By securing both data and models, confidential computing helps organizations meet the requirements for data minimization, transparency, and consent, ensuring that AI workflows remain compliant with global regulations.

AI Pipelines with Confidential Computing

As AI workflows become more complex and data privacy concerns grow, confidential computing will play a central role in securing the AI model lifecycle. From data preprocessing to model inference, confidential computing ensures that data and AI models remain protected in trusted execution environments, enabling organizations to deploy AI securely and compliantly.

With technologies like Intel SGX and ARM TrustZone, organizations can now secure their AI pipelines at every stage, ensuring privacy, security, and regulatory compliance in industries like healthcare, finance, and national security.
