
Podcast: TEEs and Confidential Computing: Paving the Way for Onchain AI


Don’t miss the latest Zero Gravity podcast episode, “TEEs and Confidential Computing: Paving the Way for Onchain AI.” Join industry experts in Confidential Computing as they explore how Trusted Execution Environments (TEEs) are revolutionizing AI and data-driven collaboration, with a special focus on Super Protocol’s impactful contributions.


Mike Bursell (Executive Director, Confidential Computing Consortium):

Open Source as the Foundation of Trust:

Mike emphasizes that “Magic pixie dust to all of these is open source, because you need to know that the software which is guaranteed seeing all of this stuff has been correctly written and there are no people trying to exfiltrate your data or do evil stuff with these keys as you go along… Without that you just don’t get the scale taking off; that’s really important.”

Simplifying Complex Technologies:

Mike also highlights the importance of abstracting complex technologies like TEEs to make them accessible to users without deep technical expertise. “That’s exactly what companies like Super Protocol are doing and the sort of thing that we are encouraging in the Confidential Computing Consortium as well. So, reducing the friction, bringing it to users who don’t need to know the really low-level detail – it does get very, very techy very, very quickly…”

Nukri Basharuli (Founder and CEO, Super Protocol):

Effective Collaboration Among Companies:

Nukri Basharuli points out “the last McKinsey report says 90% of large and medium companies want to collaborate based on their data. But at the same time, there are two opposite vectors: on one side, you need to collaborate on data with your partners, even with your direct competitors – to observe the market, to find insights, and to grow. But at the same time, you need to prevent these leakages and risks of cannibalization of each other. That’s why verifiable and confidential computing gives us opportunities to make this collaboration effective and provable.”

Accessibility of TEEs:

Nukri discusses Super Protocol’s development of a “ready-made AI & Data Marketplace within a confidential cloud based on TEE.” “In just a few clicks, you will be able to launch your model, upload your model from our Marketplace or from Hugging Face, in a fully private decentralized environment. Just a few clicks – deploy a smart contract and… this is why we are building Super: to make this road as easy as possible for millions of projects developing billions of personal AI agents based on personal data for businesses, private needs, and so on… And you can make this connection verifiable for all participants – that’s why this is a big difference. And next is that everything behind smart contracts in Super is governed only by smart contracts – all services, all computation services, 100% of services are governed only by smart contracts. This is another difference from a centralized cloud, which is governed by an administrator or owner of the service.”

David Attermann (Head of Web3 Investments, M31 Capital):

Growth of the Confidential Computing Market:

David Attermann predicts: “The confidential computing market is expected to grow 50% annually for the next 10 years. The demand for it is real, and it’s becoming a major industry now. Within Web3, TEEs have gained momentum as the most practical way to verify compute. For the next five years, TEEs will likely serve as the foundation for all verifiable compute in Web3.”

Unique Capabilities of Super Protocol: 

David also notes that “even without an interest in cryptocurrencies, one can appreciate the unique functionalities offered by Super Protocol.”

Listen to the full podcast: https://www.youtube.com/watch?v=gFql1SUNM-o

For a deeper dive into Super Protocol’s architecture, check out NVIDIA’s article: https://developer.nvidia.com/blog/exploring-the-case-of-super-protocol-with-self-sovereign-ai-and-nvidia-confidential-computing/

Honeypotz Inc. Joins the Confidential Computing Consortium as a Startup Tier Member


Honeypotz Inc., a leader in the field of Confidential Computing, has joined the Confidential Computing Consortium (CCC) as a Startup Tier member. This membership underscores Honeypotz’s commitment to enhancing data security and contributing to the broader adoption of trusted execution environments (TEEs) worldwide.

As part of the CCC, Honeypotz will collaborate with industry leaders like Red Hat to elevate security standards and foster innovation in data privacy and protection. This partnership reflects a shared commitment to delivering cutting-edge solutions that ensure data remains secure and private, even in the most sensitive computing environments.

Honeypotz specializes in secure computing technologies that protect data in use, empowering organizations to confidently deploy and manage mission-critical applications. By working alongside Red Hat and other CCC members, Honeypotz aims to push the boundaries of Confidential Computing, making secure and reliable solutions more accessible to businesses around the globe.

“We are excited to join the CCC and collaborate with Red Hat,” said Vladimir Lialine, Founder of Honeypotz Inc. “This partnership will enable us to accelerate the adoption of trusted execution environments and continue delivering innovative solutions that address the evolving security needs of our customers.”

The CCC unites industry leaders, innovators, and experts to create a collaborative ecosystem for advancing the adoption of Confidential Computing technologies. By joining this consortium, Honeypotz reaffirms its position as a leader in data security and a driving force behind the future of Confidential Computing.

Learn more about Honeypotz’s mission and its role in the CCC by visiting Confidential Computing Consortium.

Confidential Computing Consortium Resources

Building Trust Among the Untrusting: How Super Protocol Redefines AI Collaboration  


What if you could collaborate on AI projects, run complex models, fine-tune them, and even monetize both your models and data – all while retaining full control and ensuring confidentiality? It might sound impossible, especially when involving multiple participants you don’t need to trust – or even know.

In his article, “Web3 plus Confidential Computing,” Mike Bursell, Executive Director of the Confidential Computing Consortium, delves into this challenge: “It turns out that allowing the creation of trust relationships between mutually un-trusting parties is extremely complex, but one way that this can be done is what we will now address.”

Mike explores the synergy of Confidential Computing, blockchain, and smart contracts, showcasing Super Protocol as a real-world implementation of this vision. He explains: “Central to Super Protocol’s approach are two aspects: that it is open source, and that remote attestation is required to allow the client to have sufficient assurance of the system’s security. Smart contracts – themselves open source – enable resources from various actors to be combined into an offer placed on the blockchain, ready for execution by anyone with access and sufficient resources. What makes this approach Web3 is that none of these actors needs to be connected contractually.”

This approach “enables the network effect… building huge numbers of interconnected Web3 agents and applications, operating with the benefits of integrity and confidentiality offered by Confidential Computing, and backed up by remote attestation.” Unlike Web2 ecosystems, “often criticized for their fragility and lack of flexibility (not to mention the problems of securing them in the first place),” here is an opportunity to create “complex, flexible, and robust ecosystems where decentralized agents and applications can collaborate, with privacy controls designed in and clearly defined security assurances and policies.”

As Mike aptly puts it: “Technologies, when combined, sometimes yield fascinating – and commercially exciting – results.”

Explore the full article to dive into the basics, the synergy of these technologies, and the technical details of how Super Protocol is turning this vision into reality.

Read the Full Article

Guide to Confidential Computing Sessions at KubeCon + CloudNativeCon North America, Salt Lake City 2024


Ready to explore the forefront of Confidential Computing (CC) at KubeCon Salt Lake City? This guide highlights the key sessions and demos to get the most out of the KubeCon Schedule, from hands-on workshops and insightful talks to live demos at the Confidential Computing Consortium (CCC) booth. Here’s your roadmap to navigating CC at KubeCon:

Must-Attend Confidential Computing Sessions at KubeCon Salt Lake City

1. Confidential Containers 101: A Hands-On Workshop

  • When: Wednesday, 14:30 – 16:00
  • Where: Level 1, Grand Ballroom G
  • Presented by: Microsoft
    This in-depth workshop provides an introduction to Confidential Containers, with practical insights into container security and data privacy. Participants will learn best practices for deploying applications with Confidential Computing to address privacy and security in multi-tenant environments. Expect hands-on experience that is perfect for practitioners interested in integrating CC into their Kubernetes workloads.

2. From Silicon to Service: Ensuring Confidentiality in Serverless GPU Cloud Functions

  • When: Thursday, 11:00 – 11:35
  • Where: Level 1, Room 151 G
  • Presented by: NVIDIA
    Join NVIDIA’s session to discover how Confidential Computing powers secure serverless GPU cloud functions, ideal for supporting AI and machine learning operations with sensitive data. This talk will walk you through securing data from the silicon level up to cloud services, offering insights on GPU-optimized applications that maintain data confidentiality in the cloud. NVIDIA’s approach is essential for anyone interested in GPU-based Confidential Computing and scalable cloud AI functions.

3. Privacy in the Age of Big Compute

  • When: Friday, 16:00 – 16:35
  • Where: Level 1, Grand Ballroom A
  • Presented by: Confidential Computing Consortium
    Led by the CCC, this session dives into privacy management across massive compute environments, essential for industries with stringent data protection needs. Attendees will gain a perspective on the evolving landscape of privacy within cloud-native and confidential workloads, from regulatory challenges to innovative privacy solutions. This session is key for those looking to understand how Confidential Computing fits into large-scale compute architectures.

4. Confidential Compute Use Cases Mini Session

  • When: Wednesday, 18:00 – 18:30; Thursday, 14:30 – 16:30
  • Where: CCC Booth Q25
  • Presented by: Red Hat
    Red Hat’s mini-session offers a glimpse into real-world applications of Confidential Computing. Using case studies and practical examples, this session will highlight how organizations leverage CC for secure, private compute solutions. Perfect for those curious about real-world implementations, it’s a great chance to see how CC meets industry privacy and compliance needs.

5. Confidential Collaborative AI

  • When: Wednesday, 16:00 – 16:30
  • Where: CCC Booth Q25
  • Presented by: Ultraviolet
    This session explores how Confidential Computing enables secure, collaborative AI model sharing while safeguarding sensitive data. Ultraviolet will discuss how CC facilitates multi-organization AI collaboration without sacrificing data privacy. Attendees interested in secure, cross-partner AI projects will gain insight into CC’s applications in collaborative ML environments.

6. Protecting LLMs with Confidential Computing

  • When: Thursday, 16:30 – 17:00
  • Where: CCC Booth Q25
  • Presented by: Ultraviolet
    Ultraviolet’s talk addresses the growing need for securing large language models (LLMs) with Confidential Computing. As LLMs handle more sensitive data, securing these models from unauthorized access becomes crucial. This session is ideal for those working with AI models in regulated industries, providing strategies to ensure data protection without compromising model functionality.

CCC Booth Q25: Live Demos and Networking Opportunities

Stop by the Confidential Computing Consortium Booth Q25 for demos, mini-sessions, and networking opportunities with industry leaders. Here are some key events:

Remote Attestation with Veraison: Live Demo

  • When: Wednesday and Thursday, 10:45 – 12:45
  • Presented by: Linaro
    This live demo from Linaro showcases Veraison’s remote attestation capabilities, an essential process for verifying workload integrity within Confidential Computing environments. Attendees will witness how Veraison’s open-source solution enhances trust in CC workloads, making this a must-see demo for anyone focused on workload security.

Don’t Miss: CCC Power User Bingo Card

Get your CCC Power User Bingo Card at the CCC booth and complete activities as you participate in sessions and demos. Play along during KubeCrawl and become an expert on securing data in use through Confidential Computing!

Decentralized Data Governance in Multi-Cloud Environments with Confidential Computing


Author: Sal Kimmich

Introduction:

As enterprises increasingly adopt multi-cloud architectures, managing data governance across distributed systems has become more complex. With data privacy regulations like GDPR and CCPA requiring organizations to maintain strict control over sensitive information, ensuring compliance while leveraging the flexibility of multi-cloud systems presents a significant challenge.

Enter Confidential Computing: by using trusted execution environments (TEEs) and remote attestation across cloud platforms, organizations can ensure that sensitive data is processed in a secure and compliant manner. This blog will explore how decentralized data governance can be achieved in multi-cloud environments using confidential computing technologies.

Why Is Confidential Computing Essential for Multi-Cloud Data Security?

In a multi-cloud setup, organizations often distribute workloads across multiple cloud providers to meet their operational needs. However, this also increases the potential attack surface, as data flows through various infrastructures. Ensuring that data remains secure and compliant with regulations across these disparate environments is critical.

Confidential computing provides a solution by ensuring that sensitive data is processed in secure enclaves within TEEs, which isolate the data from unauthorized access. Using remote attestation, these TEEs can be verified, ensuring that the code executing within the enclave is trustworthy.

This ability to isolate and verify processing environments makes confidential computing essential for ensuring data security and governance across multi-cloud deployments.
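To make the verification step concrete, here is a minimal sketch of the core check a relying party performs: confirm the attestation evidence is fresh (the nonce matches the one issued for this request) and that the reported enclave measurement matches a known-good reference value. This is illustrative only; the field names and reference-value store are hypothetical, not any real verifier's API.

```python
import hashlib
import hmac

# Hypothetical allowlist of trusted enclave measurements (launch digests).
# In a real deployment these would come from a signed reference-value store.
TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"approved-enclave-build-1.4.2").hexdigest(),
}

def verify_evidence(evidence: dict, expected_nonce: str) -> bool:
    """Accept attestation evidence only if the nonce is fresh and the
    reported measurement matches a known-good value."""
    if not hmac.compare_digest(evidence.get("nonce", ""), expected_nonce):
        return False  # replayed or stale evidence
    return evidence.get("measurement") in TRUSTED_MEASUREMENTS

good = {
    "nonce": "n-1234",
    "measurement": hashlib.sha256(b"approved-enclave-build-1.4.2").hexdigest(),
}
tampered = {"nonce": "n-1234", "measurement": "deadbeef"}
```

A real attestation flow also checks a hardware-rooted signature over the evidence; the sketch keeps only the freshness and measurement checks to show the shape of the decision.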

What Is Decentralized Data Governance and Why Does It Matter in the Cloud?

Decentralized data governance refers to the practice of managing data policies, access controls, and compliance requirements across multiple locations or platforms without relying on a single centralized authority. In a multi-cloud environment, this is particularly challenging, as each cloud provider may have different security standards, policies, and regulatory requirements.

By decentralizing data governance, organizations can ensure that each cloud provider adheres to specific security and compliance rules. Confidential computing enables this by allowing organizations to enforce strict access controls and data policies at the TEE level, ensuring that data governance is maintained consistently, regardless of where the data is processed.

This approach to governance is crucial for businesses that need to operate in multiple jurisdictions or across cloud infrastructures, ensuring that they meet all relevant regulatory requirements.
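As an illustration of enforcing governance rules consistently across providers, the sketch below models a per-jurisdiction placement check: a workload may only run on a TEE type and in a region that the data's jurisdiction permits. The policy table, TEE names, and regions are hypothetical examples, not a real governance schema.

```python
# Hypothetical governance policy: each jurisdiction lists the TEE
# technologies and cloud regions in which its data may be processed.
POLICY = {
    "EU": {"tee_types": {"SGX", "SEV-SNP"}, "regions": {"eu-west", "eu-central"}},
    "US": {"tee_types": {"SGX", "SEV-SNP", "TDX"}, "regions": {"us-east", "us-west"}},
}

def placement_allowed(jurisdiction: str, tee_type: str, region: str) -> bool:
    """Return True only if this jurisdiction's policy permits processing
    on the given TEE type in the given region."""
    rule = POLICY.get(jurisdiction)
    if rule is None:
        return False  # no rule means no processing (deny by default)
    return tee_type in rule["tee_types"] and region in rule["regions"]
```

The deny-by-default branch is the important design choice: data from a jurisdiction with no registered policy is never processed, rather than falling through to a permissive default.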

How Open Enclave SDK Powers Secure Data Governance in Multi-Cloud Environments

One of the key tools that enables secure data governance in a multi-cloud environment is the Open Enclave SDK. Developed under the Confidential Computing Consortium, the Open Enclave SDK provides a consistent abstraction for creating TEEs across different platforms, including Azure, AWS, and Google Cloud.

By using the Open Enclave SDK, developers can build applications that securely process data in TEEs across multiple cloud environments without having to rewrite code for each cloud provider. This ensures that data remains secure and compliant with governance policies, regardless of the cloud infrastructure being used.

Additionally, the Open Enclave SDK supports remote attestation, allowing organizations to verify that data is being processed in trusted environments across all cloud platforms.
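The value of a consistent abstraction can be sketched as a common interface with pluggable, platform-specific verifiers: application code calls one function, and the platform detail is confined to a backend. The backends and evidence formats below are placeholders to show the pattern, not the Open Enclave SDK's actual API.

```python
from abc import ABC, abstractmethod

class AttestationBackend(ABC):
    """Common interface: application code is written once, and the
    platform-specific verification logic is swapped in per platform."""
    @abstractmethod
    def verify(self, evidence: bytes) -> bool: ...

class SgxBackend(AttestationBackend):
    def verify(self, evidence: bytes) -> bool:
        # Placeholder: a real backend would parse and validate an SGX quote.
        return evidence.startswith(b"SGX")

class SevSnpBackend(AttestationBackend):
    def verify(self, evidence: bytes) -> bool:
        # Placeholder: a real backend would validate an SEV-SNP report.
        return evidence.startswith(b"SNP")

def attest(backend: AttestationBackend, evidence: bytes) -> bool:
    """Application-facing call: one entry point, any backend."""
    return backend.verify(evidence)
```

Swapping clouds then means swapping the backend object, while every call site of `attest` stays unchanged.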

How Remote Attestation Ensures Compliance Across Multi-Cloud Systems

As organizations move workloads across different cloud providers, ensuring that each platform complies with relevant data privacy laws is a key concern. Remote attestation provides a mechanism to verify the security and integrity of TEEs, ensuring that sensitive data is processed only within approved environments.

In the context of GDPR, for example, remote attestation can help ensure that personal data is processed only within TEEs that meet the necessary security and privacy requirements. This ability to verify compliance on the fly allows businesses to confidently use multi-cloud infrastructures while maintaining adherence to data protection regulations.

Remote attestation helps organizations remain agile in the cloud while still upholding strict data sovereignty requirements, ensuring compliance with the CCPA, GDPR, and other global regulations.

Case Study: Confidential Computing in Real-World Data Sovereignty Challenges

A real-world example of decentralized data governance using confidential computing is the case of Italy’s Sovereign Private Cloud initiative. Italy’s government aimed to ensure that critical public sector workloads were processed within secure and private environments, adhering to the country’s strict data sovereignty laws.

By adopting confidential computing and remote attestation, Italy’s sovereign cloud enabled secure processing of sensitive public data across distributed environments. This approach ensured that even when data was processed outside of government infrastructure, it was handled securely in trusted execution environments, and compliance with Italian data protection laws was maintained.

To dive deeper into this solution, you can watch the session titled Sovereign Private Cloud: A Confidential Computing Solution for the Italian Public Administration from the Confidential Computing Summit 2024, where the implementation of the Sovereign Cloud is discussed in detail. The recording is available here.

This use case highlights how confidential computing can help address data sovereignty concerns, enabling organizations to operate securely across multiple cloud infrastructures without compromising compliance.

Achieving Decentralized Data Governance with Confidential Computing

As organizations continue to embrace multi-cloud strategies, managing data governance across distributed environments becomes more complex. Confidential computing offers a powerful solution by securing data in trusted execution environments and enabling remote attestation to verify compliance.

By leveraging tools like the Open Enclave SDK, businesses can maintain control over their data policies and ensure that sensitive information is processed in secure, compliant environments across all cloud platforms. As data sovereignty concerns grow, particularly in industries like healthcare and finance, confidential computing will play an increasingly important role in ensuring data governance and regulatory compliance across the multi-cloud landscape.


What Is Remote Attestation? Enhancing Data Governance with Confidential Computing


Author:  Sal Kimmich

Introduction

Imagine you’re working for a large healthcare provider. You have patient data that needs to be processed in the cloud, but you also want to make sure that this data isn’t accessed or tampered with by anyone, including the cloud provider itself. How can you trust that the cloud server is secure before sending sensitive information to it? That’s where remote attestation comes in. It’s like a virtual “security checkpoint” that ensures the environment where your data will be processed is trustworthy.

Now imagine you’re managing thousands of IoT devices in a smart city, such as street lights or traffic sensors, which are constantly sending data back to central systems. You need to know that these devices haven’t been compromised by hackers. Remote attestation (specifically, RATtestation) helps verify that these devices are secure and haven’t been tampered with, ensuring reliable and secure communication.

Remote attestation is a core component of Confidential Computing that helps verify the integrity of a processing environment in both cloud and IoT setups, building trust across these systems. 

As organizations increasingly adopt cloud and distributed systems, securing sensitive data has become more critical than ever. Remote attestation, a core component of Confidential Computing, verifies the integrity of a data processing environment before sensitive workloads are accessed. This technology builds trust across multi-cloud environments by ensuring workloads run securely within Trusted Execution Environments (TEEs). However, this need for security also extends to the rapidly growing Internet of Things (IoT), where secure real-time operations are crucial.

Remote Attestation in Cloud and IoT: The Key to Secure Data Processing

Remote attestation operates differently in cloud and IoT environments, but its core function remains the same: verifying that a piece of code or application is running inside a secure, trusted environment (TEE).

In cloud computing, remote attestation assures that sensitive workloads, such as financial transactions or healthcare data, are processed securely within TEEs. In IoT, where devices operate in often uncontrolled environments, remote attestation ensures that each device remains trustworthy and untampered with, allowing it to communicate securely with cloud services or other devices.

Confidential Computing for Cloud: In cloud computing, multi-cloud architectures require trust across several infrastructures. Remote attestation ensures that sensitive workloads run in verified TEEs, providing a secure way to meet strict compliance requirements such as GDPR and HIPAA.

Confidential Computing for IoT: Real-Time Security: In IoT environments, remote attestation ensures the continuous integrity of distributed devices. For example, connected medical devices or autonomous vehicles must maintain their trustworthiness during real-time operations. Remote attestation allows organizations to verify these devices dynamically, preventing compromised systems from accessing sensitive networks.
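The continuous-attestation idea above can be sketched as a toy monitor (a simplified model, not Keylime's actual protocol): each device is enrolled with a baseline firmware measurement, and any later report that deviates from that baseline quarantines the device immediately.

```python
import hashlib

class DeviceMonitor:
    """Toy continuous-attestation monitor: each device periodically reports
    a measurement; a mismatch against the enrolled baseline quarantines it."""
    def __init__(self):
        self.baseline = {}       # device_id -> expected measurement
        self.quarantined = set() # devices no longer trusted

    def enroll(self, device_id: str, firmware: bytes) -> None:
        self.baseline[device_id] = hashlib.sha256(firmware).hexdigest()

    def report(self, device_id: str, measurement: str) -> bool:
        """Process a periodic report; return True if still trusted."""
        if measurement != self.baseline.get(device_id):
            self.quarantined.add(device_id)  # isolate immediately
            return False
        return True
```

In a real system the measurement would be signed by a hardware root of trust (for example, a TPM quote) rather than reported as a bare hash, but the enroll/report/quarantine lifecycle is the same.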

Several CCC projects actively contribute to remote attestation in cloud and IoT:

  • Gramine: Primarily focused on Intel® SGX, Gramine supports secure workload execution across multi-cloud infrastructures, providing compatibility for legacy applications that require trusted execution environments.
  • Veraison: This flexible framework verifies attestation evidence from TEEs across multiple architectures, validating the integrity of both cloud and IoT devices.
  • Keylime: Particularly useful in IoT environments, Keylime offers remote boot attestation and real-time integrity monitoring, ensuring that IoT devices maintain a secure status during operations.
  • SPDM Tools: Developed to secure TEE-I/O in both cloud and IoT, SPDM Tools verify that communications between devices remain secure within trusted execution environments.
  • Open Enclave SDK: This project abstracts hardware differences and provides a unified API for building secure enclave applications, supporting both cloud-based and IoT use cases.

For more information on all of these projects, visit our CCC Project Portfolio.

How Remote Attestation Ensures Compliance with Global Data Privacy Laws

In industries governed by stringent data privacy laws such as GDPR (General Data Protection Regulation) in Europe and HIPAA (Health Insurance Portability and Accountability Act) in the US, compliance is a top priority. Remote attestation plays a pivotal role in ensuring that sensitive data is processed securely, in compliance with global privacy regulations.

  1. GDPR Compliance: Remote attestation ensures that personal data is processed in verified, secure TEEs, preventing unauthorized access or tampering. This is particularly critical for organizations in Europe, where GDPR mandates stringent data protection and privacy standards. The ability to verify the integrity of the cloud infrastructure before processing data allows organizations to prove compliance during audits.
  2. HIPAA Compliance: In the healthcare sector, remote attestation is essential for ensuring that sensitive patient data is processed securely in environments that comply with HIPAA. By confirming the integrity of the TEE, healthcare providers can securely manage electronic health records (EHRs), ensuring that patient data remains protected during transmission and processing.

Remote attestation provides organizations with the assurance that sensitive data is handled within secure environments that comply with privacy laws. As multi-cloud and IoT networks grow, ensuring compliance with these laws through verified environments will become even more critical.
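One common way attestation verdicts are turned into enforcement is a key-release broker: the key that decrypts sensitive records (such as EHRs) is handed out only to environments whose attestation evidence verifies. The sketch below uses hypothetical measurement values and is not any real key-management API.

```python
import secrets
from typing import Optional

# Placeholder reference value for an approved, attested TEE build.
APPROVED_MEASUREMENT = "a1b2c3"
# The data-encryption key the broker guards.
DATA_KEY = secrets.token_hex(16)

def release_key(measurement: str) -> Optional[str]:
    """Return the data key only to an attested, approved environment."""
    if measurement == APPROVED_MEASUREMENT:
        return DATA_KEY
    return None  # unverified environments never see the key
```

Because the data is useless without the key, this pattern makes "processed only within approved TEEs" an enforced property rather than a policy statement.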

Real-Time Trust in IoT: The Importance of Continuous Attestation

The challenge in IoT environments lies in ensuring that every device continuously adheres to security standards. For example, Keylime enables real-time integrity monitoring, ensuring that compromised IoT devices can be detected and isolated immediately. This is especially crucial in industries like healthcare, where real-time decision-making is directly influenced by the security status of devices.

The Future of Remote Attestation: From Cloud to Edge

Remote attestation is evolving to meet the demands of both cloud and IoT environments. As organizations adopt more complex multi-cloud infrastructures and IoT networks, the role of remote attestation will expand. Post-quantum cryptography and enhanced security measures such as multi-party attestation will improve the scalability of remote attestation in the future, making it more robust against emerging threats.

Conclusion: Building Trust with Remote Attestation

Remote attestation is a crucial tool for building trust in both cloud and IoT environments. Whether securing sensitive workloads in multi-cloud infrastructures or maintaining the integrity of millions of IoT devices, remote attestation ensures that data is processed in trusted, verified environments. CCC open-source projects such as Veraison, Gramine, Keylime, and SPDM Tools are leading the way in making remote attestation scalable and secure. As Confidential Computing continues to evolve, remote attestation will remain a cornerstone for ensuring security and trust across distributed systems.

Origin and Motivations Behind RATtestation

The RATtestation documentation emerged from the need to standardize remote attestation protocols across diverse Confidential Computing environments. The document addresses the challenge of securely verifying the integrity of systems in distributed and multi-cloud architectures. RATtestation defines best practices for trust establishment, data protection, and secure communication, ensuring the integrity of Trusted Execution Environments (TEEs). It emphasizes the role of remote attestation in enabling secure collaboration while maintaining compliance with privacy regulations such as GDPR and HIPAA.

RATtestation Documentation 


Key Takeaways from the Confidential Computing Consortium Mini Summit at OSS EU


The Confidential Computing Consortium (CCC) recently participated in the Open Source Summit Europe (OSS EU), hosting a dedicated Confidential Computing Mini Summit. 

The event gathered some of the brightest minds in the industry to discuss the evolving landscape of Confidential Computing, its capabilities, and its impact across various industries. 

Check it out: all sessions from the summit are now available on the CCC YouTube channel for anyone who missed the event or wants to revisit the discussions.

Mini Summit Recap

The Mini Summit featured an impressive lineup of speakers and thought leaders, offering insights into the latest trends and innovations in Confidential Computing. Here’s a recap of the key sessions:

Opening Keynote – Confidential Computing: Enabling New Workloads and Use Cases

Mike Bursell, Executive Director of the CCC, opened the summit with a deep dive into Confidential Computing, showcasing how hardware-based Trusted Execution Environments (TEEs) now support new workloads. He highlighted its role in securing data with hardware-backed security and attestation, while exploring emerging applications in Generative AI, Web3, and multi-party computation.

Mike emphasized the transformative power of Confidential Computing, enabling secure workloads through the fusion of hardware security and cryptographic assurances. As Confidential Computing grows, remote attestation is becoming crucial, ensuring confidentiality and integrity in sensitive workloads across diverse environments.

Presentation here

Mini Summit Sessions

Cocos AI – Confidential Computing

  • Drasko Draskovic (CEO, Abstract Machines) and Dusan Borovcanin (Ultraviolet) shared, with a demo, how Cocos AI leverages Confidential Computing to create more secure AI environments.

Presentation here

TikTok’s Privacy Innovation: A Secure and Private Platform for Transparent Research Access with Privacy-Enhancing Technologies

  • Mingshen Sun (Research Scientist, TikTok) presented TikTok’s approach to privacy-enhancing technologies, showcasing a secure and private platform designed for transparent research access. The TikTok project is currently going through the process of being accepted as an open source project under the CCC.

Panel Session: Attestation and Its Role in Confidential Computing

  • This panel, moderated by Mike Bursell, included expert perspectives from Paul Howard (Principal System Solutions Architect, Arm), Yuxuan Song (Ph.D. student, Inria Paris, and Sorbonne University), Ian Oliver (Cybersecurity Consultant), and Hannes Tschofenig (Professor, University of Applied Sciences Bonn-Rhein-Sieg). They explored how remote attestation serves as a key enabler for confidentiality and integrity, driving business value by assuring the trustworthiness of computing environments. A wide-ranging – and at times quite lively! – discussion covered topics from IoT use cases to issues of transparency, from attestation models to approaches to integration.

Supporting Confidential Computing Across Europe’s Cloud-Edge Continuum

  • Francisco Picolini (Open Source Community Manager, OpenNebula Systems) highlighted efforts to extend Confidential Computing capabilities within a new European project spanning the cloud and edge computing spaces.

Presentation here

Hiding Attestation with Linux Keyring in Confidential Virtual Machines

  • Mikko Ylinen (Cloud Software Architect, Intel) presented an innovative approach to using Linux Keyring to enhance security in confidential virtual machines, offering new techniques for securing workloads.

Presentation here

Looking Ahead

The Confidential Computing Mini Summit at OSS EU provided attendees with a comprehensive view of Confidential Computing’s present and future potential. Discussions around Gen AI, Web3, and multi-party computation showed how Confidential Computing is set to play a pivotal role in shaping the future of technology by enabling more secure, trusted, and scalable computing environments.

Join the conversation with the CCC and its ecosystem of members for more on how Confidential Computing is transforming industries and unlocking new capabilities. The future of secure computation is just beginning, and there’s much more to discover.

Confidential Computing Consortium Resources

Confidential Computing for Secure AI Pipelines: Protecting the Full Model Lifecycle

By Sal Kimmich

As AI and machine learning continue to evolve, securing the entire lifecycle of AI models—from training to deployment—has become a critical priority for organizations handling sensitive data. The need for privacy and security is especially crucial in industries like healthcare, finance, and government, where AI models are often trained on data subject to GDPR, HIPAA, or CCPA regulations.

In this blog, we’ll explore how confidential computing enhances security across the entire AI model lifecycle, ensuring that sensitive data, models, and computations are protected at every stage. We’ll also examine the role of technologies like Intel SGX, ARM TrustZone, and trusted execution environments (TEEs) in achieving end-to-end security for AI workflows.

The AI Model Lifecycle: From Training to Deployment

The AI model lifecycle consists of several stages where sensitive data is exposed to potential risks:

  1. Data Collection and Preprocessing: This is the stage where data is gathered and prepared for model training. In regulated industries, this data often contains personally identifiable information (PII) or other sensitive details.
  2. Model Training: During training, AI models are fed data to learn patterns. This process is compute-intensive and often requires distributed systems or multi-cloud environments.
  3. Inference and Deployment: Once trained, AI models are deployed to make predictions on new data. At this stage, the model itself and the inference data need to remain secure.

Each stage presents unique security challenges. Data can be exposed during preprocessing, models can be stolen during training, and sensitive inputs or outputs can be compromised during inference. Securing all aspects of the AI pipeline is critical to maintaining data privacy and ensuring compliance with regulations like GDPR and HIPAA.

How Confidential Computing Protects AI at Each Stage

Confidential computing provides a solution to these challenges by using trusted execution environments (TEEs) to secure data, models, and computations throughout the AI pipeline.

  • Data Collection and Preprocessing: In this stage, TEEs ensure that sensitive data can be preprocessed in a secure enclave. Technologies like Intel SGX and ARM TrustZone create isolated environments where data can be cleaned, transformed, and anonymized without exposing it to unauthorized access.
  • Model Training: Confidential computing plays a critical role during AI model training, where TEEs are used to protect both the training data and the model itself. By running the training process within a secure enclave, organizations can ensure that no external party—whether malicious actors or cloud providers—can access or steal the model.
  • Inference and Deployment: After training, confidential computing ensures that the model remains protected during inference. Remote attestation allows organizations to verify that the AI model is running in a secure environment before it is deployed. This prevents data leakage during inference and ensures that the model’s predictions are based on trusted data inputs.

Intel SGX and ARM TrustZone: Securing AI Workflows

Intel SGX and ARM TrustZone are two leading technologies that enable confidential computing in AI pipelines by securing sensitive workloads at every stage.

  • Intel SGX: Intel SGX provides hardware-based security by creating secure enclaves that isolate data and code during processing. In AI workflows, Intel SGX is used to protect data during preprocessing and model training, ensuring that sensitive data and AI models remain secure even in multi-cloud environments.
  • ARM TrustZone: ARM TrustZone enables secure computation on mobile and IoT devices, providing isolated execution environments for sensitive AI models. ARM TrustZone is particularly useful in edge computing, where AI models are deployed close to data sources, and confidentiality is critical.

Both Intel SGX and ARM TrustZone provide the infrastructure needed to implement confidential AI pipelines, from data collection and training to inference and deployment.

Real-World Use Case: Confidential AI in Healthcare

A prime example of how confidential computing secures AI pipelines is in the healthcare industry, where AI models are often used to analyze sensitive patient data. By using confidential computing, healthcare organizations can ensure that patient records are protected during model training, and predictions are made without exposing sensitive data to unauthorized access.

In this case, confidential computing helps healthcare providers comply with regulations like HIPAA, while still benefiting from the insights generated by AI models.

Confidential Computing and AI Regulations: Ensuring Compliance with GDPR and HIPAA

As AI becomes more embedded in regulated industries, maintaining compliance with data privacy laws like GDPR and HIPAA is essential. Confidential computing ensures that sensitive data and AI models are protected at every stage of the AI lifecycle, reducing the risk of data breaches or unauthorized access.

By securing both data and models, confidential computing helps organizations meet the requirements for data minimization, transparency, and consent, ensuring that AI workflows remain compliant with global regulations.

AI Pipelines with Confidential Computing

As AI workflows become more complex and data privacy concerns grow, confidential computing will play a central role in securing the AI model lifecycle. From data preprocessing to model inference, confidential computing ensures that data and AI models remain protected in trusted execution environments, enabling organizations to deploy AI securely and compliantly.

With technologies like Intel SGX and ARM TrustZone, organizations can now secure their AI pipelines at every stage, ensuring privacy, security, and regulatory compliance in industries like healthcare, finance, and national security.

Strengthening Multi-Cloud Security: The Role of COCONUT-SVSM in Confidential Virtual Machines

By Sal Kimmich

Introduction:

As businesses increasingly adopt multi-cloud environments to run their critical workloads, ensuring data security and compliance with regional privacy regulations becomes paramount. The proliferation of sensitive workloads across different cloud providers raises concerns about the safety of data, particularly in virtualized environments where virtual machines (VMs) handle vast amounts of personal and regulated data.

This is where COCONUT-SVSM (Secure Virtual Machine Service Module) shines. Designed to provide secure services and device emulations for confidential virtual machines (CVMs), COCONUT-SVSM ensures that sensitive workloads remain secure, even in distributed or potentially untrusted cloud environments. In this blog, we will explore the value of COCONUT-SVSM in safeguarding virtualized workloads, highlighting how it strengthens multi-cloud security.

Why Secure Virtual Machines Matter in Multi-Cloud Environments

Virtual machines (VMs) are a critical part of the modern cloud infrastructure, enabling organizations to efficiently allocate resources and scale their operations. However, traditional VMs are vulnerable to attacks from both external threats and privileged insiders, especially when data is processed in the cloud.

In multi-cloud environments, workloads can span multiple cloud providers, making it difficult to ensure that each environment is secure. This is where confidential computing and technologies like COCONUT-SVSM come into play. By creating confidential virtual machines (CVMs), organizations can isolate sensitive workloads from the underlying host operating system, ensuring that data remains protected, even if the host is compromised.

The Architecture of COCONUT-SVSM: Providing Security for Confidential VMs

At the heart of COCONUT-SVSM is its ability to provide secure services to CVMs through device emulations and remote attestation. These features enable organizations to run sensitive workloads with the assurance that both the data and the virtual machine environment are secure from unauthorized access.

Key features of COCONUT-SVSM include:

  • TPM Emulation: Emulating a Trusted Platform Module (TPM), COCONUT-SVSM enables secure key management and encryption within the virtual machine.
  • Secure Boot: Using UEFI variable storage, COCONUT-SVSM ensures that VMs can only boot in secure environments, preventing malicious actors from modifying the boot process.
  • Live Migration Support: In multi-cloud environments, VMs often need to be moved between physical hosts. COCONUT-SVSM supports secure live migration, ensuring that sensitive data remains protected during transitions.

These features help organizations comply with strict data privacy regulations, such as GDPR and CCPA, by maintaining control over how and where sensitive data is processed.
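From inside a guest, some of these properties can be sanity-checked without any special tooling: an SVSM-emulated TPM typically surfaces as a character device, and the UEFI SecureBoot variable is readable through efivarfs (four attribute bytes followed by the variable data, where a data byte of 1 means enabled). The sketch below is a rough guest-side check, not part of COCONUT-SVSM itself; the paths are the conventional Linux locations and may differ per distribution.

```python
import os

def has_vtpm(dev_path: str = "/dev/tpm0") -> bool:
    """A vTPM emulated by the SVSM usually appears as a TPM character
    device in the guest; its presence is a quick sanity check."""
    return os.path.exists(dev_path)

def secure_boot_enabled(
    efivar_path: str = "/sys/firmware/efi/efivars/"
                       "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c",
) -> bool:
    """Reads the UEFI SecureBoot variable via efivarfs: 4 attribute
    bytes, then 1 data byte (1 = enabled). Returns False if the
    variable is absent or unreadable."""
    try:
        with open(efivar_path, "rb") as f:
            data = f.read()
        return len(data) >= 5 and data[4] == 1
    except OSError:
        return False
```

These checks only confirm what the guest sees; the authoritative evidence that the SVSM and its services are genuine comes from remote attestation, not from the guest's own filesystem.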

How COCONUT-SVSM Enhances Compliance in Multi-Cloud Systems

Compliance with data sovereignty and privacy regulations is a major challenge for organizations operating across multiple jurisdictions. For example, regulations like GDPR mandate that personal data is processed and stored within specific geographic boundaries, while ensuring that security controls are in place to prevent unauthorized access.

COCONUT-SVSM enhances compliance by ensuring that data processed in confidential virtual machines is always secured, regardless of where the data is physically located. This is particularly important for businesses with operations in multiple regions, as it allows them to securely process sensitive workloads while adhering to local regulations.

Additionally, remote attestation provided by COCONUT-SVSM ensures that workloads are only processed in trusted environments, providing an additional layer of security for organizations handling sensitive data.

Real-World Applications: COCONUT-SVSM in Healthcare and Finance

The healthcare and finance sectors are two prime examples of industries that can benefit from the enhanced security provided by COCONUT-SVSM. Both industries handle vast amounts of personal and financial data, making security and compliance critical to their operations.

  • Healthcare: In healthcare, COCONUT-SVSM can be used to protect sensitive patient data during AI-driven diagnostics or clinical trials. By creating secure environments for processing healthcare data, COCONUT-SVSM helps healthcare providers comply with regulations like HIPAA while ensuring that patient privacy is maintained.
  • Finance: In the financial sector, COCONUT-SVSM can be used to secure fraud detection models or other sensitive financial operations. By protecting virtual machines used to process financial transactions, COCONUT-SVSM helps financial institutions comply with PCI-DSS standards and other financial regulations.

COCONUT-SVSM as a Pillar of Multi-Cloud Security

As organizations continue to embrace multi-cloud strategies, the importance of securing virtualized environments cannot be overstated. COCONUT-SVSM provides the tools needed to ensure that confidential virtual machines (CVMs) remain secure and compliant, even when workloads are distributed across multiple cloud providers.

By leveraging features like TPM emulation, secure boot, and remote attestation, COCONUT-SVSM enables organizations to maintain control over their data and adhere to data sovereignty regulations, making it an essential part of any confidential computing strategy. As industries like healthcare and finance continue to handle sensitive data, COCONUT-SVSM will play a critical role in protecting workloads and ensuring compliance in multi-cloud environments.


Exploring Enclave SDKs: Enhancing Confidential Computing

Author: Sal Kimmich


In the realm of confidential computing, enclave SDKs play a pivotal role in ensuring secure and private execution environments. These software development kits provide developers with the necessary tools and frameworks to build, deploy, and manage applications that operate within enclaves. In this blog, we will explore three prominent open-source enclave SDKs: Open Enclave, Keystone, and Veracruz. Additionally, we will touch upon the Certifier Framework, which, while slightly different, contributes significantly to the landscape of confidential computing.

Open Enclave

Open Enclave is a versatile SDK that provides a unified API surface for creating enclaves on various Trusted Execution Environments (TEEs) such as Intel SGX and ARM TrustZone. Developed and maintained by a broad community, Open Enclave aims to simplify the development of secure applications by offering a consistent and portable interface across different hardware platforms.

Key Features of Open Enclave:

  • Cross-Platform Support: One of the standout features of Open Enclave is its ability to support multiple hardware architectures, making it a flexible choice for developers working in diverse environments.
  • Rich Documentation and Community Support: Open Enclave boasts extensive documentation and a supportive community, providing ample resources for developers to learn and troubleshoot.
  • Comprehensive Security Measures: The SDK incorporates robust security features, including memory encryption, attestation, and secure storage, ensuring that applications remain secure and tamper-resistant.

Keystone

Keystone is an open-source framework designed to provide secure enclaves on RISC-V architecture. It is highly modular and customizable, allowing developers to tailor the security features to meet the specific needs of their applications.

Key Features of Keystone:

  • Modularity: Keystone’s design philosophy revolves around modularity, enabling developers to customize the enclave’s components, such as the security monitor, runtime, and drivers.
  • RISC-V Architecture: Keystone is built specifically for the RISC-V architecture, leveraging its open and extensible nature to offer a unique and highly configurable enclave solution.
  • Research and Innovation: Keystone is often used in academic and research settings, driving innovation in the field of confidential computing and providing a platform for experimental security enhancements.

Veracruz

Veracruz is an open-source project that aims to create a collaborative computing environment where multiple parties can jointly compute over shared data without compromising privacy. It emphasizes data confidentiality and integrity, making it ideal for scenarios involving sensitive data.

Key Features of Veracruz:

  • Collaborative Computing: Veracruz enables secure multi-party computation, allowing different stakeholders to collaborate on computations without revealing their individual data.
  • Privacy-Preserving: The framework ensures that data remains confidential throughout the computation process, leveraging TEEs to provide strong privacy guarantees.
  • Flexible Deployment: Veracruz supports various deployment models, including cloud, edge, and on-premises, making it adaptable to different use cases and environments.
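The collaboration pattern Veracruz enables can be illustrated with a toy example. This is purely a sketch of the idea, with assumed names: real Veracruz computations are WebAssembly programs executed inside a TEE under a policy that all parties agree to, whereas here the "enclave" is just an ordinary function whose contract is that only the agreed aggregate is released.

```python
def collaborative_mean(contributions: dict) -> float:
    """Toy stand-in for a policy-governed computation: each party's
    dataset is visible only to the computation itself, and the policy
    permits releasing just this one aggregate statistic."""
    all_values = [v for values in contributions.values() for v in values]
    return sum(all_values) / len(all_values)

# Two competitors pool data; neither learns the other's raw values,
# only the jointly computed mean leaves the enclave.
result = collaborative_mean({"party_a": [1.0, 2.0], "party_b": [3.0]})
```

The privacy guarantee in the real system comes from the TEE's isolation plus attestation that the agreed program (and nothing else) is what runs, not from the arithmetic itself.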

Certifier Framework: A Slightly Different Approach

While the Certifier Framework for Confidential Computing shares the goal of enhancing security and privacy in computational environments, it adopts a distinct approach compared to traditional enclave SDKs.

Certifier Framework focuses on providing a unified certification and attestation infrastructure for confidential computing environments. It aims to ensure that the software and hardware components in a system can be securely attested and certified, providing trust guarantees to end-users and applications.

Key Features of the Certifier Framework:

  • Certification and Attestation: The primary focus of the Certifier Framework is on certification and attestation, ensuring that all components of a confidential computing environment meet stringent security standards.
  • Unified Approach: The framework offers a unified approach to certification across different TEEs, simplifying the process of establishing trust in diverse environments.
  • Integration with Existing Solutions: The Certifier Framework can be integrated with other enclave SDKs and confidential computing solutions, enhancing their security posture through robust certification mechanisms.

Conclusion

Enclave SDKs like Open Enclave, Keystone, and Veracruz are critical tools for developers aiming to build secure and private applications in the realm of confidential computing. Each of these projects brings unique strengths and features to the table, catering to different hardware architectures and use cases. Meanwhile, the Certifier Framework provides an essential layer of trust and certification, complementing these SDKs and ensuring that confidential computing environments meet the highest security standards. By leveraging these powerful tools, developers can create innovative solutions that protect sensitive data and maintain user privacy in an increasingly digital world.
