Key Takeaways from the Confidential Computing Consortium Mini Summit at OSS EU

The Confidential Computing Consortium (CCC) recently participated in the Open Source Summit Europe (OSS EU), hosting a dedicated Confidential Computing Mini Summit. 

The event gathered some of the brightest minds in the industry to discuss the evolving landscape of Confidential Computing, its capabilities, and its impact across various industries. 

All sessions from the summit are now available on the CCC YouTube channel for anyone who missed the event or wants to revisit the discussions.

Mini Summit Recap

The Mini Summit featured an impressive lineup of speakers and thought leaders, offering insights into the latest trends and innovations in Confidential Computing. Here’s a recap of the key sessions:

Opening Keynote – Confidential Computing: Enabling New Workloads and Use Cases

Mike Bursell, Executive Director of the CCC, opened the summit with a deep dive into Confidential Computing, showcasing how hardware-based Trusted Execution Environments (TEEs) now support new workloads. He highlighted its role in securing data with hardware-backed security and attestation, while exploring emerging applications in Generative AI, Web3, and multi-party computation.

Mike emphasized the transformative power of Confidential Computing, enabling secure workloads through the fusion of hardware security and cryptographic assurances. As Confidential Computing grows, remote attestation is becoming crucial, ensuring confidentiality and integrity in sensitive workloads across diverse environments.

Mini Summit Sessions

Cocos AI – Confidential Computing

  • Drasko Draskovic (CEO, Abstract Machines) and Dusan Borovcanin (Ultraviolet) demonstrated how Cocos AI leverages Confidential Computing to create more secure AI environments.

TikTok’s Privacy Innovation – A Secure and Private Platform for Transparent Research Access with Privacy-Enhancing Technologies

  • Mingshen Sun (Research Scientist, TikTok) presented TikTok’s approach to privacy-enhancing technologies, showcasing a secure and private platform designed for transparent research access.  The TikTok project is currently going through the process of being accepted as an open source project under the CCC.

Panel Session:  Attestation and Its Role in Confidential Computing

  • This panel, moderated by Mike Bursell, included expert perspectives from Paul Howard (Principal System Solutions Architect, Arm), Yuxuan Song (Ph.D. student, Inria Paris and Sorbonne University), Ian Oliver (Cybersecurity Consultant), and Hannes Tschofenig (Professor, University of Applied Sciences Bonn-Rhein-Sieg). They explored how remote attestation serves as a key enabler for confidentiality and integrity, driving business value by assuring the trustworthiness of computing environments. A wide-ranging – and at times quite lively! – discussion covered topics from IoT use cases to issues of transparency, and from attestation models to approaches to integration.

Supporting Confidential Computing Across Europe’s Cloud-Edge Continuum

  • Francisco Picolini (Open Source Community Manager, OpenNebula Systems) highlighted efforts to extend Confidential Computing capabilities within a new European project spanning the cloud and edge computing spaces.

Hiding Attestation with Linux Keyring in Confidential Virtual Machines

  • Mikko Ylinen (Cloud Software Architect, Intel) presented an innovative approach to using Linux Keyring to enhance security in confidential virtual machines, offering new techniques for securing workloads.

Looking Ahead

The Confidential Computing Mini Summit at OSS EU provided attendees with a comprehensive view of Confidential Computing’s present and future potential. Discussions around Gen AI, Web3, and multi-party computation showed how Confidential Computing is set to play a pivotal role in shaping the future of technology by enabling more secure, trusted, and scalable computing environments.

Join the conversation with the CCC and its ecosystem of members for more on how Confidential Computing is transforming industries and unlocking new capabilities. The future of secure computation is just beginning, and there’s much more to discover.

Confidential Computing for Secure AI Pipelines: Protecting the Full Model Lifecycle

By Sal Kimmich

As AI and machine learning continue to evolve, securing the entire lifecycle of AI models—from training to deployment—has become a critical priority for organizations handling sensitive data. The need for privacy and security is especially crucial in industries like healthcare, finance, and government, where AI models are often trained on data subject to GDPR, HIPAA, or CCPA regulations.

In this blog, we’ll explore how confidential computing enhances security across the entire AI model lifecycle, ensuring that sensitive data, models, and computations are protected at every stage. We’ll also examine the role of technologies like Intel SGX, ARM TrustZone, and trusted execution environments (TEEs) in achieving end-to-end security for AI workflows.

The AI Model Lifecycle: From Training to Deployment

The AI model lifecycle consists of several stages where sensitive data is exposed to potential risks:

  1. Data Collection and Preprocessing: This is the stage where data is gathered and prepared for model training. In regulated industries, this data often contains personally identifiable information (PII) or other sensitive details.
  2. Model Training: During training, AI models are fed data to learn patterns. This process is compute-intensive and often requires distributed systems or multi-cloud environments.
  3. Inference and Deployment: Once trained, AI models are deployed to make predictions on new data. At this stage, the model itself and the inference data need to remain secure.

Each stage presents unique security challenges. Data can be exposed during preprocessing, models can be stolen during training, and sensitive inputs or outputs can be compromised during inference. Securing all aspects of the AI pipeline is critical to maintaining data privacy and ensuring compliance with regulations like GDPR and HIPAA.

How Confidential Computing Protects AI at Each Stage

Confidential computing provides a solution to these challenges by using trusted execution environments (TEEs) to secure data, models, and computations throughout the AI pipeline; a brief sketch after the list below shows how each stage can be gated on an attestation check.

  • Data Collection and Preprocessing: In this stage, TEEs ensure that sensitive data can be preprocessed in a secure enclave. Technologies like Intel SGX and ARM TrustZone create isolated environments where data can be cleaned, transformed, and anonymized without exposing it to unauthorized access.
  • Model Training: Confidential computing plays a critical role during AI model training, where TEEs are used to protect both the training data and the model itself. By running the training process within a secure enclave, organizations can ensure that no external party—whether malicious actors or cloud providers—can access or steal the model.
  • Inference and Deployment: After training, confidential computing ensures that the model remains protected during inference. Remote attestation allows organizations to verify that the AI model is running in a secure environment before it is deployed. This prevents data leakage during inference and ensures that the model’s predictions are based on trusted data inputs.
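
To make this concrete, here is a minimal Python sketch of the gating pattern described above: every pipeline stage refuses to run unless the reported enclave measurement passes an attestation check. The verify_environment() helper and the allow-list values are illustrative placeholders, not part of any vendor’s attestation API; in a real pipeline the check would appraise a signed attestation report (for example, an SGX quote or a SEV-SNP report).

    # Sketch: gate each AI pipeline stage on an attestation check.
    # verify_environment() and ALLOWED_MEASUREMENTS are illustrative
    # placeholders; a real check would appraise a signed attestation report.
    import functools

    ALLOWED_MEASUREMENTS = {"sha256:aaaa", "sha256:bbbb"}  # approved enclave builds

    def verify_environment(reported_measurement: str) -> bool:
        # Stand-in for a call to a real attestation verifier.
        return reported_measurement in ALLOWED_MEASUREMENTS

    def require_attested(stage):
        @functools.wraps(stage)
        def wrapper(measurement: str, *args, **kwargs):
            if not verify_environment(measurement):
                raise RuntimeError(f"{stage.__name__}: environment failed attestation")
            return stage(*args, **kwargs)
        return wrapper

    @require_attested
    def preprocess(records):
        return [r.strip().lower() for r in records]

    @require_attested
    def train(dataset):
        return {"model": "demo", "samples": len(dataset)}

    # Every stage is refused unless the enclave measurement is on the allow-list.
    data = preprocess("sha256:aaaa", [" Alice ", " Bob "])
    model = train("sha256:aaaa", data)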

Intel SGX and ARM TrustZone: Securing AI Workflows

Intel SGX and ARM TrustZone are two leading technologies that enable confidential computing in AI pipelines by securing sensitive workloads at every stage.

  • Intel SGX: Intel SGX provides hardware-based security by creating secure enclaves that isolate data and code during processing. In AI workflows, Intel SGX is used to protect data during preprocessing and model training, ensuring that sensitive data and AI models remain secure even in multi-cloud environments.
  • ARM TrustZone: ARM TrustZone enables secure computation on mobile and IoT devices, providing isolated execution environments for sensitive AI models. ARM TrustZone is particularly useful in edge computing, where AI models are deployed close to data sources, and confidentiality is critical.

Both Intel SGX and ARM TrustZone provide the infrastructure needed to implement confidential AI pipelines, from data collection and training to inference and deployment.

Real-World Use Case: Confidential AI in Healthcare

A prime example of how confidential computing secures AI pipelines is in the healthcare industry, where AI models are often used to analyze sensitive patient data. By using confidential computing, healthcare organizations can ensure that patient records are protected during model training, and predictions are made without exposing sensitive data to unauthorized access.

In this case, confidential computing helps healthcare providers comply with regulations like HIPAA, while still benefiting from the insights generated by AI models.

Confidential Computing and AI Regulations: Ensuring Compliance with GDPR and HIPAA

As AI becomes more embedded in regulated industries, maintaining compliance with data privacy laws like GDPR and HIPAA is essential. Confidential computing ensures that sensitive data and AI models are protected at every stage of the AI lifecycle, reducing the risk of data breaches or unauthorized access.

By securing both data and models, confidential computing helps organizations meet the requirements for data minimization, transparency, and consent, ensuring that AI workflows remain compliant with global regulations.

AI Pipelines with Confidential Computing

As AI workflows become more complex and data privacy concerns grow, confidential computing will play a central role in securing the AI model lifecycle. From data preprocessing to model inference, confidential computing ensures that data and AI models remain protected in trusted execution environments, enabling organizations to deploy AI securely and compliantly.

With technologies like Intel SGX and ARM TrustZone, organizations can now secure their AI pipelines at every stage, ensuring privacy, security, and regulatory compliance in industries like healthcare, finance, and national security.

Strengthening Multi-Cloud Security: The Role of COCONUT-SVSM in Confidential Virtual Machines

By Sal Kimmich

Introduction:

As businesses increasingly adopt multi-cloud environments to run their critical workloads, ensuring data security and compliance with regional privacy regulations becomes paramount. The proliferation of sensitive workloads across different cloud providers raises concerns about the safety of data, particularly in virtualized environments where virtual machines (VMs) handle vast amounts of personal and regulated data.

This is where COCONUT-SVSM (Secure Virtual Machine Service Module) shines. Designed to provide secure services and device emulations for confidential virtual machines (CVMs), COCONUT-SVSM ensures that sensitive workloads remain secure, even in distributed or potentially untrusted cloud environments. In this blog, we will explore the value of COCONUT-SVSM in safeguarding virtualized workloads, highlighting how it strengthens multi-cloud security.

Why Secure Virtual Machines Matter in Multi-Cloud Environments

Virtual machines (VMs) are a critical part of the modern cloud infrastructure, enabling organizations to efficiently allocate resources and scale their operations. However, traditional VMs are vulnerable to attacks from both external threats and privileged insiders, especially when data is processed in the cloud.

In multi-cloud environments, workloads can span multiple cloud providers, making it difficult to ensure that each environment is secure. This is where confidential computing and technologies like COCONUT-SVSM come into play. By creating confidential virtual machines (CVMs), organizations can isolate sensitive workloads from the underlying host operating system, ensuring that data remains protected, even if the host is compromised.

The Architecture of COCONUT-SVSM: Providing Security for Confidential VMs

At the heart of COCONUT-SVSM is its ability to provide secure services to CVMs through device emulations and remote attestation. These features enable organizations to run sensitive workloads with the assurance that both the data and the virtual machine environment are secure from unauthorized access.

Key features of COCONUT-SVSM include:

  • TPM Emulation: Emulating a Trusted Platform Module (TPM), COCONUT-SVSM enables secure key management and encryption within the virtual machine.
  • Secure Boot: Using UEFI variable storage, COCONUT-SVSM ensures that VMs can only boot in secure environments, preventing malicious actors from modifying the boot process.
  • Live Migration Support: In multi-cloud environments, VMs often need to be moved between physical hosts. COCONUT-SVSM supports secure live migration, ensuring that sensitive data remains protected during transitions.

These features help organizations comply with strict data privacy regulations, such as GDPR and CCPA, by maintaining control over how and where sensitive data is processed.
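
As a small illustration of how a guest might consume the emulated TPM described above, the sketch below reads PCR 7 from inside the CVM with the standard tpm2-tools CLI and compares it against an expected boot measurement. It assumes tpm2-tools is installed in the guest, that the SVSM-backed vTPM is exposed through the kernel’s TPM device, and that the expected value (a placeholder here) was recorded at provisioning time.

    # Sketch: compare the vTPM's PCR 7 against an expected boot measurement.
    # Assumes tpm2-tools is installed and the SVSM-provided vTPM is visible
    # to the guest kernel; EXPECTED_PCR7 is a placeholder value.
    import subprocess

    EXPECTED_PCR7 = "0x0"  # placeholder golden value recorded at provisioning time

    def read_pcr7() -> str:
        # "tpm2_pcrread sha256:7" prints the SHA-256 bank value of PCR 7.
        out = subprocess.run(["tpm2_pcrread", "sha256:7"],
                             capture_output=True, text=True, check=True).stdout
        # Output is roughly "sha256:\n  7 : 0x...."; take the last token.
        return out.strip().split()[-1]

    if __name__ == "__main__":
        if read_pcr7().lower() != EXPECTED_PCR7.lower():
            raise SystemExit("PCR 7 does not match the expected boot measurement")
        print("boot measurement matches; workload secrets can be unsealed")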

How COCONUT-SVSM Enhances Compliance in Multi-Cloud Systems

Compliance with data sovereignty and privacy regulations is a major challenge for organizations operating across multiple jurisdictions. For example, regulations like GDPR mandate that personal data is processed and stored within specific geographic boundaries, while ensuring that security controls are in place to prevent unauthorized access.

COCONUT-SVSM enhances compliance by ensuring that data processed in confidential virtual machines is always secured, regardless of where the data is physically located. This is particularly important for businesses with operations in multiple regions, as it allows them to securely process sensitive workloads while adhering to local regulations.

Additionally, remote attestation provided by COCONUT-SVSM ensures that workloads are only processed in trusted environments, providing an additional layer of security for organizations handling sensitive data.

Real-World Applications: COCONUT-SVSM in Healthcare and Finance

The healthcare and finance sectors are two prime examples of industries that can benefit from the enhanced security provided by COCONUT-SVSM. Both industries handle vast amounts of personal and financial data, making security and compliance critical to their operations.

  • Healthcare: In healthcare, COCONUT-SVSM can be used to protect sensitive patient data during AI-driven diagnostics or clinical trials. By creating secure environments for processing healthcare data, COCONUT-SVSM helps healthcare providers comply with regulations like HIPAA while ensuring that patient privacy is maintained.
  • Finance: In the financial sector, COCONUT-SVSM can be used to secure fraud detection models or other sensitive financial operations. By protecting virtual machines used to process financial transactions, COCONUT-SVSM helps financial institutions comply with PCI-DSS standards and other financial regulations.

COCONUT-SVSM as a Pillar of Multi-Cloud Security

As organizations continue to embrace multi-cloud strategies, the importance of securing virtualized environments cannot be overstated. COCONUT-SVSM provides the tools needed to ensure that confidential virtual machines (CVMs) remain secure and compliant, even when workloads are distributed across multiple cloud providers.

By leveraging features like TPM emulation, secure boot, and remote attestation, COCONUT-SVSM enables organizations to maintain control over their data and adhere to data sovereignty regulations, making it an essential part of any confidential computing strategy. As industries like healthcare and finance continue to handle sensitive data, COCONUT-SVSM will play a critical role in protecting workloads and ensuring compliance in multi-cloud environments.

Exploring Enclave SDKs: Enhancing Confidential Computing

Author:  Sal Kimmich

 

In the realm of confidential computing, enclave SDKs play a pivotal role in ensuring secure and private execution environments. These software development kits provide developers with the necessary tools and frameworks to build, deploy, and manage applications that operate within enclaves. In this blog, we will explore three prominent open-source enclave SDKs: Open Enclave, Keystone, and Veracruz. Additionally, we will touch upon the Certifier Framework, which, while slightly different, contributes significantly to the landscape of confidential computing.

Open Enclave

Open Enclave is a versatile SDK that provides a unified API surface for creating enclaves on various Trusted Execution Environments (TEEs) such as Intel SGX and ARM TrustZone. Developed and maintained by a broad community, Open Enclave aims to simplify the development of secure applications by offering a consistent and portable interface across different hardware platforms.

Key Features of Open Enclave:

  • Cross-Platform Support: One of the standout features of Open Enclave is its ability to support multiple hardware architectures, making it a flexible choice for developers working in diverse environments.
  • Rich Documentation and Community Support: Open Enclave boasts extensive documentation and a supportive community, providing ample resources for developers to learn and troubleshoot.
  • Comprehensive Security Measures: The SDK incorporates robust security features, including memory encryption, attestation, and secure storage, ensuring that applications remain secure and tamper-resistant.

Keystone

Keystone is an open-source framework designed to provide secure enclaves on RISC-V architecture. It is highly modular and customizable, allowing developers to tailor the security features to meet the specific needs of their applications.

Key Features of Keystone:

  • Modularity: Keystone’s design philosophy revolves around modularity, enabling developers to customize the enclave’s components, such as the security monitor, runtime, and drivers.
  • RISC-V Architecture: Keystone is built specifically for the RISC-V architecture, leveraging its open and extensible nature to offer a unique and highly configurable enclave solution.
  • Research and Innovation: Keystone is often used in academic and research settings, driving innovation in the field of confidential computing and providing a platform for experimental security enhancements.

Veracruz

Veracruz is an open-source project that aims to create a collaborative computing environment where multiple parties can jointly compute over shared data without compromising privacy. It emphasizes data confidentiality and integrity, making it ideal for scenarios involving sensitive data.

Key Features of Veracruz:

  • Collaborative Computing: Veracruz enables secure multi-party computation, allowing different stakeholders to collaborate on computations without revealing their individual data.
  • Privacy-Preserving: The framework ensures that data remains confidential throughout the computation process, leveraging TEEs to provide strong privacy guarantees.
  • Flexible Deployment: Veracruz supports various deployment models, including cloud, edge, and on-premises, making it adaptable to different use cases and environments.

Certifier Framework: A Slightly Different Approach

While the Certifier Framework for Confidential Computing shares the goal of enhancing security and privacy in computational environments, it adopts a distinct approach compared to traditional enclave SDKs.

Certifier Framework focuses on providing a unified certification and attestation infrastructure for confidential computing environments. It aims to ensure that the software and hardware components in a system can be securely attested and certified, providing trust guarantees to end-users and applications.

Key Features of the Certifier Framework:

  • Certification and Attestation: The primary focus of the Certifier Framework is on certification and attestation, ensuring that all components of a confidential computing environment meet stringent security standards.
  • Unified Approach: The framework offers a unified approach to certification across different TEEs, simplifying the process of establishing trust in diverse environments.
  • Integration with Existing Solutions: The Certifier Framework can be integrated with other enclave SDKs and confidential computing solutions, enhancing their security posture through robust certification mechanisms.

Conclusion

Enclave SDKs like Open Enclave, Keystone, and Veracruz are critical tools for developers aiming to build secure and private applications in the realm of confidential computing. Each of these projects brings unique strengths and features to the table, catering to different hardware architectures and use cases. Meanwhile, the Certifier Framework provides an essential layer of trust and certification, complementing these SDKs and ensuring that confidential computing environments meet the highest security standards. By leveraging these powerful tools, developers can create innovative solutions that protect sensitive data and maintain user privacy in an increasingly digital world.

Library OS for Confidential Computing: Enhancing Data Security with Cutting-Edge Projects

Author:  Sal Kimmich

Introduction

As the landscape of data security continues to evolve, the concept of a Library OS (operating system) for Confidential Computing is gaining traction. Library OS projects create secure environments for applications by providing “auto” enclaves for process isolation. These enclaves, also known as runtimes or sandboxes, ensure that sensitive data remains protected even during processing. In this blog, we explore the significance of Library OS for confidential computing and highlight three key projects: Gramine, Occlum, and Enarx.

What is a Library OS?

A Library OS, or “libOS,” is a streamlined operating system that runs applications within secure enclaves. These enclaves isolate processes, providing a trusted execution environment (TEE) that safeguards data from unauthorized access and tampering. This approach is particularly valuable for confidential computing, where data must remain secure throughout its lifecycle, including during computation.

Key Projects in Library OS for Confidential Computing

Gramine
  • Overview: Gramine is an open-source Library OS designed to run applications in trusted execution environments. It supports Intel SGX and enables the secure execution of unmodified applications.
  • Features: Gramine provides robust security by isolating applications within enclaves, ensuring that data remains protected even if the underlying host is compromised. Its compatibility with existing applications makes it a versatile choice for enhancing data security.
  • GitHub: Gramine Project
Occlum
  • Overview: Occlum is a memory-safe, multi-process Library OS that supports Intel SGX. It aims to provide a secure and efficient environment for running applications within enclaves.
  • Features: Occlum ensures data confidentiality and integrity by isolating processes and providing strong security guarantees. Its design focuses on performance and scalability, making it suitable for a wide range of applications.
  • GitHub: Occlum Project
Enarx
  • Overview: While not a traditional Library OS, Enarx uses WebAssembly (Wasm) to provide similar benefits. It enables the secure execution of applications in TEEs, ensuring data privacy and integrity.
  • Features: Enarx leverages Wasm to create secure runtimes that can run across different hardware platforms. Its approach simplifies the deployment of secure applications, making it a compelling option for confidential computing.
  • GitHub: Enarx Project

The Importance of Library OS in Confidential Computing

Library OS projects like Gramine, Occlum, and Enarx play a crucial role in the realm of confidential computing. They offer a layer of security that ensures sensitive data remains protected during processing. By isolating applications within secure enclaves, these projects mitigate risks associated with data breaches and unauthorized access.

Conclusion

The concept of a Library OS for confidential computing represents a significant advancement in data security. Projects like Gramine, Occlum, and Enarx demonstrate the potential of this approach to enhance privacy and protect sensitive information. As the need for secure data processing continues to grow, these projects will play an increasingly vital role in ensuring the confidentiality and integrity of data in various applications.

Stay tuned for more insights into the world of confidential computing and the innovative projects that are driving this field forward.

Partisia Joins the Confidential Computing Consortium as a Start-up Tier Member

We are pleased to welcome Partisia, a global pioneer in Multiparty Computation (MPC) and advanced cryptographic privacy, as a Start-up Tier member of the Confidential Computing Consortium (CCC). Their membership strengthens the CCC’s efforts to advance secure, privacy-preserving computing by bringing Partisia’s expertise in cutting-edge cryptographic solutions to the forefront of our initiatives.

Founded in 2008, Partisia has a long history of delivering commercial-grade MPC software solutions, with an initial focus on secure, high-stakes auctions used for trading energy and spectrum licenses. Over the years, Partisia’s MPC solutions have evolved, becoming the foundation for services including key management, data activation, statistics, and bespoke applications such as DeFi, voting, and e-cash.

Partisia’s commercial activities have also led to the creation of successful spinouts, such as Sepior, which was acquired by Blockdaemon in 2022, and the Partisia Blockchain Foundation. This Swiss-based foundation governs and launches a public blockchain built by Partisia.

By joining the Confidential Computing Consortium, Partisia aligns itself with a global community dedicated to defining and accelerating the adoption of confidential computing. This membership further solidifies Partisia’s commitment to addressing weak and single points of failure across digital infrastructures through commercializing advanced cryptographic technologies.

We eagerly anticipate the valuable contributions that Partisia will bring to the CCC and the broader tech community. As they continue to push the boundaries of secure, privacy-preserving computing, we are excited to see the innovative solutions they will develop.

Enabling Verifiable, User-Owned and Tradable AI Agents in Games – with Veriplay, Polygon, Immutable and Super Protocol

Author:  Nukri Basharuli, Founder and CEO, Super Protocol

 

 

True Web3 Games, with their potential for rich gaming experiences, advanced AI agents, and genuine digital asset ownership, can only reach their full potential through the implementation of Confidential Computing in a truly decentralized manner. The Confidential Computing Consortium, alongside its member Super Protocol, is at the forefront of this revolution, demonstrating how these technologies can unlock new business opportunities.

Super Protocol serves a dual role in the evolving digital landscape. As a confidential and self-sovereign AI Cloud, it focuses on decentralization, privacy, and verifiability. Its computing network of commonly adopted GPU and CPU types operates in a confidential mode under the orchestration of Smart Contracts on the Polygon blockchain. This makes Super a decentralized alternative to centralized clouds like Amazon AWS for Web3 AI projects. Additionally, as an AI Marketplace, Super Protocol differs from traditional AI marketplaces like Hugging Face by offering AI model and data owners the unique ability to share and monetize their assets in a fully confidential, self-sovereign mode. The value of Super Protocol is well illustrated by the examples of its clients.

In this blog, we’ll explore how Super Protocol’s AI cloud is set to transform the gaming industry, enabling a secure and self-sovereign experience that could redefine the future of digital entertainment.

Example: Veriplay 

About Veriplay 

Veriplay is a startup that is developing a gaming platform compatible with Immutable and Polygon. This platform will enable the creation of AI agents in Web3 Games that can be traded on the open market as dynamic NFTs.

The Veriplay team, with a proven track record of working with industry giants like Playrix, Warner Brothers, Google, and Crytek, is on a mission to revolutionize the gaming landscape by introducing verifiable and tradable AI agents.

This innovative approach aims to address the limitations of traditional gaming experiences and empower players with unprecedented control over their in-game assets.

Super Protocol and Veriplay research and testing efforts have utilized NVIDIA’s H100 GPUs, provided by the NVIDIA Confidential Computing team (more details in the Super Protocol press release: https://www.linkedin.com/posts/superprotocol_confidentialcomputing-depin-nvidiainception-activity-7169336537371914242-LBV3?utm_source=share&utm_medium=member_desktop).

Project Goal 

Veriplay’s goal is to give players the ability to truly own AI game agents. To do so, it aims to develop a reliable, Web3-compatible gaming platform for integrating verifiable tokenized AI agents into games with the following characteristics: 

  • Player AI Models are Protected from Unauthorized Alterations: Veriplay wants to protect player AI models from any unauthorized modifications, whether initiated by the game developer or external malicious actors. 
  • AI Model Training is Verifiable: This means that it is possible to verify how the AI agents were trained, which guarantees their fairness and transparency. 
  • Decentralization (Smart Contract Orchestration): Smart contracts will govern the execution of AI computations and data storage, ensuring transparency and immutability and eliminating the human administration layer.
  • Free trading of AI NFTs on marketplaces: Veriplay revolutionizes AI agent ownership by transforming agents into tradable digital assets managed by players through dynamic NFTs.

It is evident that without AI computation privacy, verifiability of AI model training history, and smart contract management, the integrity of AI agents as digital assets will be irrevocably compromised, shattering market trust and leading to market rejection of such assets. 

The Centralized Infrastructure Problem 

There are several problems with creating trusted AI agents in centralized infrastructure, such as Amazon Web Services (AWS) or Google Cloud: 

  • Difficulty of Verification: It is difficult to verify that AI agents have been trained and operate according to the rules of the game declared by the developer. This is especially important when AI agents become tradable assets or when they are used in competitions with prize money. 
  • Risk of Developer Manipulation: Developers have the ability to alter or duplicate an AI agent trained by a player who has invested time and money into the training process. For instance, a developer could duplicate a successful model that frequently wins competitions and sell it to other players as this developer’s original creation. 
  • Player’s Inability to Own Agents: In centralized AI agent infrastructures, players lack true ownership of their agents, being confined to developer-defined capabilities and pricing models. While creating a simple NFT for AI agent ownership partially addresses this, it falls short of true self-custodial ownership. For this type of ownership to be achieved, the AI NFT must be dynamic, linked to all AI agent components, maintain security and verifiability, and prevent human administration access – all impossible within centralized frameworks. 

To sum it up, centralized infrastructure poses significant risks that not only diminish the value of players’ time and investments in training their agents, but also severely restrict monetization opportunities for both players and game studios. 

Conversely, with the implementation of trustless AI Agents in games, both players and developers could generate additional income by trading dynamic AI NFTs on marketplaces, renting out Agents to each other, ghosting, participating in championships with cash prize funds, and more.

Super Protocol and Veriplay Solution 

In contrast to AI agents confined within centralized infrastructures and under the complete control of game developers, Web3-compatible gaming agents powered by Super Protocol and the Veriplay platform will exhibit the following advantages:

  1. Confidentiality and Sovereignty of AI Agent – players retain exclusive sovereignty over their AI agents, encompassing models, data, and computational resources, effectively eliminating the possibility of third-party manipulation. 
  • Confidential Enclave Technology: Web3-compliant AI agents are computed in confidential enclaves. Confidential enclaves operate based on the Trusted Execution Environment (TEE) technology supported by Nvidia H100 GPU chips. TEE allows creating a secure area inside the processor for safe storage, processing, and protection of confidential data. Even physical access to the server will not grant access to the applications running in the enclaves. No one except the owner of the Agent knows on which servers their data is being processed, as TEE ensures complete isolation of sensitive data and models. 
  • Access and control over the system are only granted to the smart contract and verified applications loaded into it. The computational resources used for the Agent’s operation are automatically authenticated by the protocol, ensuring the user that they are processing their model and data securely. By design, these resources cannot be tampered with or exploited maliciously. The owner alone manages the model, data, and interactions of their gaming agent. 

As a result, users can be confident that the game developer cannot alter or copy the model since they do not have access to it. 

  2. Verifiability of AI Agent training and game interactions – maintaining the verifiability of the Agent throughout the chain from the storage to the server is guaranteed by the following functions:
  • The client application and the server are mutually authenticated on a TLS connection. They exchange messages signed with a secret key. Messages contain information about the hardware, application, and its settings. 
  • After mutual authentication, the game application computing process initiates. The outcome will be signed using the enclave key, maintaining the chain of trust. This trust continuity prevents unauthorized alterations, ensuring players can rely on the computation’s result. 

Therefore, the verifiability of the AI Agent’s track record and its immutable nature reassure players that acquiring an AI Agent guarantees possession of an asset with the promised properties. Moreover, by investing in its continuous training, they can have confidence that the future market price of the Agent will accurately reflect their training endeavors. 
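
A minimal sketch of this idea is shown below: each training or game event is signed with a key held only by the enclave and linked to the hash of the previous record, so later tampering with the agent’s track record is detectable. The record fields and the HMAC-based signing are simplified assumptions for illustration, not Super Protocol’s or Veriplay’s actual implementation, which would use enclave-held asymmetric keys and on-chain anchoring.

    # Sketch: a hash-chained, enclave-signed track record for an AI agent.
    # ENCLAVE_KEY and the record fields are illustrative; the real signing key
    # would never leave the TEE.
    import hashlib
    import hmac
    import json

    ENCLAVE_KEY = b"key-held-only-inside-the-enclave"  # placeholder

    def append_record(chain, event):
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True).encode()
        chain.append({
            "prev": prev_hash,
            "event": event,
            "hash": hashlib.sha256(payload).hexdigest(),
            "sig": hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest(),
        })

    def verify_chain(chain) -> bool:
        prev_hash = "0" * 64
        for rec in chain:
            payload = json.dumps({"prev": rec["prev"], "event": rec["event"]},
                                 sort_keys=True).encode()
            if (rec["prev"] != prev_hash
                    or rec["hash"] != hashlib.sha256(payload).hexdigest()
                    or not hmac.compare_digest(
                        rec["sig"],
                        hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest())):
                return False
            prev_hash = rec["hash"]
        return True

    # Usage: append events inside the enclave, verify before pricing or trading the agent.
    history = []
    append_record(history, {"type": "training", "epochs": 10})
    append_record(history, {"type": "match", "result": "win"})
    assert verify_chain(history)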

  3. The decentralization and removal of human administration are achieved through orchestration by a smart contract system. Via smart contracts, Super Protocol entirely separates the gaming process from server and cloud owners, guaranteeing trust, flexibility and reliability.
  • Smart Contracts oversee the distribution of the system’s computing resources, assigning confidential nodes for computing tasks.
  • Supporting the necessary Service Level Agreement (SLA) and scalability is accomplished by grouping nodes into clusters and pools with automated Disaster Recovery (DR) mechanisms. 
  • Additionally, the capability to establish geo-distributed clusters with efficient local gateways is provided. 
  • The protocol also ensures secure storage through multiple network replication, encryption, and restricted access to trusted applications. 

With these features, the capacity to deploy a fault-tolerant game server, storage, and agents without dependence on specific hosts is achieved, embodying the finest decentralized cloud architecture available today. 

  4. Ownership management via NFTs and smart contracts is central to the project. The entire process of AI agents’ ownership management and the orchestration of the marketplace where they are traded is exclusively governed by the project’s smart contracts.

Each agent consists of on-chain data — an NFT with its own wallet, and data in storage, including a model and game interaction history. Any modifications to the model are made within the chain of trust, beginning with the initial record. These changes are exclusively executed through a smart contract, with each alteration recorded in the agent’s track record and the blockchain. 

Listing of an agent on the marketplace and transferring ownership is seamlessly and securely conducted through a smart contract, ensuring the safety of all participants. 

To sum it up, the deployment of a seamless and secure confidential, verifiable, decentralized computing service, combined with state and ownership management through smart contracts, creates a competitive and fair environment for AI agents across diverse activities. 

Uncompromised competition translates into value, meaning that an AI agent’s success in gaming tasks transforms it into an asset that accrues both the player’s time and money, along with the diversity and uniqueness of gaming scenarios it has encountered. 

NFT integration grants the agent autonomy, meaning that it can be traded and rented. Moreover, the agent becomes capable of possessing its own assets and making decisions independently, acting autonomously without external influence. 

The Rest of Web3 Infrastructure: Polygon and Immutable 

Alongside the Super Protocol, the Veriplay team chose Polygon and Immutable X technologies, merging these three platforms to establish a resilient Web3-compliant ecosystem for training and dynamically tokenizing AI Agents. On a high level, each solution has the following functions: 

  • Super Protocol provides the decentralized, confidential, and verifiable computing infrastructure necessary for widespread adoption of AI agents and dynamic NFTs in games. Leveraging TEE technology, Super ensures data and algorithms are safeguarded against unauthorized access, manipulation, and attacks, which is crucial for establishing trusted and transparent Web3 games.
  • Polygon offers a fast and scalable blockchain, delivering high performance and low transaction fees. This enables efficient management of AI agents and dynamic NFTs, ensuring a seamless and cost-friendly gaming experience. Moreover, Veriplay, with its focus on a multichain future, seamlessly integrates with other EVM-compatible networks through Polygon’s multichain framework. Additionally, Polygon’s compatibility with the Ethereum Virtual Machine grants Veriplay direct access to the vast capabilities of the Ethereum ecosystem. This ensures not only smooth scaling but also opens doors to a wider range of opportunities within the Web3 space. 
  • Immutable X stands out as a premier NFT platform focused on gaming, offering scalability, low fees, and developer-friendly tools. These features simplify the integration of dynamic NFTs into games, requiring minimal cost and effort. 

Conclusion 

Super Protocol marks a new era in Web3 Games development, paving the way for innovative and immersive game worlds controlled by players and built on principles of trust and transparency. 

By enabling exclusive ownership of confidential, verifiable, and transferable AI Agents through dynamic NFTs, players can fully trust the authenticity of in-game assets. Moreover, investors in the open market can confidently invest in NFT assets backed by real AI models, sought after by players for gaming applications.

Attestation Libraries for Confidential Computing: Veraison and SPDM Tools

Author:  Sal Kimmich

In the realm of confidential computing, ensuring trust and security in computing environments is paramount. Attestation libraries and tools provide essential components to build systems that can produce and verify evidence of trustworthiness. This blog explores the concept of attestation in confidential computing and highlights two significant projects within the Confidential Computing Consortium (CCC): Veraison and SPDM Tools.

What is Attestation in Confidential Computing?

Attestation is the process by which the hardware provides evidence about itself and the software running under its protection. Any other party can use this evidence to evaluate the trustworthiness of the Trusted Execution Environment. This process is critical in confidential computing to establish and maintain trust in computing environments, ensuring that sensitive data and operations are protected from unauthorized access and tampering.

Key Components of Attestation

  1. Evidence Generation:
    • The hardware (e.g., a device or CPU) generates evidence about its state, such as cryptographic measurements and signatures.
  2. Evidence Verification:
    • The verifier evaluates the provided evidence against a set of policies or reference values to determine the entity’s trustworthiness.
  3. Trust Anchors:
    • Cryptographic roots of trust (e.g., certificates) used to validate the attester’s identity and the authenticity of its evidence (see the sketch after this list).
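
Putting the three components above together, here is a minimal sketch of the flow: the attester signs its claims with a device key, and the verifier checks the signature against a trust anchor (the device’s public key) and the measurement against a reference value. It uses the third-party cryptography package for Ed25519 signatures and illustrates the general shape of appraisal, not any particular attestation scheme’s evidence format.

    # Sketch of evidence generation and verification (not a real evidence format).
    # Requires the third-party "cryptography" package for Ed25519 signatures.
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    REFERENCE_MEASUREMENT = "sha256:deadbeef"  # provisioned out of band

    # Attester side: generate evidence about the environment.
    device_key = Ed25519PrivateKey.generate()   # in reality, rooted in hardware
    trust_anchor = device_key.public_key()      # distributed to verifiers out of band
    claims = {"measurement": "sha256:deadbeef", "nonce": "abc123"}
    payload = json.dumps(claims, sort_keys=True).encode()
    evidence = {"claims": claims, "signature": device_key.sign(payload)}

    # Verifier side: appraise the evidence against policy.
    def appraise(evidence, trust_anchor) -> bool:
        payload = json.dumps(evidence["claims"], sort_keys=True).encode()
        try:
            trust_anchor.verify(evidence["signature"], payload)  # trust anchor check
        except InvalidSignature:
            return False
        return evidence["claims"]["measurement"] == REFERENCE_MEASUREMENT

    print("attestation result:", "trusted" if appraise(evidence, trust_anchor) else "untrusted")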

Veraison: A Comprehensive Attestation Verification Service

Project Veraison builds software components to facilitate the creation of an Attestation Verification Service. Here’s how Veraison operates and its significance:

Overview

  • Purpose: Veraison aims to simplify the development of attestation verification services by providing reusable software components. These components include verification and provisioning pipelines that can be extended with plugins to support specific attestation technologies.
  • Flexibility: The project’s core components are designed to adapt to various deployment environments through abstractions, allowing for custom service creation without the need for extensive bespoke development.

Key Features

  1. Verification Pipelines:
    • Core structures for verifying attestation evidence, ensuring that it meets established trust policies.
  2. Provisioning Pipelines:
    • Components that manage the provisioning of data required for evidence appraisal, sourced from authoritative sources.
  3. Extensibility:
    • Support for plugins allows the service to handle various attestation technologies, making it versatile and adaptable to different use cases.
  4. Community and Collaboration:
    • Veraison is a collaborative project with active community involvement, including regular public meetings and contributions from multiple organizations.

Use Case: Veraison in Action

Veraison provides reference implementations to demonstrate integration principles, offering a convenient basis for developing substantive attestation verification services. These reference implementations showcase how the core components and plugins work together to create a robust verification system. 

Veraison also supports REST APIs to assist in end-to-end integration with attestation schemes, or its components can be used for verification within a custom deployment. A great example of this is a key broker service, where a key is released to a Trusted Execution Environment only after successful attestation verification, as sketched below.
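
Reduced to its essentials, such a key broker makes a single policy decision: appraise the evidence, and release the workload key only on a positive result. In the sketch below, appraise_with_verifier() is a placeholder for a request to a real verification service such as a Veraison deployment; in practice the key would also be wrapped so that only the attested TEE can unwrap it.

    # Sketch of a key broker gated on attestation; the verifier call is a placeholder.
    import secrets

    WORKLOAD_KEY = secrets.token_bytes(32)  # e.g., a disk or model decryption key

    def appraise_with_verifier(evidence: dict) -> bool:
        # Placeholder for a call to an attestation verification service.
        return evidence.get("measurement") == "sha256:deadbeef"

    def release_key(evidence: dict) -> bytes:
        if not appraise_with_verifier(evidence):
            raise PermissionError("attestation failed; key not released")
        return WORKLOAD_KEY  # in practice, wrapped for the attested TEE before release

    key = release_key({"measurement": "sha256:deadbeef"})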

SPDM Tools: Enhancing Security with Attestation Protocols

SPDM (Security Protocol and Data Model) Tools offer libraries and utilities to implement the SPDM protocol, a standardized framework for secure communication and attestation between devices.

Overview

  • Purpose: SPDM Tools provide essential functionality for implementing the SPDM protocol, ensuring secure communication and attestation across various platforms.
  • Interoperability: The tools ensure interoperability between different devices and platforms, promoting a unified approach to security and attestation.

Key Features

  1. Protocol Implementation:
    • Comprehensive support for the SPDM protocol, enabling secure communication and attestation across various platforms.
  2. Utilities and Libraries:
    • A suite of tools and libraries that simplify the implementation and management of SPDM-based attestation solutions.
  3. Standardization:
    • By adhering to the SPDM standard, the tools promote consistency and reliability in attestation processes across different devices and environments.

Use Case: SPDM Tools in Secure Device Communication

SPDM Tools can establish secure communication channels between devices, ensuring that each device can verify the trustworthiness of the other before exchanging sensitive information. This capability is crucial in scenarios such as building a trusted channel between an accelerator device like a GPU and a Confidential Virtual Machine (CVM).

SPDM-RS: A Rust Implementation for SPDM Protocols

SPDM-RS is a project within the CCC that provides a Rust language implementation of the SPDM, IDE_KM, and TDISP protocols. These protocols facilitate direct device assignment for Trusted Execution Environment I/O (TEE-I/O) in Confidential Computing.

Key Features

  1. SPDM Protocol Implementation:
    • Supports various SPDM requests and responses, including version negotiation, capability negotiation, algorithm negotiation, and more.
  2. IDE_KM and TDISP Protocols:
    • Implements protocols for secure communication and device management, enhancing the trust boundary of Confidential Virtual Machines (CVMs).
  3. Cryptographic Algorithm Support:
    • Includes support for cryptographic algorithms such as SHA-256/384/512, RSA, ECDSA, AES-GCM, and ChaCha20Poly1305.
  4. Cross-Platform Support:
    • Designed to work across different platforms, ensuring broad applicability in various confidential computing scenarios.

Conclusion

Attestation libraries and tools are vital for ensuring the trustworthiness of confidential computing environments. Projects like Veraison and SPDM Tools within the Confidential Computing Consortium provide essential components for building robust attestation solutions. By leveraging these tools, developers can create systems that securely verify and manage trust, protecting sensitive data and operations from potential threats.

End-User Devices for Confidential Computing: Exploring Islet

Author:  Sal Kimmich

As technology evolves, the need for secure and confidential computing extends beyond servers and data centers to end-user devices such as smartphones, tablets, and personal computers. These devices are increasingly used to collect and process sensitive data, necessitating robust security measures to protect user privacy. One notable project within the Confidential Computing Consortium that addresses this need is Islet.

What is Confidential Computing?

Confidential computing is a security paradigm that aims to protect data in use by performing computation in a hardware-based Trusted Execution Environment (TEE). This approach ensures that sensitive data remains encrypted and secure even when being processed, mitigating the risk of unauthorized access and tampering.

The Importance of Trusted Firmware

Trusted Firmware is the cornerstone of Confidential Computing, providing the essential security features and isolation needed to establish a trusted execution environment. Unlike regular firmware, Trusted Firmware includes mechanisms for secure boot, cryptographic verification, and hardware-based isolation of secure and non-secure execution environments. To understand more on this topic, view our blog on Trusted Firmware. 

Islet: A Platform for On-Device Confidential Computing

Islet is an open-source project designed to enable Confidential Computing on ARM architecture devices using the ARMv9 Confidential Compute Architecture (CCA). Its primary objective is to provide a secure platform for on-device Confidential Computing, thereby protecting user privacy and enabling secure processing of sensitive data directly on end-user devices. Islet is implemented in the Rust programming language, and utilizes Rust’s inherent memory safety features to create a robust and secure environment.

Key Features of Islet

  1. Realm Management Monitor (RMM):
    • Operates at EL2 in the Realm world on the application processor cores.
    • Manages confidential virtual machines (VMs), known as realms, ensuring their secure execution.
    • Islet RMM complies with ARM’s specifications for platform ABIs, which enables Islet to integrate seamlessly with the ARM ecosystem, supporting the Linux and KVM patches for ARM CCA.
  2. Hardware Enforced Security (HES):
    • Performs device boot measurement and generates platform attestation reports.
    • Manages sealing key functionality within a secure hardware IP separate from the main application processor.
  3. Automated Verification:
    • Incorporates formal verification techniques to enhance the security of Islet, ensuring robustness against various attack vectors.

Use Case: Confidential Machine Learning

Islet showcases its capabilities through a confidential machine learning demo. In this scenario, a mobile device user interacts with a chat-bot application that runs on Islet. The chat-bot processes the request and communicates with an ML server through a secure channel, demonstrating end-to-end confidential computing. This use case highlights Islet’s potential in enabling secure and private machine-to-machine computing without relying on server-side intervention.

Why End-User Devices Need Confidential Computing

While traditional confidential computing solutions focus on server-side protection, securing end-user devices is equally important for several reasons:

  1. Initial Data Collection:
    • Sensitive data collection often begins at the user device level, making it crucial to protect this data from the outset.
  2. Privacy Apps:
    • As users increasingly rely on privacy-focused applications such as secure messengers, password managers, and private browsers, ensuring the confidentiality of data on these devices becomes essential.
  3. End-to-End Security:
    • By enabling confidential computing on user devices, Islet helps establish end-to-end security throughout the entire data processing path, from collection to computation.
  4. Machine-to-Machine Computing:
    • On-device confidential computing facilitates secure machine-to-machine communication, reducing the need for server intervention and enhancing overall security.

Conclusion

Confidential computing is not just for servers and data centers; it is equally critical for end-user devices. Projects like Islet within the Confidential Computing Consortium exemplify the application of Trusted Firmware principles to secure user devices. By providing a robust platform for on-device confidential computing, Islet ensures the privacy and security of sensitive data, paving the way for more secure and private user experiences.

For more information on Islet and its capabilities, visit the Islet GitHub repository.

Understanding Trusted Firmware in Confidential Computing: COCONUT-SVSM and VirTEE

Author:  Sal Kimmich

Trusted Firmware serves as the foundational layer in confidential computing, ensuring that the hardware and software environment’s security and integrity are maintained. Unlike regular firmware, Trusted Firmware is designed with additional security features and responsibilities to establish a Trusted Execution Environment (TEE). Here’s a deeper dive into what makes Trusted Firmware different and its role in confidential computing.

 

Differences Between Trusted Firmware and Regular Firmware

  1. Enhanced Security Features:
    • Regular Firmware: Primarily focuses on initializing hardware components and providing basic services to the operating system.
    • Trusted Firmware: Includes enhanced security features such as cryptographic verification of firmware components, secure boot, and mechanisms to enforce hardware-based isolation of secure and non-secure execution environments.
  2. Isolation and Trust:
    • Regular Firmware: Does not inherently provide mechanisms to isolate critical operations or sensitive data from the rest of the system.
    • Trusted Firmware: Establishes a TEE, isolating sensitive operations from the general-purpose operating system and protecting them from potential threats and unauthorized access.
  3. Responsibility and Scope:
    • Regular Firmware: Manages standard hardware initialization and operational tasks.
    • Trusted Firmware: Manages secure initialization of hardware features, authenticates and validates software components, and provides a secure execution environment for critical tasks.

Why Trusted Firmware is Necessary

Trusted Firmware is crucial for confidential computing because it provides a secure foundation that prevents unauthorized access and tampering. Here’s why Trusted Firmware is needed and how it differs from the regular OS and firmware:

Need for Trusted OS:

  • Purpose: To prevent resources from being accessed directly by the general-purpose OS running concurrently with it, for example preventing a user with root privileges from accessing sensitive resources.
  • Security: The Trusted OS operates with higher privileges and tighter security controls, ensuring that critical operations and data are protected even if the general OS is compromised.

Differences from Normal OS:

  • Size and Scope: The Trusted OS is designed to be small and secure, running with higher privileges than the general OS. For instance, in an ARMv8-A system, parts of the Trusted OS run at EL3 (highest privilege), while a hypervisor runs at EL2, and Linux at EL1.
  • Purpose: The Trusted OS is not meant to replace the general OS like Linux, which is extensive and feature-rich. Instead, it secures specific resources and operations from the general OS.

Security Provided by Trusted OS:

  • Threat Protection: It protects against attempts by users of the general OS to access resources managed by the Trusted OS, including both legitimate and illegitimate access attempts.
  • Mechanism: It uses secure mechanisms, such as the SMC instruction, to switch between the general OS and the Trusted OS when necessary to access secure resources.

Switching Between Trusted OS and Normal World:

  • Context Switching: Occurs when code running in the general OS needs to access a resource managed by the Trusted OS, such as decrypting content using a key only accessible by the Trusted OS.
  • Interrupt Handling: Hardware interrupts may also trigger a switch to the Trusted OS, allowing safe handling of interrupts within the TEE context.

Example Projects

COCONUT Secure VM Service Module (SVSM)

The COCONUT Secure VM Service Module (SVSM) exemplifies Trusted Firmware in confidential computing by providing secure services and device emulations for Confidential Virtual Machines (CVMs). Key features include:

  • Integration with AMD SEV-SNP: Utilizes AMD’s Secure Encrypted Virtualization with Secure Nested Paging, including the VM Privilege Level feature, to ensure robust hardware-based security.
  • Secure Boot and Authentication: Ensures a secure boot process and component authentication, maintaining a trusted execution path from the firmware to the CVM.

VirTEE

VirTEE is another project that demonstrates the application of Trusted Firmware principles. It focuses on:

  • Open Community Development: Collaborative development of tools for TEE bring-up, attestation, and management, supporting a wide range of virtualization platforms.
  • Support for Multiple Technologies: Includes tools and libraries for AMD SEV, SEV-SNP, and Intel TDX, providing comprehensive support for secure virtualization across different hardware platforms.

Discover more about VirTEE via their project repository. 

Conclusion

Trusted Firmware is essential for establishing and maintaining secure and reliable confidential computing environments. It provides enhanced security features, isolation, and trust mechanisms that are not present in regular firmware. Projects like COCONUT-SVSM and VirTEE illustrate the practical application of Trusted Firmware principles, showcasing robust frameworks for secure virtualized environments and cross-platform confidential computing. These projects ensure the integrity and confidentiality of sensitive data and operations, advancing the field of secure computing.