Exploring Enclave SDKs: Enhancing Confidential Computing


Author:  Sal Kimmich

 

In the realm of confidential computing, enclave SDKs play a pivotal role in ensuring secure and private execution environments. These software development kits provide developers with the necessary tools and frameworks to build, deploy, and manage applications that operate within enclaves. In this blog, we will explore three prominent open-source enclave SDKs: Open Enclave, Keystone, and Veracruz. Additionally, we will touch upon the Certifier Framework, which, while slightly different, contributes significantly to the landscape of confidential computing.

Open Enclave

Open Enclave is a versatile SDK that provides a unified API surface for creating enclaves on various Trusted Execution Environments (TEEs) such as Intel SGX and ARM TrustZone. Developed and maintained by a broad community, Open Enclave aims to simplify the development of secure applications by offering a consistent and portable interface across different hardware platforms.

Key Features of Open Enclave:

  • Cross-Platform Support: One of the standout features of Open Enclave is its ability to support multiple hardware architectures, making it a flexible choice for developers working in diverse environments.
  • Rich Documentation and Community Support: Open Enclave boasts extensive documentation and a supportive community, providing ample resources for developers to learn and troubleshoot.
  • Comprehensive Security Measures: The SDK incorporates robust security features, including memory encryption, attestation, and secure storage, ensuring that applications remain secure and tamper-resistant.
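
Open Enclave itself exposes a C API generated from EDL interface definitions, so the snippet below is not its real API. It is only a minimal Rust sketch, with entirely hypothetical names, of what the "unified API surface" described above buys host code: the workload is written once against a common interface, regardless of which TEE backs the enclave.

```rust
// Minimal sketch of a "unified enclave API" in the spirit of Open Enclave.
// All names here are hypothetical; the real SDK is a C API generated from
// EDL definitions.

/// A common interface that hides which TEE actually backs the enclave.
trait Enclave {
    /// Call a trusted function inside the enclave and return its result.
    fn ecall(&self, function: &str, input: &[u8]) -> Result<Vec<u8>, String>;
}

/// Stand-in for an SGX-backed enclave.
struct SgxEnclave;
/// Stand-in for a TrustZone-backed enclave.
struct TrustZoneEnclave;

impl Enclave for SgxEnclave {
    fn ecall(&self, function: &str, input: &[u8]) -> Result<Vec<u8>, String> {
        // Real code would transition into the SGX enclave here.
        Ok(format!("sgx:{function}:{} bytes", input.len()).into_bytes())
    }
}

impl Enclave for TrustZoneEnclave {
    fn ecall(&self, function: &str, input: &[u8]) -> Result<Vec<u8>, String> {
        // Real code would issue a call into the secure world here.
        Ok(format!("tz:{function}:{} bytes", input.len()).into_bytes())
    }
}

/// Host code is written once against the trait, regardless of the TEE.
fn run_workload(enclave: &dyn Enclave) {
    let out = enclave.ecall("process_secret", b"sensitive input").unwrap();
    println!("{}", String::from_utf8_lossy(&out));
}

fn main() {
    run_workload(&SgxEnclave);
    run_workload(&TrustZoneEnclave);
}
```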

Keystone

Keystone is an open-source framework designed to provide secure enclaves on RISC-V architecture. It is highly modular and customizable, allowing developers to tailor the security features to meet the specific needs of their applications.

Key Features of Keystone:

  • Modularity: Keystone’s design philosophy revolves around modularity, enabling developers to customize the enclave’s components, such as the security monitor, runtime, and drivers.
  • RISC-V Architecture: Keystone is built specifically for the RISC-V architecture, leveraging its open and extensible nature to offer a unique and highly configurable enclave solution.
  • Research and Innovation: Keystone is often used in academic and research settings, driving innovation in the field of confidential computing and providing a platform for experimental security enhancements.

Veracruz

Veracruz is an open-source project that aims to create a collaborative computing environment where multiple parties can jointly compute over shared data without compromising privacy. It emphasizes data confidentiality and integrity, making it ideal for scenarios involving sensitive data.

Key Features of Veracruz:

  • Collaborative Computing: Veracruz enables secure multi-party computation, allowing different stakeholders to collaborate on computations without revealing their individual data.
  • Privacy-Preserving: The framework ensures that data remains confidential throughout the computation process, leveraging TEEs to provide strong privacy guarantees.
  • Flexible Deployment: Veracruz supports various deployment models, including cloud, edge, and on-premises, making it adaptable to different use cases and environments.
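
To make the collaborative-computing idea concrete, here is a minimal, self-contained sketch of the pattern Veracruz enables. It is not Veracruz's actual API; the function boundary merely stands in for the TEE isolation the real project provides, and the party names and values are illustrative.

```rust
// Conceptual sketch of privacy-preserving collaborative computation.
// In Veracruz the isolation is provided by a TEE; here a function
// boundary merely stands in for it.

struct PartyInput {
    party: &'static str,
    private_value: u64, // never shown to the other parties
}

/// Stand-in for the computation that runs inside the isolated environment:
/// only the agreed-upon aggregate leaves the trusted boundary.
fn joint_computation(inputs: &[PartyInput]) -> u64 {
    inputs.iter().map(|i| i.private_value).sum::<u64>() / inputs.len() as u64
}

fn main() {
    let inputs = vec![
        PartyInput { party: "hospital-a", private_value: 120 },
        PartyInput { party: "hospital-b", private_value: 90 },
        PartyInput { party: "hospital-c", private_value: 150 },
    ];

    // Each party learns the shared result, but not each other's inputs.
    let parties: Vec<&str> = inputs.iter().map(|i| i.party).collect();
    let average = joint_computation(&inputs);
    println!("parties {:?} learn only the joint average: {average}", parties);
}
```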

Certifier Framework: A Slightly Different Approach

While the Certifier Framework for Confidential Computing shares the goal of enhancing security and privacy in computational environments, it adopts a distinct approach compared to traditional enclave SDKs.

Certifier Framework focuses on providing a unified certification and attestation infrastructure for confidential computing environments. It aims to ensure that the software and hardware components in a system can be securely attested and certified, providing trust guarantees to end-users and applications.

Key Features of the Certifier Framework:

  • Certification and Attestation: The primary focus of the Certifier Framework is on certification and attestation, ensuring that all components of a confidential computing environment meet stringent security standards.
  • Unified Approach: The framework offers a unified approach to certification across different TEEs, simplifying the process of establishing trust in diverse environments.
  • Integration with Existing Solutions: The Certifier Framework can be integrated with other enclave SDKs and confidential computing solutions, enhancing their security posture through robust certification mechanisms.

Conclusion

Enclave SDKs like Open Enclave, Keystone, and Veracruz are critical tools for developers aiming to build secure and private applications in the realm of confidential computing. Each of these projects brings unique strengths and features to the table, catering to different hardware architectures and use cases. Meanwhile, the Certifier Framework provides an essential layer of trust and certification, complementing these SDKs and ensuring that confidential computing environments meet the highest security standards. By leveraging these powerful tools, developers can create innovative solutions that protect sensitive data and maintain user privacy in an increasingly digital world.

Library OS for Confidential Computing: Enhancing Data Security with Cutting-Edge Projects


Author:  Sal Kimmich

Introduction

As the landscape of data security continues to evolve, the concept of a Library OS (operating system) for Confidential Computing is gaining traction. Library OS projects create secure environments for applications by automatically provisioning enclaves for process isolation. These enclaves, also known as runtimes or sandboxes, ensure that sensitive data remains protected even during processing. In this blog, we explore the significance of Library OS for confidential computing and highlight three key projects: Gramine, Occlum, and Enarx.

What is a Library OS?

A Library OS, or “libOS,” is a streamlined operating system that runs applications within secure enclaves. These enclaves isolate processes, providing a trusted execution environment (TEE) that safeguards data from unauthorized access and tampering. This approach is particularly valuable for confidential computing, where data must remain secure throughout its lifecycle, including during computation.

Key Projects in Library OS for Confidential Computing

Gramine
  • Overview: Gramine is an open-source Library OS designed to run applications in trusted execution environments. It supports Intel SGX and enables the secure execution of unmodified applications.
  • Features: Gramine provides robust security by isolating applications within enclaves, ensuring that data remains protected even if the underlying host is compromised. Its compatibility with existing applications makes it a versatile choice for enhancing data security.
  • GitHub: Gramine Project
Occlum
  • Overview: Occlum is a memory-safe, multi-process Library OS that supports Intel SGX. It aims to provide a secure and efficient environment for running applications within enclaves.
  • Features: Occlum ensures data confidentiality and integrity by isolating processes and providing strong security guarantees. Its design focuses on performance and scalability, making it suitable for a wide range of applications.
  • GitHub: Occlum Project
Enarx
  • Overview: While not a traditional Library OS, Enarx uses WebAssembly (Wasm) to provide similar benefits. It enables the secure execution of applications in TEEs, ensuring data privacy and integrity.
  • Features: Enarx leverages Wasm to create secure runtimes that can run across different hardware platforms. Its approach simplifies the deployment of secure applications, making it a compelling option for confidential computing.
  • GitHub: Enarx Project

The Importance of Library OS in Confidential Computing

Library OS projects like Gramine, Occlum, and Enarx play a crucial role in the realm of confidential computing. They offer a layer of security that ensures sensitive data remains protected during processing. By isolating applications within secure enclaves, these projects mitigate risks associated with data breaches and unauthorized access.

Conclusion

The concept of a Library OS for confidential computing represents a significant advancement in data security. Projects like Gramine, Occlum, and Enarx demonstrate the potential of this approach to enhance privacy and protect sensitive information. As the need for secure data processing continues to grow, these projects will play an increasingly vital role in ensuring the confidentiality and integrity of data in various applications.

Stay tuned for more insights into the world of confidential computing and the innovative projects that are driving this field forward.

Partisia Joins the Confidential Computing Consortium as a Start-up Tier Member


We are pleased to welcome Partisia, a global pioneer in Multiparty Computation (MPC) and advanced cryptographic privacy, as a Start-up Tier member of the Confidential Computing Consortium (CCC). Their membership strengthens the CCC’s efforts to advance secure, privacy-preserving computing by bringing Partisia’s expertise in cutting-edge cryptographic solutions to the forefront of our initiatives.

Founded in 2008, Partisia has a long history of delivering commercial-grade MPC software solutions, with an initial focus on secure, high-stakes auctions used for trading energy and spectrum licenses. Over the years, Partisia's MPC solutions have evolved, becoming the foundation for various services, including key management, data activation, statistics, and various bespoke applications such as DeFi, voting, and e-cash.

Partisia’s commercial activities have also led to the creation of successful spinouts, such as Sepior, which was acquired by Blockdaemon in 2022, and the Partisia Blockchain Foundation. This Swiss-based foundation governs and launches a public blockchain built by Partisia.

By joining the Confidential Computing Consortium, Partisia aligns itself with a global community dedicated to defining and accelerating the adoption of confidential computing. This membership further solidifies Partisia’s commitment to addressing weak and single points of failure across digital infrastructures through commercializing advanced cryptographic technologies.

We eagerly anticipate the valuable contributions that Partisia will bring to the CCC and the broader tech community. As they continue to push the boundaries of secure, privacy-preserving computing, we are excited to see the innovative solutions they will develop.

August Newsletter


In Today’s Issue:

  1. Executive Director August Recap
  2. Agenda Released! CC Mini Summit @ OSSEU
  3. Post-Quantum Cryptography
  4. Web3 Use Case
  5. Community Blog Highlights

Welcome to the August edition of our newsletter – your guide to awesome happenings in our CCC community. Let’s go!

Executive Director August recap

While it’s holiday season in much of the Northern Hemisphere, the CCC’s work continues (uninterrupted even by the Olympics and Paralympics!), and as we’ve grown over the past few years, we’ve made the decision to continue Governing Board meetings throughout the year, instead of breaking for the (Northern) summer period.  The Governing Board manages the strategic and policy directions of the CCC, including budgetary decisions and the acceptance of new open-source projects into the Consortium.  Attendance is open to officers of the Consortium, Premier Member representatives, and the elected Governing Board representatives of the General Members.  Representatives from other committees typically attend and present the status of work in their respective areas and sometimes the Governing Board requests reports from other groups.

While keeping within the governance structure of the Consortium, we try to maintain a "minimal viable governance" approach.  Post-Covid, and with tighter travel budgets at many organizations, opportunities to meet in person have been reduced, so we are considering a face-to-face meeting (supplemented by video conferencing) at the Linux Foundation Member Summit in November: please let us know if you're going to be there (even if you're not a Premier member!).

One of the areas the Governing Board has been keen to promote work on this year is lowering barriers to the adoption of Confidential Computing.  One of these barriers is the availability of Attestation Verification Services, which allow consumers of Confidential Computing services to obtain the cryptographic assurances they need about their workloads.  Attestation is a core part of Confidential Computing, and the word "attested" was deliberately added to the CCC's definition of Confidential Computing to reflect that:
“Confidential Computing is the protection of data in use by performing computation in a hardware-based, attested Trusted Execution Environment.”

The CCC has recently kicked off a piece of work to encourage discussion of business models around Attestation Verification Services and to help those considering providing or consuming them.  An initial discussion document has generated a great deal of input and the plan is to start a working group with online meetings later in August.  If you are interested in participating, please get in touch.

CC Mini Summit Agenda Announced!

Bringing EU Community Together

CCC is hosting the "Confidential Computing Mini Summit" at the Open Source Summit EU in Vienna, Austria

  • 📢 Mini Summit Agenda
  • ⏰ Time: 13:30 – 17:00
  • 📍 Room 0.14 (level 0) – see floor plan here
  • 🎫 Mini Summit Registration Fee: $10
  • 💰 20% Discount Code for Main Summit: OSSEUCOLOSPK20
    (*Note: Registration for the main conference is required to attend the Mini Summit.)
  • Register Here

Post-Quantum Cryptography

Over the last few weeks at TAC meetings, we've been discussing the next evolution of cryptography, Post-Quantum Cryptography (PQC). As full-scale quantum computers become more and more likely, cryptographers have had to invent new algorithms that will remain secure against adversaries with new capabilities. In Confidential Computing, we rely on cryptography in a number of ways to protect workloads in use. As a trusted execution environment (TEE) starts, we use cryptographic hash algorithms to fingerprint each component.

Later, we use cryptographic signatures when the hardware attests to those measurements. While the workload is running, the memory is protected with encryption and, in some cases, integrity provisions. Some of these algorithms are more affected by quantum computing than others. Hardware vendors will need to update their algorithms. Software vendors may want to shield downstream adopters by carefully designing their APIs. If you are interested in learning more, keep your eyes open for an upcoming blog on our Post-Quantum Cryptography discussions or watch our Tech Talk.
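
As a rough illustration of the fingerprinting step mentioned above, the sketch below chains component hashes into a single launch measurement, in the spirit of what a TEE does as it starts. It assumes the sha2 crate; a real TEE extends hardware-rooted measurement registers rather than a plain byte vector, and the component names are illustrative.

```rust
// Minimal sketch of chained component measurement, assuming the `sha2` crate.
// A real TEE extends hardware-backed measurement registers; a Vec<u8>
// stands in for that register here.
use sha2::{Digest, Sha256};

/// Extend the running measurement with the hash of one component,
/// mirroring the "measure then extend" pattern used at TEE launch.
fn extend_measurement(current: &[u8], component: &[u8]) -> Vec<u8> {
    let component_hash = Sha256::digest(component);
    let mut hasher = Sha256::new();
    hasher.update(current);
    hasher.update(&component_hash[..]);
    hasher.finalize().to_vec()
}

fn main() {
    let components: [&[u8]; 3] = [b"firmware image", b"kernel image", b"workload binary"];

    // Start from an all-zero register, as measurement registers typically do.
    let mut measurement = vec![0u8; 32];
    for component in components {
        measurement = extend_measurement(&measurement, component);
    }

    // This final value is what the hardware later signs during attestation.
    println!("launch measurement: {}", hex_string(&measurement));
}

/// Tiny hex helper so the example has no extra dependencies.
fn hex_string(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}
```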

TAC Tech Talk playlist 

Web3 Use Case

Enabling Verifiable, User-Owned and Tradable AI Agents in Games – with Veriplay, Polygon, Immutable and Super Protocol

True Web3 Games, with their potential for rich gaming experiences, advanced AI agents, and genuine digital asset ownership, can only reach their full potential through the implementation of Confidential Computing in a truly decentralized manner. The Confidential Computing Consortium, alongside its member Super Protocol, is at the forefront of this revolution, demonstrating how these technologies can unlock new business opportunities.

Read the Full Use Case

Community Blog Highlights

July Newsletter


In Today’s Issue:

  1. Executive Director July Recap
  2. The Case for Confidential Computing
  3. Community News
  4. OSS EU 2024, Confidential Computing Mini Summit

Welcome to the July edition of our newsletter – your guide to awesome happenings in our CCC community. Let’s go!

Executive Director July recap

Following the announcement of a 12-month free subscription to the CCC for new members with under 100 employees, we've had a steady stream of new members, and the list is continuously growing! If you are a start-up and would like to get involved in the CCC's work (or you know another organization that might be interested), please get in touch. You can find information about many of the benefits on our website.

This month, I went back to Asia, meeting members (and potential members) in South Korea and Singapore. The CCC sponsored the Privacy-Enhancing Technology Summit Asia-Pacific again this year, and we had a fantastic turnout. Read the full recap blog here.

Having had the CC Summit in North America and the PET Summit in Singapore, we're not about to leave out Europe, where we're seeing increasing interest and traction for Confidential Computing. I recently led a panel discussion on CC for the European Central Bank with Parviz Peiravi from Intel and Felix Schuster from Edgeless Systems. We're also running a CC Mini Summit at Open Source Summit in Vienna on 19 September. No waltzes are promised, but there are opportunities to speak: there are still a few more days to submit your talk! Mini Summit CFP

CCC’s Use Case Report is LIVE

As the collection, storage, and analysis of data become increasingly important across industries, businesses are looking for solutions that keep data secure and processes compliant with regulations. Confidential computing is one of these solutions, involving the use of a trusted execution environment that runs on shared infrastructure but processes data away from unauthorized users.

For this use case report, we interviewed members of the confidential computing community about how they have implemented the technology and where they believe it is headed.

Read the Full Report

Community News

Meet us at Open Source Summit

Bringing EU Community Together

CCC is hosting the "Confidential Computing Mini Summit" at the Open Source Summit EU in Vienna, Austria

  • ⏰ Time: 13:30 – 17:00
  • 🎫 Mini Summit Registration Fee: $10
  • 💰 20% Discount Code for Main Summit: OSSEUCOLOSPK20
    (*Note: Registration for the main conference is required to attend the Mini Summit.)
  • Register Here

Enabling Verifiable, User-Owned and Tradable AI Agents in Games – with Veriplay, Polygon, Immutable and Super Protocol


Author:  Nukri Basharuli, Founder and CEO, Super Protocol

 

 

True Web3 Games, with their potential for rich gaming experiences, advanced AI agents, and genuine digital asset ownership, can only reach their full potential through the implementation of Confidential Computing in a truly decentralized manner. The Confidential Computing Consortium, alongside its member Super Protocol, is at the forefront of this revolution, demonstrating how these technologies can unlock new business opportunities.

Super Protocol serves a dual role in the evolving digital landscape. As a confidential and self-sovereign AI Cloud, it focuses on decentralization, privacy, and verifiability. Its computing network of commonly adopted GPU and CPU types operates in a confidential mode under the orchestration of Smart Contracts on the Polygon blockchain. This makes Super a decentralized alternative to centralized clouds like Amazon AWS for Web3 AI projects. Additionally, as an AI Marketplace, Super Protocol differs from traditional AI marketplaces like Hugging Face by offering AI model and data owners the unique ability to share and monetize their assets in a fully confidential, self-sovereign mode. The value of Super Protocol is well illustrated by the examples of its clients.

In this blog, we’ll explore how Super Protocol’s AI cloud is set to transform the gaming industry, enabling a secure and self-sovereign experience that could redefine the future of digital entertainment.

Example: Veriplay 

About Veriplay 

Veriplay is a startup that is developing a gaming platform compatible with Immutable and Polygon. This platform will enable creating AI agents in Web3 Games that can be traded on the open market as dynamic NFTs.

The Veriplay team, with a proven track record of working with industry giants like Playrix, Warner Brothers, Google, and Crytek, is on a mission to revolutionize the gaming landscape by introducing verifiable and tradable AI agents.

This innovative approach aims to address the limitations of traditional gaming experiences and empower players with unprecedented control over their in-game assets.

Super Protocol and Veriplay research and testing efforts have utilized NVIDIA's H100 GPUs, provided by the NVIDIA Confidential Computing team (more details in the Super Protocol press release: https://www.linkedin.com/posts/superprotocol_confidentialcomputing-depin-nvidiainception-activity-7169336537371914242-LBV3?utm_source=share&utm_medium=member_desktop).

Project Goal 

Veriplay’s goal is to give players the ability to truly own AI game agents. To do so, it aims to develop a reliable, Web3-compatible gaming platform for integrating verifiable tokenized AI agents into games with the following characteristics: 

  • Player AI Models are Protected from Unauthorized Alterations: Veriplay wants to protect player AI models from any unauthorized modifications, whether initiated by the game developer or external malicious actors. 
  • AI Model Training is Verifiable: This means that it is possible to verify how the AI agents were trained, which guarantees their fairness and transparency. 
  • Decentralization (Smart Contract Orchestration): Smart contracts will govern the execution of AI computations and data storage, ensuring transparency and immutability, and eliminating the human administration layer.

  • Free trading of AI NFTs on marketplaces: Veriplay revolutionizes AI agent ownership by transforming them into tradable digital assets managed by players through dynamic NFTs.

It is evident that without AI computation privacy, verifiability of AI model training history, and smart contract management, the integrity of AI agents as digital assets will be irrevocably compromised, shattering market trust and leading to market rejection of such assets. 

The Centralized Infrastructure Problem 

There are several problems with creating trusted AI agents in centralized infrastructure, such as Amazon Web Services (AWS) or Google Cloud: 

  • Difficulty of Verification: It is difficult to verify that AI agents have been trained and operate according to the rules of the game declared by the developer. This is especially important when AI agents become tradable assets or when they are used in competitions with prize money. 
  • Risk of Developer Manipulation: Developers have the ability to alter or duplicate an AI agent trained by a player who has invested time and money into the training process. For instance, a developer could duplicate a successful model that frequently wins competitions and sell it to other players as this developer’s original creation. 
  • Player’s Inability to Own Agents: In centralized AI agent infrastructures, players lack true ownership of their agents, being confined to developer-defined capabilities and pricing models. While creating a simple NFT for AI agent ownership partially addresses this, it falls short of true self-custodial ownership. For this type of ownership to be achieved, the AI NFT must be dynamic, linked to all AI agent components, maintain security and verifiability, and prevent human administration access – all impossible within centralized frameworks. 

To sum it up, centralized infrastructure poses significant risks that not only diminish the value of players’ time and investments in training their agents, but also severely restrict monetization opportunities for both players and game studios. 

Conversely, with the implementation of trustless AI Agents in games, both players and developers could generate additional income by trading dynamic AI NFTs on marketplaces, renting out Agents to each other, ghosting and participating in the championships with cash prize funds, and more. 

Super Protocol and Veriplay Solution 

In contrast to AI agents confined within centralized infrastructures and under the complete control of game developers, Web3-compatible gaming agents powered by Super Protocol and the Veriplay platform will exhibit the following advantages:

  1. Confidentiality and Sovereignty of AI Agent – players retain exclusive sovereignty over their AI agents, encompassing models, data, and computational resources, effectively eliminating the possibility of third-party manipulation. 
  • Confidential Enclave Technology: Web3-compliant AI agents are computed in confidential enclaves. Confidential enclaves operate based on the Trusted Execution Environment (TEE) technology supported by Nvidia H100 GPU chips. TEE allows creating a secure area inside the processor for safe storage, processing, and protection of confidential data. Even physical access to the server will not grant access to the applications running in the enclaves. No one except the owner of the Agent knows on which servers their data is being processed, as TEE ensures complete isolation of sensitive data and models. 
  • Access and control over the system are only granted to the smart contract and verified applications loaded into it. The computational resources used for the Agent’s operation are automatically authenticated by the protocol, ensuring the user that they are processing their model and data securely. By design, these resources cannot be tampered with or exploited maliciously. The owner alone manages the model, data, and interactions of their gaming agent. 

As a result, users can be confident that the game developer cannot alter or copy the model since they do not have access to it. 

  2. Verifiability of AI Agent training and game interactions – maintaining the verifiability of the Agent throughout the chain from the storage to the server is guaranteed by the following functions: 
  • The client application and the server are mutually authenticated on a TLS connection. They exchange messages signed with a secret key. Messages contain information about the hardware, application, and its settings. 
  • After mutual authentication, the game application's computation begins. The outcome is then signed using the enclave key, maintaining the chain of trust. This trust continuity prevents unauthorized alterations, ensuring players can rely on the computation's result.

Therefore, the verifiability of the AI Agent’s track record and its immutable nature reassure players that acquiring an AI Agent guarantees possession of an asset with the promised properties. Moreover, by investing in its continuous training, they can have confidence that the future market price of the Agent will accurately reflect their training endeavors. 

  3. The decentralization and removal of human administration are achieved through orchestration by a smart contract system. Via smart contracts, Super Protocol entirely separates the gaming process from server and cloud owners, guaranteeing trust, flexibility and reliability. 
  • Smart Contracts oversee the distribution of the system’s computing resources, assigning confidential nodes for computing tasks.
  • Supporting the necessary Service Level Agreement (SLA) and scalability is accomplished by grouping nodes into clusters and pools with automated Disaster Recovery (DR) mechanisms. 
  • Additionally, the capability to establish geo-distributed clusters with efficient local gateways is provided. 
  • The protocol also ensures secure storage through multiple network replication, encryption, and restricted access to trusted applications. 

With these features, the capacity to deploy a fault-tolerant game server, storage, and agents without dependence on specific hosts is achieved, embodying the finest decentralized cloud architecture available today. 

  4. Ownership management via NFTs and smart contracts is central to the project. The entire process of AI agents’ ownership management and the orchestration of the marketplace where they are traded, is exclusively governed by the project’s smart contracts. 

Each agent consists of on-chain data — an NFT with its own wallet, and data in storage, including a model and game interaction history. Any modifications to the model are made within the chain of trust, beginning with the initial record. These changes are exclusively executed through a smart contract, with each alteration recorded in the agent’s track record and the blockchain. 

Listing of an agent on the marketplace and transferring ownership is seamlessly and securely conducted through a smart contract, ensuring the safety of all participants. 

To sum it up, the deployment of a seamless and secure confidential, verifiable, decentralized computing service, combined with state and ownership management through smart contracts, creates a competitive and fair environment for AI agents across diverse activities. 

Uncompromised competition translates into value, meaning that an AI agent’s success in gaming tasks transforms it into an asset that accrues both the player’s time and money, along with the diversity and uniqueness of gaming scenarios it has encountered. 

NFT integration grants the agent autonomy, meaning that it can be traded and rented. Moreover, the agent becomes capable of possessing its own assets and making decisions independently, acting autonomously without external influence. 
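
To ground the chain-of-trust mechanism described above, here is a minimal sketch of tagging a computation outcome with a key held only inside the enclave and checking it later. It assumes the sha2 crate, and the keyed hash is purely a stand-in for the asymmetric enclave signature a real deployment would produce; the key and outcome strings are illustrative.

```rust
// Sketch of the "sign the outcome with the enclave key" pattern.
// Assumes the `sha2` crate; a keyed SHA-256 tag stands in for the real
// asymmetric signature produced with an enclave-held key.
use sha2::{Digest, Sha256};

/// Tag a result with a key that never leaves the enclave.
fn sign_outcome(enclave_key: &[u8], outcome: &[u8]) -> Vec<u8> {
    let mut hasher = Sha256::new();
    hasher.update(enclave_key);
    hasher.update(outcome);
    hasher.finalize().to_vec()
}

/// A marketplace or player verifies the tag before trusting the outcome.
fn verify_outcome(enclave_key: &[u8], outcome: &[u8], tag: &[u8]) -> bool {
    sign_outcome(enclave_key, outcome).as_slice() == tag
}

fn main() {
    let enclave_key = b"key-provisioned-only-inside-the-tee";
    let outcome = b"agent trained: 500 episodes, win rate 0.62";

    let tag = sign_outcome(enclave_key, outcome);
    assert!(verify_outcome(enclave_key, outcome, &tag));

    // Any tampering with the recorded training history breaks verification.
    let tampered = b"agent trained: 500 episodes, win rate 0.99";
    assert!(!verify_outcome(enclave_key, tampered, &tag));
    println!("outcome verified; tampering detected");
}
```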

The Rest of Web3 Infrastructure: Polygon and Immutable 

Alongside the Super Protocol, the Veriplay team chose Polygon and Immutable X technologies, merging these three platforms to establish a resilient Web3-compliant ecosystem for training and dynamically tokenizing AI Agents. On a high level, each solution has the following functions: 

  • Super Protocol provides the decentralized and confidential verifiable computing infrastructure necessary for widespread adoption of AI agents and dynamic NFTs in games. Leveraging TEE technology, Super ensures data and algorithms are safeguarded against unauthorized access, manipulation, and attacks, crucial for establishing trusted and transparent Web3 games. 

  • Polygon offers a fast and scalable blockchain, delivering high performance and low transaction fees. This enables efficient management of AI agents and dynamic NFTs, ensuring a seamless and cost-friendly gaming experience. Moreover, Veriplay, with its focus on a multichain future, seamlessly integrates with other EVM-compatible networks through Polygon’s multichain framework. Additionally, Polygon’s compatibility with the Ethereum Virtual Machine grants Veriplay direct access to the vast capabilities of the Ethereum ecosystem. This ensures not only smooth scaling but also opens doors to a wider range of opportunities within the Web3 space. 
  • Immutable X stands out as a premier NFT platform focused on gaming, offering scalability, low fees, and developer-friendly tools. These features simplify the integration of dynamic NFTs into games, requiring minimal cost and effort. 

Conclusion 

Super Protocol marks a new era in Web3 Games development, paving the way for innovative and immersive game worlds controlled by players and built on principles of trust and transparency. 

By enabling exclusive ownership of confidential, verifiable, and transferable AI Agents through dynamic NFTs, players can fully trust the authenticity of in-game assets. Moreover, investors in the open market can confidently invest in NFT assets backed by real AI models, sought after by players for gaming applications.

Attestation Libraries for Confidential Computing: Veraison and SPDM Tools


Author:  Sal Kimmich

In the realm of confidential computing, ensuring trust and security in computing environments is paramount. Attestation libraries and tools provide essential components to build systems that can produce and verify evidence of trustworthiness. This blog explores the concept of attestation in confidential computing and highlights two significant projects within the Confidential Computing Consortium (CCC): Veraison and SPDM Tools.

What is Attestation in Confidential Computing?

Attestation is the process by which the hardware provides evidence about itself and the software running under its protection. Any other party can use this evidence to evaluate the trustworthiness of the Trusted Execution Environment. This process is critical in confidential computing to establish and maintain trust in computing environments, ensuring that sensitive data and operations are protected from unauthorized access and tampering.

Key Components of Attestation

  1. Evidence Generation:
    • The hardware (e.g., a device or CPU) generates evidence about its state, such as cryptographic measurements and signatures.
  2. Evidence Verification:
    • The verifier evaluates the provided evidence against a set of policies or reference values to determine the entity’s trustworthiness.
  3. Trust Anchors:
    • Cryptographic roots of trust (e.g., certificates) used to validate the identity of the attesting platform and the signatures on its evidence.
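
To show how these three pieces fit together, here is a minimal, self-contained verifier sketch that appraises evidence against reference values under a simple policy. The structures and field names are illustrative only and do not follow any particular attestation format.

```rust
// Illustrative verifier sketch: evidence is appraised against reference
// values under a simple policy. Structures and names are hypothetical.

/// Evidence produced by the attesting hardware (normally signed by it).
struct Evidence {
    platform: String,
    measurement: String, // hex-encoded launch measurement
    debug_enabled: bool,
}

/// Reference values and policy held by the verifier.
struct Policy {
    approved_measurements: Vec<String>,
    allow_debug: bool,
}

enum Appraisal {
    Trusted,
    Rejected(&'static str),
}

fn appraise(evidence: &Evidence, policy: &Policy) -> Appraisal {
    // In a real verifier the evidence signature would be checked first,
    // using a trust anchor such as the vendor's certificate chain.
    if evidence.debug_enabled && !policy.allow_debug {
        return Appraisal::Rejected("debug mode is not allowed");
    }
    if !policy.approved_measurements.contains(&evidence.measurement) {
        return Appraisal::Rejected("measurement not in reference values");
    }
    Appraisal::Trusted
}

fn main() {
    let policy = Policy {
        approved_measurements: vec!["a1b2c3".to_string()],
        allow_debug: false,
    };
    let evidence = Evidence {
        platform: "example-tee".to_string(),
        measurement: "a1b2c3".to_string(),
        debug_enabled: false,
    };

    match appraise(&evidence, &policy) {
        Appraisal::Trusted => println!("{}: evidence accepted", evidence.platform),
        Appraisal::Rejected(reason) => println!("rejected: {reason}"),
    }
}
```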

Veraison: A Comprehensive Attestation Verification Service

Project Veraison builds software components to facilitate the creation of an Attestation Verification Service. Here’s how Veraison operates and its significance:

Overview

  • Purpose: Veraison aims to simplify the development of attestation verification services by providing reusable software components. These components include verification and provisioning pipelines that can be extended with plugins to support specific attestation technologies.
  • Flexibility: The project’s core components are designed to adapt to various deployment environments through abstractions, allowing for custom service creation without the need for extensive bespoke development.

Key Features

  1. Verification Pipelines:
    • Core structures for verifying attestation evidence, ensuring that it meets established trust policies.
  2. Provisioning Pipelines:
    • Components that manage the provisioning of data required for evidence appraisal, sourced from authoritative sources.
  3. Extensibility:
    • Support for plugins allows the service to handle various attestation technologies, making it versatile and adaptable to different use cases.
  4. Community and Collaboration:
    • Veraison is a collaborative project with active community involvement, including regular public meetings and contributions from multiple organizations.

Use Case: Veraison in Action

Veraison provides reference implementations to demonstrate integration principles, offering a convenient basis for developing substantive attestation verification services. These reference implementations showcase how the core components and plugins work together to create a robust verification system. 

Veraison also supports REST APIs to assist in end-to-end integration with attestation schemes, or its components can be used for verification within a custom deployment. A great example of this is a key broker service where, upon successful attestation verification, a key is released to a Trusted Execution Environment; a minimal sketch of this pattern follows below.
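
In the sketch below, the verification step is a stand-in for calling an attestation verification service such as one built from Veraison's components; the names, the placeholder check, and the wrapped key are all illustrative.

```rust
// Minimal key-broker sketch: a secret is released only after attestation
// evidence verifies successfully. The verification step stands in for a
// call to an attestation verification service; names are illustrative.

struct AttestationResult {
    verified: bool,
    tee_identity: String,
}

/// Stand-in for submitting evidence to an attestation verification service.
fn verify_evidence(evidence: &[u8]) -> AttestationResult {
    AttestationResult {
        verified: !evidence.is_empty(), // placeholder check only
        tee_identity: "cvm-1234".to_string(),
    }
}

/// The broker only hands the wrapped key to a TEE whose evidence verified.
fn release_key(evidence: &[u8]) -> Option<Vec<u8>> {
    let result = verify_evidence(evidence);
    if result.verified {
        println!("releasing key to {}", result.tee_identity);
        Some(b"wrapped-data-encryption-key".to_vec())
    } else {
        None
    }
}

fn main() {
    assert!(release_key(b"signed attestation evidence").is_some());
    assert!(release_key(b"").is_none()); // no evidence, no key
}
```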

SPDM Tools: Enhancing Security with Attestation Protocols

SPDM (Security Protocol and Data Model) Tools offer libraries and utilities to implement the SPDM protocol, a standardized framework for secure communication and attestation between devices.

Overview

  • Purpose: SPDM Tools provide essential functionality for implementing the SPDM protocol, ensuring secure communication and attestation across various platforms.
  • Interoperability: The tools ensure interoperability between different devices and platforms, promoting a unified approach to security and attestation.

Key Features

  1. Protocol Implementation:
    • Comprehensive support for the SPDM protocol, enabling secure communication and attestation across various platforms.
  2. Utilities and Libraries:
    • A suite of tools and libraries that simplify the implementation and management of SPDM-based attestation solutions.
  3. Standardization:
    • By adhering to the SPDM standard, the tools promote consistency and reliability in attestation processes across different devices and environments.

Use Case: SPDM Tools in Secure Device Communication

SPDM Tools can establish secure communication channels between devices, ensuring that each device can verify the trustworthiness of the other before exchanging sensitive information. This capability is crucial in scenarios such as building a trusted channel between an accelerator device, such as a GPU, and a Confidential Virtual Machine (CVM).
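
The sketch below illustrates the mutual-verification idea in the spirit of that GPU-to-CVM scenario. It is not the SPDM wire protocol; the measurements and allow-lists are illustrative only.

```rust
// Conceptual sketch of mutual verification before a channel is used,
// in the spirit of SPDM device attestation. Not the SPDM wire protocol;
// names and checks are illustrative only.

struct Endpoint {
    name: &'static str,
    measurement: &'static str,
    trusted_peers: &'static [&'static str],
}

impl Endpoint {
    /// Accept the peer only if its reported measurement is on our allow-list.
    fn trusts(&self, peer: &Endpoint) -> bool {
        self.trusted_peers.contains(&peer.measurement)
    }
}

fn establish_channel(a: &Endpoint, b: &Endpoint) -> bool {
    // Both sides must verify the other before any sensitive data flows,
    // e.g. between a GPU and a confidential VM.
    a.trusts(b) && b.trusts(a)
}

fn main() {
    let cvm = Endpoint { name: "cvm", measurement: "cvm-fw-1", trusted_peers: &["gpu-fw-7"] };
    let gpu = Endpoint { name: "gpu", measurement: "gpu-fw-7", trusted_peers: &["cvm-fw-1"] };

    if establish_channel(&cvm, &gpu) {
        println!("{} <-> {}: secure channel established", cvm.name, gpu.name);
    }
}
```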

SPDM-RS: A Rust Implementation for SPDM Protocols

SPDM-RS is a project within the CCC that provides a Rust language implementation of the SPDM, IDE_KM, and TDISP protocols. These protocols facilitate direct device assignment for Trusted Execution Environment I/O (TEE-I/O) in Confidential Computing.

Key Features

  1. SPDM Protocol Implementation:
    • Supports various SPDM requests and responses, including version negotiation, capability negotiation, algorithm negotiation, and more.
  2. IDE_KM and TDISP Protocols:
    • Implements protocols for secure communication and device management, enhancing the trust boundary of Confidential Virtual Machines (CVMs).
  3. Cryptographic Algorithm Support:
    • Includes support for cryptographic algorithms such as SHA-256/384/512, RSA, ECDSA, AES-GCM, and ChaCha20Poly1305.
  4. Cross-Platform Support:
    • Designed to work across different platforms, ensuring broad applicability in various confidential computing scenarios.

Conclusion

Attestation libraries and tools are vital for ensuring the trustworthiness of confidential computing environments. Projects like Veraison and SPDM Tools within the Confidential Computing Consortium provide essential components for building robust attestation solutions. By leveraging these tools, developers can create systems that securely verify and manage trust, protecting sensitive data and operations from potential threats.

Fr0ntierX Joins the Confidential Computing Consortium as a Startup Member


 

August 26, 2024 – Fr0ntierX, a leader in secure AI and cybersecurity, has officially joined the Confidential Computing Consortium. This recognition, driven by Fr0ntierX’s cutting-edge Janus platform, marks a significant milestone for the company.

Janus offers a novel approach to secure AI through confidential computing. This technology ensures complete data encryption at every level, making it indispensable for industries requiring top-tier security.

Fr0ntierX’s inclusion in the Consortium underscores its commitment to advancing secure computing in collaboration with the industry’s best.

“This community is unique. Nowhere else do you have competing companies come together with a shared goal of advancing the industry together. For us, it’s an incredible opportunity to integrate Janus with new ideas, ensuring our solutions continue to meet the highest standards,” said Jonathan Begg, CEO of Fr0ntierX. 

With a team of industry experts, Ph.D.s, and strategic advisors, Fr0ntierX provides guidance and support to help businesses maximize the benefits of AI adoption while maintaining the highest standards of security and compliance.

Fr0ntierX empowers enterprises, government agencies, and academic institutions to leverage the power of AI and Large Language Models (LLMs) without compromising security. Their flagship product, Janus, features advanced encryption and robust cybersecurity – powered by confidential computing – safeguarding data from storage to processing. By eliminating master keys, Janus mitigates common threats and ensures data integrity. Unlike typical AI models, which may expose data to third parties, Janus operates within a fully isolated environment, providing a secure container for AI workflows and the compartmentalization of context data. This makes it ideal for sectors that handle sensitive information.

By joining the Confidential Computing Consortium, Fr0ntierX aims to further accelerate innovation in secure computing by collaborating with industry leaders to drive the adoption of confidential computing technologies.

End-User Devices for Confidential Computing: Exploring Islet


Author:  Sal Kimmich

As technology evolves, the need for secure and confidential computing extends beyond servers and data centers to end-user devices such as smartphones, tablets, and personal computers. These devices are increasingly used to collect and process sensitive data, necessitating robust security measures to protect user privacy. One notable project within the Confidential Computing Consortium that addresses this need is Islet.

What is Confidential Computing?

Confidential computing is a security paradigm that aims to protect data in use by performing computation in a hardware-based Trusted Execution Environment (TEE). This approach ensures that sensitive data remains encrypted and secure even when being processed, mitigating the risk of unauthorized access and tampering.

The Importance of Trusted Firmware

Trusted Firmware is the cornerstone of Confidential Computing, providing the essential security features and isolation needed to establish a trusted execution environment. Unlike regular firmware, Trusted Firmware includes mechanisms for secure boot, cryptographic verification, and hardware-based isolation of secure and non-secure execution environments. To understand more on this topic, view our blog on Trusted Firmware. 

Islet: A Platform for On-Device Confidential Computing

Islet is an open-source project designed to enable Confidential Computing on ARM architecture devices using the ARMv9 Confidential Compute Architecture (CCA). Its primary objective is to provide a secure platform for on-device Confidential Computing, thereby protecting user privacy and enabling secure processing of sensitive data directly on end-user devices. Islet is implemented in the Rust programming language, and utilizes Rust’s inherent memory safety features to create a robust and secure environment.

Key Features of Islet

  1. Realm Management Monitor (RMM):
    • Operates at EL2 in the Realm world on the application processor cores.
    • Manages confidential virtual machines (VMs), known as realms, ensuring their secure execution.
    • Islet RMM complies with ARM's specifications for platform ABIs, which enables Islet to integrate seamlessly with the ARM ecosystem, supporting the Linux and KVM patches for ARM CCA.
  2. Hardware Enforced Security (HES):
    • Performs device boot measurement and generates platform attestation reports.
    • Manages sealing key functionality within a secure hardware IP separate from the main application processor.
  3. Automated Verification:
    • Incorporates formal verification techniques to enhance the security of Islet, ensuring robustness against various attack vectors.

Use Case: Confidential Machine Learning

Islet showcases its capabilities through a confidential machine learning demo. In this scenario, a mobile device user interacts with a chat-bot application that runs on Islet. The chat-bot processes the request and communicates with an ML server through a secure channel, demonstrating end-to-end confidential computing. This use case highlights Islet’s potential in enabling secure and private machine-to-machine computing without relying on server-side intervention.

Why End-User Devices Need Confidential Computing

While traditional confidential computing solutions focus on server-side protection, securing end-user devices is equally important for several reasons:

  1. Initial Data Collection:
    • Sensitive data collection often begins at the user device level, making it crucial to protect this data from the outset.
  2. Privacy Apps:
    • As users increasingly rely on privacy-focused applications such as secure messengers, password managers, and private browsers, ensuring the confidentiality of data on these devices becomes essential.
  3. End-to-End Security:
    • By enabling confidential computing on user devices, Islet helps establish end-to-end security throughout the entire data processing path, from collection to computation.
  4. Machine-to-Machine Computing:
    • On-device confidential computing facilitates secure machine-to-machine communication, reducing the need for server intervention and enhancing overall security.

Conclusion

Confidential computing is not just for servers and data centers; it is equally critical for end-user devices. Projects like Islet within the Confidential Computing Consortium exemplify the application of Trusted Firmware principles to secure user devices. By providing a robust platform for on-device confidential computing, Islet ensures the privacy and security of sensitive data, paving the way for more secure and private user experiences.

For more information on Islet and its capabilities, visit the Islet GitHub repository.

Understanding Trusted Firmware in Confidential Computing: Coconut SVSM and VirTEE 


Author:  Sal Kimmich

Trusted Firmware serves as the foundational layer in confidential computing, ensuring that the hardware and software environment’s security and integrity are maintained. Unlike regular firmware, Trusted Firmware is designed with additional security features and responsibilities to establish a Trusted Execution Environment (TEE). Here’s a deeper dive into what makes Trusted Firmware different and its role in confidential computing.

 

Differences Between Trusted Firmware and Regular Firmware

  1. Enhanced Security Features:
    • Regular Firmware: Primarily focuses on initializing hardware components and providing basic services to the operating system.
    • Trusted Firmware: Includes enhanced security features such as cryptographic verification of firmware components, secure boot, and mechanisms to enforce hardware-based isolation of secure and non-secure execution environments.
  2. Isolation and Trust:
    • Regular Firmware: Does not inherently provide mechanisms to isolate critical operations or sensitive data from the rest of the system.
    • Trusted Firmware: Establishes a TEE, isolating sensitive operations from the general-purpose operating system and protecting them from potential threats and unauthorized access.
  3. Responsibility and Scope:
    • Regular Firmware: Manages standard hardware initialization and operational tasks.
    • Trusted Firmware: Manages secure initialization of hardware features, authenticates and validates software components, and provides a secure execution environment for critical tasks.

Why Trusted Firmware is Necessary

Trusted Firmware is crucial for confidential computing because it provides a secure foundation that prevents unauthorized access and tampering. Here’s why Trusted Firmware is needed and how it differs from the regular OS and firmware:

Need for Trusted OS:

  • Purpose: To prevent resources from being accessed directly by the general-purpose OS running concurrently with it, such as preventing a user with root privileges from accessing sensitive resources.
  • Security: The Trusted OS operates with higher privileges and tighter security controls, ensuring that critical operations and data are protected even if the general OS is compromised.

Differences from Normal OS:

  • Size and Scope: The Trusted OS is designed to be small and secure, running with higher privileges than the general OS. For instance, in an ARMv8-A system, parts of the Trusted OS run at EL3 (highest privilege), while a hypervisor runs at EL2, and Linux at EL1.
  • Purpose: The Trusted OS is not meant to replace the general OS like Linux, which is extensive and feature-rich. Instead, it secures specific resources and operations from the general OS.

Security Provided by Trusted OS:

  • Threat Protection: It protects against attempts by users of the general OS to access resources managed by the Trusted OS, including both legitimate and illegitimate access attempts.
  • Mechanism: It uses secure mechanisms, such as the SMC instruction, to switch between the general OS and the Trusted OS when necessary to access secure resources.

Switching Between Trusted OS and Normal World:

  • Context Switching: Occurs when code running in the general OS needs to access a resource managed by the Trusted OS, such as decrypting content using a key only accessible by the Trusted OS.
  • Interrupt Handling: Hardware interrupts may also trigger a switch to the Trusted OS, allowing safe handling of interrupts within the TEE context.
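
As a purely illustrative model of that switch, the sketch below shows normal-world code asking a trusted service to use a key it can never read directly. A real system performs this transition with the SMC instruction at a higher exception level rather than an ordinary function call, and the toy "decryption" is only there to show that the key is used without being exposed.

```rust
// Purely illustrative model of a normal-world / trusted-OS split.
// Real systems switch worlds with the SMC instruction at a higher
// exception level; a function boundary stands in for that here.

/// State that only the trusted side can touch.
struct TrustedOs {
    content_key: Vec<u8>, // never exposed to the normal world
}

enum SmcRequest<'a> {
    DecryptContent(&'a [u8]),
}

impl TrustedOs {
    /// Stand-in for the secure-monitor call handler.
    fn handle(&self, request: SmcRequest) -> Vec<u8> {
        match request {
            SmcRequest::DecryptContent(ciphertext) => {
                // Toy "decryption": XOR with the key, showing that the key
                // is used without ever being returned to the caller.
                ciphertext
                    .iter()
                    .zip(self.content_key.iter().cycle())
                    .map(|(c, k)| c ^ k)
                    .collect()
            }
        }
    }
}

fn main() {
    let trusted = TrustedOs { content_key: b"secret-key".to_vec() };

    // Normal-world code: it receives the result of the operation,
    // but never the key itself.
    let ciphertext = vec![0x10, 0x2a, 0x3f, 0x44];
    let plaintext = trusted.handle(SmcRequest::DecryptContent(&ciphertext));
    println!("decrypted {} bytes in the trusted world", plaintext.len());
}
```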

Example Projects

COCONUT Secure VM Service Module (SVSM)

The COCONUT Secure VM Service Module (SVSM) exemplifies Trusted Firmware in confidential computing by providing secure services and device emulations for Confidential Virtual Machines (CVMs). Key features include:

  • Integration with AMD SEV-SNP: Utilizes AMD’s Secure Encrypted Virtualization with Secure Nested Paging, including the VM Privilege Level feature, to ensure robust hardware-based security.
  • Secure Boot and Authentication: Ensures a secure boot process and component authentication, maintaining a trusted execution path from the firmware to the CVM.

VirTEE

VirTEE is another project that demonstrates the application of Trusted Firmware principles. It focuses on:

  • Open Community Development: Collaborative development of tools for TEE bring-up, attestation, and management, supporting a wide range of virtualization platforms.
  • Support for Multiple Technologies: Includes tools and libraries for AMD SEV, SEV-SNP, and Intel TDX, providing comprehensive support for secure virtualization across different hardware platforms.

Discover more about VirTEE via their project repository. 

Conclusion

Trusted Firmware is essential for establishing and maintaining secure and reliable confidential computing environments. It provides enhanced security features, isolation, and trust mechanisms that are not present in regular firmware. Projects like COCONUT-SVSM and VirTEE illustrate the practical application of Trusted Firmware principles, showcasing robust frameworks for secure virtualized environments and cross-platform confidential computing. These projects ensure the integrity and confidentiality of sensitive data and operations, advancing the field of secure computing.