By Dayeol Lee, Research Scientist at TikTok Privacy Innovation Lab, and Mateus Guzzo, Open Source Advocate
At FOSDEM 2025, Dayeol Lee, a Research Scientist at TikTok’s Privacy Innovation Lab, introduced ManaTEE, an open-source framework designed to facilitate privacy-preserving data analytics for public research. The framework integrates Privacy-Enhancing Technologies (PETs), including confidential computing, to safeguard data privacy without compromising usability. It offers an interactive interface through JupyterLab, providing an intuitive experience for researchers and data scientists. ManaTEE leverages Trusted Execution Environments (TEEs) to ensure both data confidentiality and execution integrity, fostering trust between data owners and analysts. Additionally, it provides proof of execution through attestation, enabling researchers to demonstrate the reproducibility and integrity of their results. The framework simplifies deployment by leveraging cloud-based confidential computing backends, making secure and private data analytics accessible and scalable for diverse use cases.
The video recording of Dayeol Lee’s presentation is available for viewing here.
ManaTEE was originally developed by TikTok as a privacy solution for secure data collaboration and has been donated to the Linux Foundation’s Confidential Computing Consortium. ManaTEE is also the core privacy-preserving technology powering TikTok Research Tools, such as the TikTok Virtual Compute Environment (VCE). The framework is designed to meet the increasing need for secure data collaboration, addressing critical challenges in data privacy and security.
Private data for public interest
Private data is highly valuable to businesses, which can extract significant commercial insight from it. Its value for the public interest, however, is often overlooked. Personal or proprietary data can be combined to provide insights into public research domains such as public health, public safety, and education. For example, medical data could be combined with personal dietary data to offer insights into how personal habits impact health.
Data analytics for public interest often requires combining numerous datasets to ensure accurate insights and conclusions. Sometimes these datasets come from different sources, and several challenges stand in the way of fully combining them. Multiple data providers may have conflicting interests and enforce different privacy policies and compliance requirements. Moreover, data may be distributed across many platforms, including on-premise clusters, clouds, and data warehouses, making it hard to ensure all computations on the data are accountable and transparent.
What is ManaTEE?
To fully enable privacy-preserving data analytics for public interest, we need a standardized approach that provides strong privacy protection with technical enforcement, as well as accountability and transparency. Moreover, we need a framework that is easy to deploy and use.
We find that existing technical solutions such as differential privacy and trusted execution environments offer great properties to achieve our goals. We believe that a well-designed system could use existing techniques to offer a standardized way of private data analytics.
We decided to design and build ManaTEE, a framework that allows data owners to securely share their data for public research, with technically enforced privacy, accountability, and transparency guarantees. With the framework, researchers can gain accurate insights from private or proprietary datasets.
ManaTEE community release
The first community release of ManaTEE includes easy deployment options, a comprehensive demo tutorial, and an extensible framework ready for contributions. Future plans for ManaTEE involve expanding backend support to multi-cloud and on-prem solutions, integrating privacy-compliant data pipelines, enhancing output privacy protections, and supporting confidential GPUs for AI workloads.
For those interested in exploring ManaTEE further, the project is available on GitHub, and the community is encouraged to contribute to its development. The open governance model under the Confidential Computing Consortium aims to foster a vibrant ecosystem of contributors to enhance the project with new features, improved security, and more use cases.
By Dan Middleton, Intel Senior Principal Engineer and Chair, CCC Technical Advisory Council
The term container can be ambiguous. Here are 3 different representations of what people might mean by a container.
I’m often posed with questions about Confidential Computing and containers. Often, the question is something to the effect of, “Does Confidential Computing work with Containers?” or “Can I use Confidential Computing without redesigning my containers?” (Spoilers: Yes and Yes. But it also depends on your security and operational goals.)
The next question tends to be, “How much work will it be for me to get my containerized applications protected by Confidential Computing?” But there are a lot of variations to these questions, and it’s often not quite clear what the end goal is. Part of the confusion comes from “container” being a sort of colloquialism; it can mean a few different things depending on the context.
In Confidential Computing, we talk about the protection of data in use, in contrast with the protection of data at rest or in transit. So, if we apply the same metaphor to containers, we can see three different embodiments of what a container might mean.
In the first case, a container is simply a form of packaging, much like a Debian package or an RPM. You could think of it as a glorified zip file. It contains your application and its dependencies. This is really the only standardized definition of a container, from the OCI image spec. There aren’t many packaging considerations relevant to Confidential Computing, so this part is pretty much a no-op.
The next thing people might mean when they talk about a container is that containerized application during runtime. That container image file included an entry point which is the process that’s going to be launched. Now, that process is also pretty boring. It’s just a normal Linux process. There’s no special layer intermediating instructions like a JVM or anything like that. The thing that makes it different is that the operating system blinds the process from the rest of the system (namespacing) and can restrict its resources (cgroups). This is also referred to as sandboxing. So again, from a Confidential Computing perspective, there’s nothing different that we would do for a container process than what we would do for another process.
However, because the container image format and sandboxing have become so popular, an ecosystem has grown up around these providing orchestration. Orchestration is another term that’s used colloquially. When you want to launch a whole bunch of web applications spread across maybe a few different geographies, you don’t want to do that same task 1000 times manually. We want it to be automated. And so, I think 90% of the time, maybe 99% of the time, that people ask questions about containers and Confidential Computing, they’re wondering whether Confidential Computing is compatible with their orchestration system.
Administrative users operate a control plane which starts and stops containers inside nodes (which are often virtual machines). A Pod is a Kubernetes abstraction which has no operating system meaning – it is one or more containers each of which is a process.
One of the most popular orchestration systems is Kubernetes (K8s for short). Now, there are many distributions of K8s under different names, and there are many orchestration systems that have nothing to do with K8s. But given its popularity, let’s use K8s as an example to understand security considerations.
For our purposes, we’ll consider two K8s abstractions: the Control Plane and Nodes. The Control Plane is a collection of services that are used to send commands out to start, monitor, and stop containers across a fleet of nodes. Conventionally, a node is a virtual machine, and your containerized applications can be referred to as pods. From an operating system perspective, a pod is not a distinct abstraction. It’s sufficient for us to just think of a pod as one or more containers or equivalently one or more Linux processes. So, we have this control plane, which are a few services that help manage the containers that are launched across a fleet of virtual machines.
Now we can finally get into the Confidential Computing-related security considerations. If we were talking about adversary capabilities, the Control Plane has remote code execution, which is about as dangerous as an attacker can be. But is the Control Plane an adversary? What is it that we really want to isolate here, and what is it that we trust? There are any number of possible permutations, but they really collapse down to about four different patterns.
Four isolation patterns that recognize different trust relationships with the control plane.
In the first pattern, we want to isolate our container, and we trust nothing else. In the second case, we may have multiple containers on the same node that need to work together, so our isolation unit could be thought of as a pod, though it is more pragmatically a virtual machine. Now, in both of these cases, but especially the second, the control plane still has influence over the container and its environment, no matter how it’s isolated. To be clear, the control plane can’t directly snoop on the container in either case, but you may want to limit the amount of configuration you delegate to the control plane.
And so, in the third case, we put the whole control plane, meaning each of the control plane services, inside a Confidential Computing environment. Maybe more importantly, we operate the control plane ourselves, removing the third-party administrator entirely. It’s commonly the case, though, that companies don’t want to operate all of the K8s infrastructure by themselves, and that’s why there are managed K8s offerings from cloud service providers. And that brings us to our last case, where we decide that we trust the CSP, and we’re just going to sort of ignore the fact that the control plane has remote code execution inside what is otherwise our isolated VM for our pods or containers.
Process and VM Isolation examples with associated open source and commercial projects.
Let’s make this a little bit more concrete with some example open-source projects and commercial offerings. The only way to actually isolate a container, which means isolating a process, is with Intel® Software Guard Extensions (Intel® SGX) using an open-source project like Gramine or Occlum. So, if we come back to the question, “How much work do I have to do here?” there is at least a little bit of work because you’ll use these frameworks to repackage your application. You don’t have to rewrite your application, you don’t have to change its APIs, but you do need to use one of these projects to wrap your application in an enclave. This arguably gives you the most stringent protection because here you are only trusting your own application code and the Gramine or Occlum projects.
To the right, your next choice could be to isolate by pod. In practice, this means isolating at the granularity of a virtual machine (VM). Using an open-source project like CNCF Confidential Containers (CoCo) lets you take your existing containers and use the orchestration system to target Confidential Computing hardware. CoCo can also target Intel® SGX hardware using Occlum, but more commonly CoCo is used with VM isolation capabilities through Intel® Trust Domain Extensions (Intel® TDX), AMD Secure Encrypted Virtualization (SEV)*, Arm Confidential Computing Architecture (CCA)*, or eventually RISC-V CoVE*. And there’s a little bit of work here too. You don’t have to repackage your application, but you do need to use this enlightened orchestration system from CoCo (or a distribution like Red Hat OpenShift Sandboxed Containers*). These systems will launch each pod in a separate confidential virtual machine. They have taken pains to limit what the control plane can do and inspect, and there is a good barrier between the CVM and the control plane. However, it is a balancing act to limit the capabilities of the control plane when those capabilities are largely why you are using orchestration to begin with.
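To make the pod-isolation story concrete, here is a sketch of what targeting Confidential Computing hardware with CoCo looks like from the user’s side: an ordinary pod spec gains a `runtimeClassName`. The class name (`kata-qemu-tdx` below) and image are illustrative; the actual names available depend on how CoCo was installed on your cluster.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: confidential-app
spec:
  # The RuntimeClass tells the kubelet to launch this pod inside a
  # confidential VM instead of an ordinary sandbox. The name below is
  # illustrative and deployment-specific.
  runtimeClassName: kata-qemu-tdx
  containers:
  - name: app
    image: registry.example.com/my-app:1.0  # unchanged application image
```

Note that the application image itself is untouched; only the scheduling annotation changes, which is the sense in which CoCo requires no repackaging.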
Edgeless Systems Constellation* strikes a slightly different balance. If you don’t want to trust the control plane but you still want to use CSP infrastructure or some other untrusted data center, Constellation will run each control plane service in a confidential VM and then also launch your pods in confidential VMs. But operating K8s isn’t for everyone, so when it comes to how much work is involved, it depends on whether you operate K8s or not. If you don’t normally operate K8s, then this would be a significant increase. There are no changes that you need to make to your applications, though, and if your company is already in the business of operating its own orchestration systems, then there’s arguably no added cost or effort here.
But for those organizations who do rely on managed services from CSPs, you can make use of confidential instances in popular CSPs such as Azure Kubernetes Service (AKS)* and Google Kubernetes Engine (GKE)*. And this is generally very simple, like checkbox simple, but it comes with a caveat that you do trust the CSP’s control of your control plane. Google makes this explicit in some very nice documentation: (https://cloud.google.com/kubernetes-engine/docs/concepts/control-plane-security).
Now, which one of these four is right for your organization depends on the things that we’ve just covered, but also a few other considerations. The user that chooses container isolation generally is one that has a security-sensitive workload where a compromise of the workload has real consequences. They might also have a multiparty workload where management of that workload by any one of those parties works against the common interests of that group.
Typically, those with security-sensitive or multiparty workloads will isolate at process granularity. VM isolation can be implemented differently based on whether the control plane is trusted or not.
Users of CNCF Confidential Containers probably don’t fully trust the CSP, or they want defense in depth against the data center operator, whether that’s a CSP or their own enterprise on-prem data center. More importantly, they probably only want to deploy sensitive information or a cryptographic secret if they can assess the security state of the system. This is called Remote Attestation. Attestation is a fun topic and one of the most exciting parts of Confidential Computing, but it can be an article unto itself. So, to keep things brief, we’ll just stick with the idea that you can make an automated runtime decision about whether a system is trustworthy before deploying something sensitive.
Now let’s look at the last two personas on the right of the diagram. Users of Constellation may not trust a CSP, or they may use a multi-cloud hosting strategy where it’s more advantageous for them to operate the K8s control plane themselves anyway. For users of CSP managed K8s, the CSP does not present a risk but the user certainly wants defense in depth protections against other tenants using that same shared infrastructure. In these latter two cases, Remote Attestations may also be desired, but used in more passive ways. For example, from an auditing perspective, logging the Attestation can show compliance that an application was run with protection of data in use.
In this article, we’ve covered more than a few considerations, but certainly, each of these four patterns has more to be understood to make an informed choice when it comes to security and operational considerations. I hope that this arms you, though, with the next set of questions to go pursue that informed choice.
[Edit 3/7: Clarified control plane influence per feedback from Benny Fuhry.]
Legal Disclaimers
Intel technologies may require enabled hardware, software or service activation.
The SIMI Group, Inc. (SIMI), a pioneer in health information exchange and analytics services since 1996, continues to push boundaries in public health and healthcare informatics. By addressing critical data gaps across public health agencies, healthcare systems, community organizations, payers, pharmaceutical companies, and researchers, SIMI delivers near real-time situation awareness while prioritizing privacy. Their expertise transforms complex data into actionable insights that drive community health and wellness.
The Confidential Computing Consortium (CCC) is excited to welcome The SIMI Group, Inc. (SIMI) as a startup member. By joining the CCC, SIMI reinforces its commitment to advancing data security and driving the global adoption of trusted execution environments (TEEs). This strategic collaboration with industry leaders like Microsoft and AMD positions SIMI to meet the rigorous privacy, security, and compliance standards of healthcare and public health, while building trust among the public and community partners.
“SIMI is excited to join the CCC and collaborate with Microsoft and AMD,” said Nilesh Dadi, Director of Trusted & Predictive Analytics at SIMI. “This partnership empowers us to support healthcare systems and public health by leveraging trusted execution environments. With this technology, we enable near real-time situation awareness of vaccinations, outbreaks, and medical emergencies in a transparent and privacy-protecting manner.”
SIMI’s leadership in public health innovation stems from firsthand experience with real-world challenges. “SIMI was boots-on-the-ground from the earliest days of the COVID-19 pandemic in the United States,” said Dan Desmond, President & Chief Innovation Officer at SIMI. “The world can no longer rely on faxes, massive group phone calls, and spreadsheets to manage medical and public health emergencies. We’re working with CCC collaborators to build on our progress with the Confidential Consortium Framework, moving toward an accountable and attestable zero-trust future.”
As a CCC member, SIMI is poised to drive the adoption of secure, privacy-first technologies, shaping the future of public health and healthcare informatics through collaboration and innovation.
We are thrilled to announce that MITRE has joined the Confidential Computing Consortium (CCC), further solidifying its commitment to advancing cybersecurity innovation. As a leader in providing technical expertise to the U.S. government, MITRE will play a pivotal role in shaping the future of secure cloud computing.
A New Era of Cloud Security
With the growing migration of IT resources to the cloud, securing sensitive data has become more critical than ever. Confidential Computing represents a groundbreaking advancement in cybersecurity by enabling encryption for “data in use” and supporting hardware-bound “enclave attestation.” These capabilities reduce the cyber threat surface, offering unparalleled protection for sensitive data processed in cloud environments.
MITRE’s cybersecurity engineers regularly address the most complex and critical challenges in information systems security as they partner with the Government. By leveraging Confidential Computing, MITRE seeks to enhance cloud security while addressing uncertainties and mitigating potential new risks introduced by emerging technologies.
Through its membership in the CCC, MITRE aims to stay at the forefront of:
Understanding Emerging Use Cases: Identifying practical applications of Confidential Computing across industries and government sectors.
Evaluating Implementation Methods: Exploring best practices for adopting Confidential Computing standards and technologies.
Assessing Value Propositions: Demonstrating the tangible benefits of Confidential Computing for cloud security and operational efficiency.
Analyzing Vulnerabilities: Investigating potential risks and threats associated with emerging products, standards, and cloud services.
Driving Collaboration and Innovation
MITRE’s expertise in cybersecurity will contribute significantly to the CCC’s mission of broadening the adoption of Confidential Computing. By collaborating with industry leaders, MITRE will help establish robust standards, develop practical solutions, and ensure secure implementation methods that meet the needs of both Government and private sectors.
As Confidential Computing continues to evolve, MITRE’s involvement will enable greater innovation and confidence in cloud security, benefiting the Government and the broader technology community. Together, we can address the challenges of tomorrow and build a more secure digital landscape.
Adopting on-demand computing for sensitive and private workloads is critical in today’s interconnected and data-driven world. To achieve this, we need simple, fast, and reliable security mechanisms to protect data, code, and runtimes. The Confidential Computing Consortium’s new Messaging Guide is a comprehensive resource that explores how Confidential Computing (CC) addresses these challenges and supports organizations in securing their workloads.
Confidential Computing capabilities protect against unauthorized access and data leaks, enabling organizations to collaborate securely and comply with regulatory requirements. By encrypting data at rest, in transit, and during processing, CC technology allows sensitive workloads to move to the cloud without requiring full trust in cloud providers, including administrators and hypervisors.
This white paper outlines the motivations, use cases, and solutions made possible by Confidential Computing.
Who Should Read This Guide?
This document is tailored to meet the needs of a diverse audience, including:
Organization leaders exploring use cases and services that Confidential Computing can enable.
Organization leaders considering Confidential Computing for securing new or existing products, projects, services, and capabilities.
Regulators, standards bodies, and ecosystem members in data privacy and related fields.
The general public and mainstream media, to raise awareness of Confidential Computing and its benefits.
Why This Matters
As the demand for secure data processing grows, Confidential Computing provides a critical solution to meet the challenges of modern cloud environments. Whether enabling secure AI applications, fostering inter-organizational data collaboration, or addressing compliance needs, CC empowers organizations to innovate without compromising security.
We invite you to explore the Messaging Guide and discover how Confidential Computing can transform your approach to secure computing. Together, we can build a future where privacy and security are foundational to all digital workloads.
The Confidential Computing Consortium proudly announces the first community release of ManaTEE, an open-source framework for private data analytics. Originally developed by TikTok as a privacy solution for secure data collaboration, ManaTEE is now open-sourced and part of the Linux Foundation’s Confidential Computing Consortium.
Highlights of the Community Release:
Easy Deployment: Test ManaTEE locally with minikube; no cloud account needed.
Comprehensive Demo Tutorial: Step-by-step guidance to get started.
Extensible Framework: Refactored code with Bazel builds and a CI/CD pipeline, ready for contributions.
What’s Next?
ManaTEE is evolving with plans to:
Expand backend support to multi-cloud and on-prem solutions.
Integrate privacy-compliant data pipelines.
Enhance output privacy protections.
Support confidential GPUs for AI workloads.
A Technical Steering Committee will soon guide the project’s future.
We are excited to announce that Applied Blockchain has rejoined the Confidential Computing Consortium (CCC) as a General Member, reinforcing its longstanding commitment to advancing innovation in Confidential Computing and Trusted Execution Environment (TEE) technology. This move aligns with CCC’s mission to enhance trust and privacy in business applications and marks a continued dedication to tackling some of the most pressing challenges in digital privacy.
As one of the few organizations that are members of the Confidential Computing Consortium and the LF Decentralised Trust, Applied Blockchain stands out for its cross-domain expertise in privacy-preserving technology. This dual membership uniquely positions the company to foster collaboration and drive progress across both ecosystems, promoting secure, transparent, and trustworthy solutions for the future of technology.
Applied Blockchain’s renewed involvement builds directly on its groundbreaking work on the Silent Data platform. By integrating TEE technology with blockchain, Silent Data provides a robust solution for privacy-conscious companies.
“We are thrilled to rejoin the Confidential Computing Consortium as a General Member, reinforcing our commitment to advancing Trusted Execution Environment (TEE) technologies. Our continued work on Silent Data demonstrates how we can tackle privacy challenges, and we look forward to collaborating with CCC members to drive innovation, enhance trust, and protect sensitive data.” — Adi Ben-Ari, Founder & CEO at Applied Blockchain
Applied Blockchain focuses on safeguarding consumer and business data in critical sectors such as banking, energy trading, and supply chains. With its renewed membership, the company is positioned to make significant strides in evolving privacy-enhancing technologies, helping organizations across industries protect sensitive data while driving trust and security in their operations.
We look forward to Applied Blockchain’s continued impact as they collaborate with CCC members and help shape the future of Confidential Computing.
The rapid adoption of AI and data-driven technologies has revolutionized industries, but it has also exposed a critical tension: the need to balance robust security with explainability. Traditionally, these two priorities have been at odds. High-security systems often operate in opaque “black box” environments, while efforts to make AI systems transparent can expose vulnerabilities.
Verified Computing bridges this gap, reconciling these conflicting needs. It enables organizations to achieve strong data security while maintaining the transparency and accountability required for compliance and trust.
The Core Technologies That Make It Possible
1. Trusted Execution Environments (TEEs)
TEEs are hardware-based secure enclaves that isolate sensitive computations from the rest of the system. They protect data and processes even if the operating system or hypervisor is compromised. Examples include Intel® SGX, Intel® TDX and AMD SEV.
How They Work: TEEs operate as secure zones within a processor, where data and code are encrypted and inaccessible to external actors. For example, during a financial transaction, a TEE ensures that sensitive computations like risk assessments are performed without exposure to the broader system.
Why They Matter: They protect data “in use,” closing a crucial gap in the data lifecycle that encryption alone cannot address.
2. Remote Attestation
Remote attestation provides cryptographic proof that a TEE is genuine and operating as expected. This ensures trust in the environment, particularly in cloud or collaborative settings.
How It Works: A TEE generates an attestation report, including a cryptographic signature tied to the hardware. This report confirms the integrity of the software and hardware running within the enclave.
Why It Matters: Remote attestation reassures stakeholders that computations occur in a secure and uncompromised environment, a critical requirement in multi-tenant cloud infrastructures.
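To illustrate the attestation flow, here is a simplified Python sketch. It is not a real attestation protocol: an HMAC key stands in for the hardware-rooted signing key and vendor certificate chain, and the `generate_report`/`verify_report` helpers are invented for this example. The point is what a report binds together: a measurement of the code running in the enclave and a verifier-chosen nonce proving freshness.

```python
import hashlib
import hmac
import json

# Stand-in for the hardware root of trust. In a real TEE this is an
# asymmetric key fused into the chip, verifiable via the vendor's
# certificate chain; here a shared HMAC key plays that role.
HARDWARE_KEY = b"simulated-hardware-root-key"

def generate_report(measurement: str, nonce: str) -> dict:
    """What the TEE does: sign a report binding its code measurement
    to the verifier's freshness nonce."""
    body = json.dumps({"measurement": measurement, "nonce": nonce},
                      sort_keys=True).encode()
    sig = hmac.new(HARDWARE_KEY, body, hashlib.sha256).hexdigest()
    return {"measurement": measurement, "nonce": nonce, "signature": sig}

def verify_report(report: dict, expected_measurement: str, nonce: str) -> bool:
    """What the relying party does: check the signature, that the code
    measurement matches the expected build, and that the nonce is fresh."""
    body = json.dumps({"measurement": report["measurement"],
                       "nonce": report["nonce"]}, sort_keys=True).encode()
    expected_sig = hmac.new(HARDWARE_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(report["signature"], expected_sig)
            and report["measurement"] == expected_measurement
            and report["nonce"] == nonce)

report = generate_report("sha256:abc123", nonce="n-42")
assert verify_report(report, "sha256:abc123", "n-42")    # trusted environment
assert not verify_report(report, "sha256:evil", "n-42")  # unexpected code
```

Only after `verify_report` succeeds would the relying party release a secret into the environment, which is the "automated runtime decision" described above.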
3. Confidential Virtual Machines (VMs)
Confidential VMs extend TEE principles to virtualized environments, making secure computing scalable for complex workloads. Technologies like Intel® TDX allow organizations to isolate entire virtual machines.
How They Work: Confidential VMs use memory encryption to ensure that data remains secure during processing. Encryption keys are hardware-managed and inaccessible to the hypervisor or OS.
Why They Matter: They enable secure data processing in public clouds, even in shared infrastructures.
4. Verified Compute Frameworks
Verified Compute frameworks build on TEEs by introducing mechanisms for generating immutable logs and cryptographic proofs of computations. An example is EQTY Lab’s Verifiable Compute.
How They Work: These frameworks capture the details of computations (inputs, outputs, and environment integrity) in tamper-proof logs. These logs are cryptographically verifiable, ensuring transparency without compromising confidentiality.
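The tamper-proof logging idea can be sketched with a simple hash chain, where each entry commits to the hash of the previous one, so altering any record invalidates every later link. This is a minimal illustration of the concept, not EQTY Lab’s actual implementation; `append_entry` and `verify_log` are hypothetical helpers.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash before the first entry

def append_entry(log: list, record: dict) -> None:
    """Append a record, chaining it to the previous entry's hash so any
    later modification breaks every subsequent link."""
    prev = log[-1]["entry_hash"] if log else GENESIS
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev_hash": prev, "entry_hash": entry_hash})

def verify_log(log: list) -> bool:
    """Recompute every link; a single altered record fails verification."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        if entry["prev_hash"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

log = []
append_entry(log, {"input": "dataset-v1", "output": "model-v1"})
append_entry(log, {"input": "dataset-v2", "output": "model-v2"})
assert verify_log(log)
log[0]["record"]["output"] = "tampered"  # any edit breaks the chain
assert not verify_log(log)
```

Production frameworks add signatures from an attested TEE over each entry, tying the log not just to itself but to a verified execution environment.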
Why They Matter: They allow organizations to meet regulatory requirements and provide explainable AI outputs while safeguarding proprietary algorithms and sensitive data.
5. Homomorphic Encryption and Secure Multi-Party Computation (SMPC)
In cases where external collaboration or ultra-sensitive data handling is needed, additional cryptographic techniques enhance confidentiality.
Homomorphic Encryption: Enables computations on encrypted data without decryption.
SMPC: Distributes computations across multiple parties, ensuring that no single party has access to the complete dataset.
Why They Matter: These techniques complement TEEs by enabling secure collaboration across untrusted parties.
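As a toy illustration of SMPC, additive secret sharing splits each party’s value into random shares that sum to the secret, so each share alone reveals nothing; parties add shares locally and only the aggregate is ever reconstructed. This sketch is purely illustrative, not a production protocol (it assumes honest parties and covers only addition).

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret: int, n_parties: int) -> list:
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list) -> int:
    return sum(shares) % PRIME

# Two hospitals jointly compute a total patient count without either
# revealing its own number: each distributes shares of its count, every
# party adds the shares it holds, and only the sum is reconstructed.
a_shares = share(120, 3)
b_shares = share(75, 3)
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 195
```

Homomorphic encryption achieves a similar end by different means: the computation runs over ciphertexts on a single machine rather than over shares distributed across parties.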
How Verified Computing Bridges Security and Explainability
Achieving Transparency Without Sacrificing Security
Traditionally, efforts to make AI systems explainable required exposing internal processes or sharing sensitive data—practices that risked data breaches or model theft. Verified confidential computing changes the game by:
Allowing computations to occur in TEEs or confidential VMs, ensuring data is secure at all times.
Using verified compute frameworks to provide cryptographic evidence of computation integrity, allowing external parties to trust results without accessing sensitive details.
For example, a healthcare provider running an AI diagnostic tool can securely process patient data in a TEE. The AI’s decisions can be explained to regulators or patients using cryptographic proofs, without exposing proprietary algorithms or patient information.
As AI plays a growing role in critical sectors, trust is paramount. Verified computing ensures that stakeholders can verify:
The Security of the Data: Through TEEs and confidential VMs.
The Integrity of Computations: Via cryptographic attestation and verifiable compute frameworks.
The Explainability of Results: Through transparent logging and auditable records.
For instance, financial institutions can use verified computing to process loan applications, providing regulators with evidence of fairness and transparency without compromising customer data security.
Verified Compute
Verified Computing is more than a technological advancement—it is a paradigm shift. By integrating technologies like TEEs, remote attestation, confidential VMs, and verifiable compute frameworks, it resolves the long-standing tension between security and explainability. Organizations can now protect sensitive data, ensure compliance, and provide transparent, trustworthy AI systems.
As industries adopt this approach, verified computing will become the gold standard for secure and accountable digital transformation. Bridging these historically conflicting priorities paves the way for a future where trust in AI is not just an aspiration, but a guarantee. For more insights and resources, visit the Confidential Computing Consortium.
The Digital Operational Resilience Act (DORA), a landmark regulation from the European Union, is reshaping the landscape of information and communication technology (ICT) security for financial entities. Designed to strengthen operational resilience, DORA mandates comprehensive measures to protect ICT systems against disruptions and cyber threats, ensuring the continuity of critical financial services.
What Is DORA?
DORA establishes a unified framework for ICT risk management, oversight, and reporting for financial entities operating in the EU. The act applies to banks, insurance companies, investment firms, and other financial organizations, aiming to safeguard the stability of financial systems amid increasing cyber threats.
DORA will come into effect on January 17, 2025, requiring financial entities to meet stringent ICT security and operational resilience standards. The regulation introduces detailed requirements for ICT risk management, third-party ICT service provider oversight, and robust incident reporting mechanisms.
Why Chapter II, Section II, Article 8, Paragraph 2 Matters
One of the most critical aspects of DORA is outlined in Chapter II, Section II, Article 8, Paragraph 2, which states:
Financial entities shall design, procure and implement ICT security strategies, policies, procedures, protocols and tools that aim at, in particular, ensuring the resilience, continuity and availability of ICT systems, and maintaining high standards of security, confidentiality and integrity of data, whether at rest, in use or in transit.
This provision emphasizes a holistic approach to ICT security—ensuring that data remains secure across its entire lifecycle: while being stored, processed, or transmitted. It aligns operational resilience with data confidentiality and integrity, which are foundational for maintaining trust and mitigating systemic risks.
However, the requirement to protect data in use poses a unique challenge. Traditional security measures like encryption effectively safeguard data at rest (storage) and in transit (network transmission), but they falter when data is actively being processed. This is where Confidential Computing steps in as a game-changing solution.
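The lifecycle gap described above can be sketched in a few lines. This is a toy illustration, not real cryptography: the XOR keystream stands in for proper encryption, and the point is simply that conventional processing forces the plaintext into ordinary memory.

```python
import hashlib

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR keystream -- a stand-in for real encryption (toy only)."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

key = b"storage-key"
record = b"customer;balance=1200"

at_rest = toy_cipher(record, key)   # protected on disk
in_transit = at_rest                # protected on the wire (same ciphertext here)

# To compute on the data, the application must decrypt it. The data is now
# "in use" as plaintext in ordinary memory, visible to the host OS or
# hypervisor -- the gap that confidential computing closes.
in_use = toy_cipher(in_transit, key)
assert in_use == record
balance = int(in_use.split(b"=")[1])  # plaintext processing step
```

A TEE does not remove this decrypt-and-process step; it relocates it into hardware-isolated memory that the host cannot inspect.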
Confidential Computing: The Clear Candidate
Confidential computing enables the protection of data in use by leveraging hardware-based secure enclaves. These enclaves create an isolated environment where sensitive computations can occur, shielding them from unauthorized access—even from the host operating system or cloud provider. By ensuring the confidentiality and integrity of data in use, confidential computing directly addresses one of the most pressing gaps in traditional ICT security strategies.
Key features of confidential computing that align with DORA’s requirements include:
Enhanced Data Security: Protects sensitive computations from being exposed, even in shared cloud environments.
Resilience and Integrity: Ensures that data remains secure and free from tampering during active processing.
Regulatory Compliance: Provides a robust mechanism to meet DORA’s requirements for high security standards across the data lifecycle.
A Call to Action for Financial Entities
As the 2025 deadline approaches, financial entities must act to design and implement ICT security strategies that align with DORA’s requirements. Confidential computing, with its ability to secure data in use, is a pivotal technology for achieving compliance with Article 8, Paragraph 2.
By integrating confidential computing into their ICT security frameworks, financial institutions can not only meet regulatory mandates but also enhance their overall resilience against evolving cyber threats. Early adoption will provide a competitive edge, enabling organizations to build trust with customers, regulators, and partners in an increasingly digital and interconnected financial ecosystem.
Conclusion
DORA’s focus on ensuring ICT systems’ resilience, continuity, and security presents both a challenge and an opportunity for financial entities. By embracing confidential computing, organizations can address the critical requirements of Chapter II, Section II, Article 8, Paragraph 2, securing their data at every stage of its lifecycle. As the clock ticks toward 2025, the time to act is now.
At ACSAC 2024 (Annual Computer Security Applications Conference), the esteemed Cybersecurity Artifact Award was presented to the “Rapid Deployment of Confidential Cloud Applications with Gramine” project for its innovative approach to enhancing cloud security. The project stood out for enabling the secure deployment of confidential applications in cloud environments while ensuring the protection of sensitive data.
Introducing Gramine: A Breakthrough in Confidential Cloud Computing
The winning artifact showcases Gramine, a lightweight framework designed to facilitate the rapid deployment of confidential cloud applications. By leveraging Trusted Execution Environments (TEEs), specifically Intel SGX, Gramine provides hardware-enforced isolation of data during computation. This ensures that both data and computations remain protected from adversarial threats in the cloud.
Gramine (formerly known as Graphene) is an open-source library that allows developers to build and run applications in secure enclaves, such as Intel’s SGX, without needing to modify the application’s source code. It bridges the gap between traditional cloud computing and confidential computing, making it easier for organizations to protect sensitive workloads in multi-tenant cloud environments while maintaining the flexibility and performance of cloud-native applications.
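To give a feel for how little the application itself changes, here is a sketch of a Gramine manifest for running an unmodified Python interpreter inside an SGX enclave. Paths, sizes, and thread counts are illustrative assumptions, and key names follow recent Gramine releases, so consult the Gramine documentation for your version before use.

```toml
# Hypothetical minimal Gramine manifest sketch -- values are assumptions.
loader.entrypoint = "file:{{ gramine.libos }}"
libos.entrypoint = "/usr/bin/python3"
loader.log_level = "error"

# Host paths made visible inside the enclave's filesystem view.
fs.mounts = [
  { path = "/usr", uri = "file:/usr" },
  { path = "/lib", uri = "file:/lib" },
]

sgx.debug = false
sgx.enclave_size = "1G"    # illustrative; size to the workload
sgx.max_threads = 8

# Files whose hashes are verified before the enclave will load them.
sgx.trusted_files = [
  "file:/usr/bin/python3",
]
```

The application binary is untouched; the manifest declares which files the enclave may trust and how the enclave is sized, which is what makes lift-and-shift deployment of existing workloads practical.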
Key Features of the Winning Artifact
Confidential Computing: Gramine ensures that sensitive data is encrypted and protected even while in use, guarding it from external threats and insider attacks.
Easy Deployment: The project simplifies the complex process of setting up and configuring secure enclaves for cloud applications, making confidential computing more accessible.
Scalability and Flexibility: With support for deploying multiple applications in parallel, Gramine helps large organizations secure diverse cloud workloads efficiently.
Compatibility with Existing Applications: A major advantage of Gramine is its ability to run unmodified applications in secure enclaves, enabling seamless integration of confidential computing into existing infrastructures.
Why It Won the ACSAC Cybersecurity Artifact Award
The “Rapid Deployment of Confidential Cloud Applications with Gramine” project won first place for its innovative solution to one of the most critical challenges in cloud security: ensuring the confidentiality and integrity of sensitive data in potentially untrusted cloud environments.
As more organizations move to the cloud, the need for tools that protect confidentiality and privacy becomes increasingly urgent. Gramine provides a practical solution by enabling confidential workloads to be deployed at scale while remaining flexible enough to integrate with existing cloud-native applications. This lowers the barriers to secure cloud deployment, making confidential computing accessible to a broader range of organizations.
The Impact on Cloud Security
The success of this project highlights the growing importance of confidential computing in the battle against cloud-based cyber threats. As cloud adoption continues to rise, tools like Gramine pave the way for organizations to secure their cloud applications, safeguard sensitive data, and meet privacy regulations.
The ACSAC Cybersecurity Artifact Award positions this project as a catalyst for further innovation in cloud security and confidential computing. It offers both a technical solution and a blueprint for securely deploying sensitive workloads in a rapidly evolving cloud landscape.
For more information on the winning artifact, visit the ACSAC 2024 program page: