
The Evolution of Cybersecurity: From Early Threats to Modern Challenges


Authored by Sal Kimmich

As we continue our journey through the world of Confidential Computing, it’s essential to understand the backdrop against which this technology has emerged. This week, we delve into the evolution of cybersecurity, tracing its journey from the early days of computing to the sophisticated landscape we navigate today.

The Early Days of Cybersecurity

Cybersecurity, in its infancy, was a game of cat and mouse between emerging technologies and the threats that shadowed them. The earliest computers, massive and isolated, faced minimal security concerns. However, as technology advanced and computers became interconnected, the need for robust cybersecurity measures became apparent.

The Birth of Computer Viruses and Antivirus Software

The 1980s marked a significant turning point with the advent of the first computer viruses. Among these early threats was the Brain virus, which led to the creation of the first antivirus software in 1987. This was a pivotal moment, signaling the start of an ongoing battle against cyber threats.

The Internet Era and Its Challenges

The explosion of the internet in the 1990s and early 2000s brought cybersecurity to the forefront. The connectivity that empowered businesses and individuals also opened up new vulnerabilities. Viruses, worms, and later, sophisticated malware, posed significant risks, leading to the development of more advanced cybersecurity solutions.

The Rise of Cybercrime

As technology continued to evolve, so did the nature of threats. Cybercrime became a lucrative business, with hackers targeting not just computers but entire networks. Data breaches, identity theft, and ransomware attacks became common, causing significant financial and reputational damage to individuals and organizations.

The Current Landscape: A Complex Battlefield

Today, cybersecurity is an intricate field, encompassing everything from endpoint security to network defenses, and now, Confidential Computing. The threats have become more sophisticated, leveraging AI and machine learning, making proactive and advanced defense mechanisms essential.

Confidential Computing: A New Frontier in Cybersecurity

This brings us to Confidential Computing – a response to the modern need for enhanced data protection. As we’ve seen, cybersecurity is no longer just about preventing unauthorized access; it’s about ensuring data integrity and confidentiality at every stage, including during processing.

Looking Ahead

The evolution of cybersecurity is a testament to the ever-changing landscape of technology. As we continue to innovate, so too will the methods to protect our digital assets. Confidential Computing is part of this ongoing evolution, representing the next step in securing our digital future.

A Fun Reminder of Our Journey

Reflecting on this evolution, it’s fascinating to think that the journey from the Brain virus to today’s sophisticated cyber threats led to the birth of an entire industry. The first antivirus software in 1987 was just the beginning of what has become a critical and ever-evolving field.

Stay Tuned

Next week, we’ll dive deeper into the world of Trusted Execution Environments (TEEs), a cornerstone of Confidential Computing. Join us as we explore how TEEs provide a secure space for data processing, marking a significant advancement in our quest for cybersecurity.

Explore the four-part series on Confidential Computing—a vital innovation for data privacy and security. Dive in now!

Part I – Introduction to Confidential Computing: A Year-Long Exploration

Part III – Basics of Trusted Execution Environments (TEEs): The Heart of Confidential Computing

Part IV – Collaborative Security: The Role of Open Source in Confidential Computing

Introduction to Confidential Computing: A Year-Long Exploration


Authored by Sal Kimmich

Welcome to the first post in the Confidential Computing Consortium blog series, created to help new members navigate the transformative landscape of Confidential Computing, a crucial advancement in safeguarding data privacy and security.

What is Confidential Computing?

Confidential Computing is a cutting-edge approach that protects data in use by encrypting it within Trusted Execution Environments (TEEs). These secure areas of a processor ensure data is inaccessible to other applications, the operating system, and even cloud providers, safeguarding sensitive information from unauthorized access or leaks during processing. This technology is foundational in addressing the critical challenge of protecting data throughout its lifecycle, offering a new dimension of security for our digital world.
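The core promise here is easiest to see in miniature. The Python sketch below is purely illustrative (it models no real TEE API; all names are hypothetical): a workload's code is "measured" by hashing it, and a secret is released only if the reported measurement matches a known-good value. Real TEEs perform this in hardware and firmware with signed attestation reports.

```python
import hashlib
import hmac

# Purely illustrative model of the idea behind Confidential Computing's
# attestation step: a TEE "measures" the code it loads (here, just a hash),
# and a relying party releases a secret only if the reported measurement
# matches a known-good value. Real TEEs (e.g. SGX, SEV-SNP, TDX) do this
# with hardware-backed, signed attestation reports, not a bare hash.

def measure(code: bytes) -> str:
    """Simulate a launch measurement: a digest of the loaded workload."""
    return hashlib.sha256(code).hexdigest()

def release_secret(reported: str, expected: str, secret: str):
    """Release the secret only when measurements match (constant-time compare)."""
    return secret if hmac.compare_digest(reported, expected) else None

workload = b"print('hello from the workload')"
expected = measure(workload)

assert release_secret(measure(workload), expected, "api-key") == "api-key"
assert release_secret(measure(b"tampered code"), expected, "api-key") is None
```

The point of the sketch is the gating: nothing sensitive is handed over until the environment proves it is running exactly the code the relying party expects.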

The Significance

In an era where data privacy concerns are paramount, Confidential Computing emerges as a vital solution. It enables businesses and individuals to compute with confidence, knowing their data remains secure and private, even in shared infrastructure environments. This technology fosters trust and facilitates secure data collaboration, unlocking new possibilities in cloud computing and beyond.

Our Journey Ahead

This blog series will explore these topics (and many more!):

1. The Evolution of Confidential Computing

2. Insights into Trusted Execution Environments (TEEs)

3. The Vital Role of Open Source in Confidential Computing

We’ll examine its transformative impact across industries, its pivotal role in emerging technologies, and how it underpins secure, data-driven innovations. This exploration is designed for tech enthusiasts, industry professionals, and anyone curious about the next frontier in digital security.

Learn more with Special Interest Groups (SIGs)

The Confidential Computing Consortium (CCC) champions this technology through collaborative efforts, including Special Interest Groups (SIGs). These SIGs are integral to the consortium's work, and their meetings are open to everyone, emphasizing the consortium's commitment to inclusivity and collaboration. There's no membership requirement to join these discussions, making them an excellent opportunity for anyone interested in contributing to or learning more about confidential computing.

Be Part of the Movement

By joining our journey, you become a part of a community dedicated to advancing confidential computing. This series promises to deepen your understanding and provide resources that can be easily shared for collaborative efforts driving this technology forward.

Stay tuned as we reveal the fascinating world of Confidential Computing and its critical role in privacy-enhancing technologies. If there is a topic you would love us to cover in this series, we’d love to hear from you! Reach out to skimmich@contractor.linuxfoundation.org

Explore the four-part series on Confidential Computing—a vital innovation for data privacy and security. Dive in now!

Part II – The Evolution of Cybersecurity: From Early Threats to Modern Challenges

Part III – Basics of Trusted Execution Environments (TEEs): The Heart of Confidential Computing

Part IV – Collaborative Security: The Role of Open Source in Confidential Computing

Highlights from the Confidential Computing DevRoom at FOSDEM


By Sal Kimmich

The Confidential Computing DevRoom at FOSDEM brought together experts and enthusiasts to discuss and demystify the rapidly evolving field of Confidential Computing. The event was a melting pot of ideas, showcasing the latest advancements, practical applications, and the future direction of this technology.

Kickoff: Unveiling the Essence of Confidential Computing

The DevRoom opened with Fritz Alder, Jo Van Bulck, and Fabiano Fidencio welcoming attendees and setting the stage for the day’s discussions. They emphasized the importance of adhering to the Confidential Computing Consortium (CCC) definition, highlighting key properties such as data confidentiality, integrity, and code integrity. The conversation also touched on contextual properties like code confidentiality, authenticated launch, and attestability, underscoring the diversity in application needs and security requirements.

Intel TDX: A Leap Towards VM Isolation

Dr. Benny Fuhry took the stage for a deep dive into Intel Trust Domain Extensions (TDX), presenting it as a groundbreaking approach to VM isolation. Intel TDX stands out by ensuring that each trust domain is encrypted with a unique key, a move aimed at mitigating Virtual Machine Monitor (VMM) attacks. With general availability announced alongside the 5th Gen Intel Xeon Scalable processors, Intel TDX is set to revolutionize memory confidentiality, integrity, and key management.

Watch this talk. 

SGX-STEP: Enhancing Side-Channel Attack Resolution

The SGX-STEP presentation from Luca Wilke spotlighted innovative techniques for studying side-channel attacks, which remain a concern in the realm of Confidential Computing. Through detailed explanations of single stepping, interrupt counting, and amplification, the speaker shed light on improving temporal resolution for side-channel attacks, presenting a clear path toward more secure environments that could be used in Confidential Computing and beyond.

Watch this talk. 

Database Security: Bridging Confidential Computing and Data Storage

Ilaria Battiston and Lotte Felius delved into the integration of confidential computing with database systems, presenting their research on secure databases. They discussed the performance overhead of utilizing SGX with SQLite and PostgreSQL, emphasizing the trade-offs between security and efficiency with preliminary results. Their work on minimizing performance impacts through vectorized processing inside secure enclaves provided valuable insights for developers aiming to secure database operations.

Watch this talk. 

Ups and Downs of Running Enclaves in Production

Evervault’s presentation from Cian Butler highlighted their innovative solutions for data security and compliance, focusing on encryption proxies and secure serverless functions. They discussed the challenges of monitoring and observability within AWS Nitro enclaves, showcasing their efforts to enhance reliability and performance in secure computing environments.

Watch this talk. 

fTPM: Securing Embedded Systems

Tymoteusz Burak introduced the concept of fTPM implemented as a Trusted Application in ARM TrustZone, offering a compelling solution for enhancing the security of embedded systems. Despite challenges such as lack of secure storage and entropy sources, fTPM stands as a testament to the potential of leveraging Trusted Execution Environments (TEEs) for robust security measures.

Watch this talk.

Integrity Protected Workloads 

The presentation by Tom Dohrmann on Mushroom offered an insightful look into securing Linux workloads using AMD’s SEV-SNP technology. With a clear goal to run Linux programs securely, Mushroom addresses the critical need for integrity in remote code compilation on untrusted hosts. The architecture of Mushroom, built with a focus on minimalism and security, comprises a kernel and a supervisor, both developed in Rust, emphasizing efficiency and reduced host interaction. 

Watch this talk. 

Reproducible Builds For Confidential Computing

The talk by Malte Poll and Paul Meyer delved into a critical aspect of Confidential Computing: the validation of Trusted Computing Base (TCB) measurements through remote attestation and the importance of reproducible builds in this process. The presentation highlighted the challenges in the current landscape, where reference values for validating TCB measurements are often provided by third parties without transparent mechanisms for auditing their trustworthiness or origin. Advocating for an auditable CC ecosystem, the speakers emphasized the necessity for every component of the TCB to be open source and reproducible, allowing end-users to verify the deployed system comprehensively. Utilizing mkosi and Nix(OS), they showcased how to build fully reproducible OS images from source code to reference values for remote attestation, providing a foundation for projects like Constellation and the Confidential Containers project. This approach aims to enhance the trust and security in Confidential Computing by enabling the community to independently verify reference values, marking a significant step towards more transparent and secure computing environments.
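The check at the heart of reproducible builds can be illustrated with a small sketch. This is a deliberate simplification (mkosi and Nix operate on whole OS images, with far more moving parts than a toy build function): rebuild the same source independently, hash both artifacts, and the matching digest becomes an auditable reference value for attestation.

```python
import hashlib

# Illustrative sketch of why reproducible builds matter for attestation: if
# the same source always produces byte-identical artifacts, anyone can
# rebuild and recompute the digest, so the reference value used to validate
# TCB measurements becomes independently auditable. (This toy "build" is a
# stand-in for a real deterministic image build.)

def build(source: bytes) -> bytes:
    """A stand-in for a deterministic build step."""
    return b"IMAGE:" + source

def reference_value(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

source = b"fn main() {}"
first = reference_value(build(source))
second = reference_value(build(source))  # an independent rebuild

# Reproducible: both digests match, so either party's independently computed
# value can serve as the reference measurement for remote attestation.
assert first == second
```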

Watch this talk. 

Advancing Remote Attestation

Ionut Mihalcea and Thomas Fossati walked us through the development and importance of remote attestation, covering milestones from the formation of the TCPA to the latest advancements in RATS EAT. This narrative underscored the critical role of remote attestation in establishing trust and preserving privacy within confidential computing frameworks.

Watch this talk.

FOSDEM: The Broader Impact 

FOSDEM concluded with a roundup of various DevRooms, highlighting the interconnectedness of confidential computing with other domains such as energy, community development, and monitoring. Special attention was given to the EU’s new open-source cloud initiative, IPCEI-CIS, showcasing the commitment to leveraging open-source solutions for enhancing security and privacy.

A Special Thank You

As we reflect on all the experiences and exchanges at FOSDEM, we want to share a special note of gratitude to all participants of the Decrypted Gathering – one that we received directly from the catering team who worked with us that night:

I catered your event and I have to thank you for having been the most respectful and polite clients I’ve ever seen… And I of course thank you for working for such a noble cause that is data protection and open OS.

Thank you for existing and you can congratulate all the persons present. It was unseen and so heartwarming for me/us. 

All the best,

Lauréline

Confidential computing is unique. It’s the kind of work that anyone can understand the value of, as soon as you explain the kind of data we try to keep private. Personalized medicine, space technology, and energy grids are all among Confidential Computing’s emerging sectors.

I’m incredibly grateful to have a growing community of engineers, academics and technology giants all coming together around this work. Thank you to everyone who is helping us to bring Confidential Computing to center stage this year.

Want to Get Involved with CCC? 

If you are still looking to get involved with the Confidential Computing Consortium, you can find more resources about our technical committees and institutional memberships here. All of our technical committee meetings are open to the public, and recorded for all to view. We welcome anyone who wants to join in on the conversations around Confidential Computing.

If there’s a concept or clarification from these talks you believe is important to share with the CCC community, get in touch with me at skimmich@contractor.linuxfoundation.org and we’ll help you write it up as a blog post or webinar and get the information out to everyone.

2023 CCC Open Source Highlights


In 2023 we focused on growing three things: our projects, ecosystem recognition, and our community.

Our technical community made great strides on each of these. Our open source project portfolio is wider and more mature. Outside of the CCC we contributed security expertise to public documents and standards organizations. As we grew to deliver these projects and papers, we maintained our emphasis on growing a positive community where everyone is welcome, and anyone can learn and contribute.

Projects

We grew projects along two vectors. First, for our existing projects we wanted to make sure they were useful and adopted. The prime example of that is Gramine moving to Graduated status as a reflection of its maturity and broad adoption.

Second, as a still young consortium we have plenty of room to add projects to address new areas or bring new approaches to existing areas. We are delighted to have made a home for new projects originating from Red Hat, Intel, VMware/Broadcom, Samsung, and SUSE. They join a portfolio originally provided by Red Hat, Microsoft, UNC, Intel, UC Berkeley, and Arm. These projects are now in an open governance setting where individuals unaffiliated with these organizations can bring their talents and contributions.

VirTEE provides tools and libraries to make development, management, and attestation of  Virtualization-based Confidential Computing easier.

Spdm-rs implements key protocols to bring devices into the Confidential Computing boundary like accelerators for AI/ML workloads.

The Certifier Framework aims to bridge across different Confidential Computing environments for one coherent application experience.

Islet broadens our portfolio from a cloud and server focus out to phones and other mobile devices.

Finally, coconut-svsm creates a secure layer under the OS to provide trusted capabilities like virtual TPMs.

Some of these projects are still on-boarding and will be listed on the CCC website soon.

Ecosystem

One of the exciting things about Confidential Computing is that it is both developing and yet already in production. As an open source organization, we tend to focus on the development, but we also serve a role in explaining how to use it in production to solve real problems.

In 2023 we generated a number of articles in plain language about topics from attestation to homomorphic encryption. We also broadened out from our own channels to respond to government RFCs and engage other standards organizations. Our Governance, Risk, and Compliance SIG takes point on these matters and coordinates inputs from our community’s wide pool of subject matter experts. You are welcome to join us on Wednesdays.

The Attestation SIG is one of our most educational forums. This past year we made sense of a wide array of formats and attestation patterns. Our Cloud Service Providers (CSPs) discussed their attestation services and took inputs on how to evolve them to meet emerging standards while contributors from IETF, TCG, and other standards organizations shared their directions and took input on how to address requirements from hardware, software, and service vendors.  The SIG also harmonized attestation approaches for TLS. A subteam produced a spec, implemented some open-source code and got the spec adopted in the IETF.  All that in ~1 year, which by standardization time standards is quite a remarkable feat. To contribute or learn more please join us Tuesdays or make some popcorn and enjoy our YouTube feed.

In our last TAC meeting of the year we ratified a new SIG. We all rely so much on the Linux kernel and yet that’s not an area where the consortium has focused. We’ll be writing up more about our plans in a separate post, but for now we’ll just note that in 2023 we recognized that engaging more with the Linux Kernel community is one of the most important things we can do to make Confidential Computing easy to adopt.

Community

It’s said that culture is more important than any individual policy or initiative of an organization. In the CCC we have a culture of Inclusivity and of Minimum Viable Governance. One way to think about that is we prioritize our resources in ways to include everyone. In the past that has included funded internships to welcome people to our community. 2023’s incremental step was identifying conferences where we can reach communities that are underrepresented in the CCC. In some cases we became aware of a conference after a deadline and so headed into 2024 we look to build on what we learned in 2023 to reach the widest possible audience. Given the rate of growth we saw in 2023, 2024 is going to be a big year for Confidential Computing and our Consortium. We are glad to have a sound culture to grow from and the opportunity to expand to make computing more secure.

Finally, as just a teaser for one more announcement hitting the news in 2024… we closed out 2023 by hiring a Technical Community Architect. We found an excellent energetic person to help activate things for CCC maintainers, grow contributors, and help champion our projects in the open source ecosystem.

2024 is going to be great!

Welcoming Sal Kimmich to the Confidential Computing Consortium


The Linux Foundation’s Confidential Computing Consortium (CCC) is proud to announce Sal Kimmich joining as the Technical Community Architect. Sal’s career started by sharing Python scripts with other computational neuroscientists in the wild world of supercomputing. A decade later, they are still paying attention to the algorithmic side of open source tech.  

Before joining CCC, Sal worked as a scalable SecDevOps Machine Learning engineer and brought those contributions to the Cloud Native Computing Foundation (CNCF) and the Open Source Security Foundation (OpenSSF). They have focused on practical automation around security best practices that make maintainers’ lives easier, like Security Slams.

At CCC, we are building the landscape for Trusted Execution Environments (TEEs) at the Linux Foundation as Confidential Computing becomes foundational to cross-industry security practices. Confidentiality of data in use is also a cornerstone of digital progress: having hardware-level trust in compute is critical to the wave of critical technologies in both edge and cloud.

Sal’s vision for CCC is clear – to make maintainers’ work enjoyable and rewarding, to create tech demos that dazzle, and to showcase the world-class Open Source Projects enabling secure computation. 2024 marks the start of an incredible year of compute, collaboration and community expansion ahead, as runtime security takes the spotlight in emerging tech. 

CCC end-of-year blog post 2023


This year has been a big one for the Confidential Computing Consortium, with a great deal of activity in the technical, outreach and governance spheres.  The most obvious difference was the Governing Board’s decision to appoint me as Executive Director.  I’ve been involved with the CCC since its inception in a variety of roles, from Premier member representative to Treasurer to General member representative to the Governing Board.  I’m delighted to be involved, working with the many members I already knew and getting to know those I didn’t, or who have joined recently.  Another major change was that our Chair of the GB since the foundation of the CCC in October 2019, Stephen Walli of Microsoft, stepped down, handing over to the previous vice-Chair, Ron Perez of Intel.  The transition was seamless, and we thank Stephen for his amazing leadership and service and Ron for his stepping up into the role.

Member survey

One of my first actions as Executive Director was to initiate a survey to help align the activities of the Consortium with members’ priorities.  This was backed up by conversations with various members and was extremely helpful in allowing me to decide where to be putting in the most effort.  The main priorities expressed were:

  • End-User involvement
  • Use cases
  • Regulator/standards engagement
  • Industry visibility
  • Increased AsiaPac activity/involvement
  • Member meet-ups
  • Conference speaking

The Governing Board endorsed these, and they have set the scene for the work we have been doing for the second half of the year and will continue into 2024.  I am planning a similar survey next year.

TAC and SIGs

The Technical Advisory Council (TAC) continues to be well-attended and the venue for much discussion, generally meeting for two hours every two weeks.  We often host presentations from external bodies or projects which are relevant or technically adjacent to Confidential Computing.  Another important task that the TAC undertakes is working with open source projects which are interested in joining the CCC.  The TAC provides technical and governance oversight and support through the process, and we currently have seven projects, with another two close to admission and at least two more going through the process.  Having a strong ecosystem of open source projects is vital for the healthy growth of Confidential Computing and is one of the core aims of the CCC.

The TAC also administers and coordinates the activities of several Special Interest Groups (SIGs).  The number of these increased to three this year: the Governance, Risk & Compliance SIG (GRC), the Attestation SIG and the Linux kernel SIG.  This last (and newest) is intended to work with the Linux kernel community to shepherd in work from members and the community and to allow communication to avoid “surprise” architectural or design changes and ease acceptance of new CC-related work.

Another important decision related to the work of the TAC was to recruit a Technical Community Architect (TCA) to help coordinate the work of the TAC, the SIGs and the open source projects as the work they do grows.  More news on this will follow very shortly.

Brief listing of activities through the year

The Confidential Computing Consortium was involved in many activities during the year, including sponsoring, attending or participating in conferences across Europe, North America and Asia Pacific.  The list below includes most of the significant activities.

Jan/Feb

FOSDEM – Brussels
State of Open Con – London

Mar/Apr

FOSS Backstage – Berlin and online
OC3 – online
Website refresh and update
Mike Bursell appointed as Executive Director 

May

Wikipedia entry created – Confidential computing

Jun/Jul

Inaugural Confidential Computing Summit (250 attendees; recordings available on-demand) and Happy Hour – San Francisco

Aug/Sep

DEFCON – Las Vegas
Diana Initiative – Las Vegas
OSS EU – Bilbao
Kubecon Asia – Shanghai

Oct/Nov

LF Member Summit – Monterey
PET Summit Asia – Singapore

Dec

OSS Japan – Tokyo

New members

We are delighted to have welcomed the following new members in 2023:

  • Acurast
  • BeekeeperAI
  • California Health Medical Reserve Corps
  • Canonical Group Limited
  • Cryptosat
  • enclaive
  • Hushmesh
  • Samsung Electronics Co. Ltd
  • SUSE LLC
  • Spectro Cloud, Inc.

We have a number of other organizations currently considering membership, who we hope to welcome early in 2024.

Planning for 2024

As we move into 2024, we have lots of plans to continue promoting Confidential Computing globally.  Here are some areas in which you can expect to see movement:

  • A clear definition of the benefits of membership, available on the website
  • Closer work with and support for start-ups in the ecosystem
  • Lots of events, including an expanded Confidential Computing Summit 
  • A marketing package for events to allow quicker and further reaching involvement for all members attending
  • Work on use cases
  • Appearance of our new Technical Community Architect

Final word

I would like to thank everyone who has been involved in the Confidential Computing Consortium and the larger ecosystem over the past twelve months.  In particular, thank you to all those who make the CCC work through their involvement with our various committees and SIGs.  I would also like to send our best wishes to Helen Lau from the Linux Foundation who has departed (for now, we hope!) on parental leave and to thank Ben Sternthal and Riann Kleinhans for their work in supporting our mission.  Finally, may I wish you all the best for the festive season and a prosperous New Year.

Mike Bursell
Executive Director, Confidential Computing Consortium

Confidential Computing Mini Summit at OSS EU in Bilbao


We’re delighted to announce that the Confidential Computing Consortium is hosting a Mini Summit co-located with Open Source Summit Europe in Bilbao in September.  The Mini Summit will take place during the afternoon of Monday, 18th September, the day before the main OSS EU conference. 

The Call for Proposals for the Confidential Computing Mini Summit is open! We welcome submissions on any relevant content to present at this summit. Submit your proposal here!

Important Dates:

  • CFP deadline: Aug 13, 2023
  • Speaker notification: Aug 18, 2023

Session type:

  • 30 min session

Topic area:

  • Use case deep dive
  • EU open source project & communities
  • (Open) Surprise us with a hot topic!

It’s a great opportunity to meet other members of the community, hear sessions from leaders in the industry and enjoy a little more time in Spain!  In-person registration is just $10 added to your existing OSS EU ticket, and virtual registration is free.  We look forward to seeing you there!

More details are available at https://events.linuxfoundation.org/open-source-summit-europe/features/co-located-events/#confidential-computing-mini-summit

Broad industry representation at Confidential Computing Summit


On Thursday, 29th June 2023, the first Confidential Computing Summit was held at the Marriott Marquis in San Francisco.  Organized by Opaque Systems and the Confidential Computing Consortium, it comprised 38 sessions delivered by 44 speakers and panelists, with 244 attendees – over twice the expected number.  Although initially planned as a single track event, the number of responses to the Call for Papers was so large that the agenda was split into three tracks, with keynotes starting and ending the event.

Sessions covered a broad range of topics, from state of the industry and outlook, to deep-dive technical discussions.  One of the key themes of the Summit, however, was the application of Confidential Computing to real-life use cases, with presentations by end users as well as suppliers of Confidential Computing technologies.  The relevance of Confidential Computing to AI was a recurring topic as data and model privacy is emerging as a major concern for many users, particularly those with requirements to share data with untrusted parties whether partners or even competitors for multi-party collaboration.  Other use cases included private messaging, anti-money laundering, Edge computing, regulatory compliance, Big Data, examination security and data sovereignty.  Use cases for Confidential Computing ranged across multiple sectors, including telecommunications, banking, insurance, healthcare and AdTech. Sessions ranged from high-level commercial use case discussions to low-level technical considerations.

There was an exhibitor hall which doubled as meeting space and included booths from the CCC and Opaque Systems plus the Summit’s premier sponsors (Microsoft, Intel, VMware, Arm, Anjuna, Fortanix, Edgeless Systems, Cosmian).  The venue also had sufficient space (and seating with branded cushions!) for a busy “hallway track”.  For many attendees, the ability to meet other industry professionals in person for the first time was as valuable a reason to attend the Summit as the session – while virtual conferences can have value, the conversations held face-to-face at the conference provided opportunities for networking that would have been impossible without real-world interactions.

Videos of many of the sessions will be made available on the conference website in the coming weeks: https://confidentialcomputingsummit.com/ (the agenda of sessions presented is also available).

The Confidential Computing Consortium would like to thank Opaque Systems and the program committee for their hard work in organizing this event.  Given the success of the Summit, plans are already underway for a larger instance next year.  Please keep an eye on this blog and other news outlets for information.  We look forward to seeing you there!

Confidential Computing: logging and debugging


Mike Bursell

This article is a slightly edited version of an article originally published at https://blog.enarx.dev/confidential-computing-logging-and-debugging/

Debugging applications is an important part of the development process, and one of the mechanisms we use for it is logging: providing extra details about what’s going on in (and around) the application to help us understand problems, manage errors and (when we’re lucky!) monitor normal operation.  Logging, then, is useful not just for abnormal, but also for normal (“nominal”) operations.  Log entries and other error messages can be very useful, but they can also provide information to other parties – sometimes information which you’d prefer they didn’t have.  This is particularly true when you are thinking about Confidential Computing: running applications or workloads in environments where you really want to protect the confidentiality and integrity of your application and its data.  This article examines some of the issues that we need to consider when designing Confidential Computing frameworks, the applications we run in them, and their operations.  It is written partly from the point of view of the Enarx project, but that is mainly to provide some concrete examples: these have been generalised where possible.  Note that this is quite a long article, as it goes into detailed discussion of some complex issues, and tries to examine as many of the alternatives as possible.
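As a concrete taste of the problem before we dig in: log lines themselves can leak secrets, and one simple precaution is to redact sensitive-looking values before a record is ever emitted. Here is a purely illustrative Python sketch (not Enarx code; the pattern and field names are invented and are nothing like a complete policy):

```python
import logging
import re

# Hypothetical sketch: a logging filter that redacts values that look
# sensitive before records leave the trusted environment. The regex and
# field names are invented for illustration only.

SENSITIVE = re.compile(r"(key|token|secret)=\S+", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Rewrite the message in place, then keep the (now redacted) record.
        record.msg = SENSITIVE.sub(r"\1=[REDACTED]", str(record.msg))
        return True

logger = logging.getLogger("workload")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)

logger.warning("connect failed token=abc123 retrying")
# emitted as: connect failed token=[REDACTED] retrying
```

Of course, as the rest of this article discusses, redaction only addresses the content of log lines; their existence, size and timing can still tell an observer a great deal.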

First, let us remind ourselves of one of the underlying assumptions about Confidential Computing in general: that you don’t trust the host. The host, in this context, is the computer running your workload within a TEE instance – your Confidential Computing workload (or simply workload). And when we say that we don’t trust it, we really mean that: we don’t want to leak any information to the host which might allow it (the host) to infer information about the workload that is running, either in terms of the program itself (and any associated algorithms) or the data.

Now, this is a pretty tall order, particularly given that the state of the art at the moment doesn’t allow for strong protections around resource utilisation by the workload. There’s nothing that the workload can do to stop the host system from starving it of CPU resources, and slowing it down, or even stopping it running altogether.  This presents the host with many opportunities for artificially imposed timing attacks against which it is very difficult to protect.  In fact, there are other types of resource starvation and monitoring around I/O as well, which are also germane to our conversation.

Beyond this, the host system can also attempt to infer information about the workload by monitoring its resource utilisation without any active intervention. To give an example, let us say that the host notices that the workload creates a network socket to an external address. It (the host) starts monitoring the data sent via this socket, and notices that it is all encrypted using TLS. The host may not be able to read the data, but it may be able to infer that a specific short burst of activity just after the opening of the socket corresponds to the generation of a cryptographic key. This information on its own may be sufficient for the host to fashion passive or active attacks to weaken the strength of this key.

None of this is good news, but let’s extend our thinking beyond just normal operation of the workload and consider debugging in general and error handling in particular. For the sake of clarity, we will posit a tenant with a client process on a separate machine (considered trusted, unlike the host), and that the TEE instance on the host has four layers, including the associated workload. This may not be true for all applications or designs, but is a useful generalisation, and covers most of the issues that are likely to arise.  This architecture models a cloud workload deployment. Here’s a picture.

TEE layers and components

These layers may be defined thus:

  1. application layer – the application itself, which may or may not be aware that it is running within a TEE instance. For many use cases, this, from the point of view of a tenant/client of the host, is the workload as defined above.
  2. runtime layer – the context in which the application runs. How this is considered is likely to vary significantly between TEE types and implementations, and in some cases (where the workload is a full VM image, including application and operating system, for instance), there may be little differentiation between this layer and the application layer (the workload includes both). In many cases, however, the runtime layer will be responsible for loading the application layer – the workload.
  3. TEE loading layer – the layer responsible for loading at least the runtime layer, and possibly some other components into the TEE instance. Some parts of this are likely to exist outside of the TEE instance, but others (such as a UEFI loader for a VM) may exist within it. For this reason, we may choose to separate “TEE-internal” from “TEE-external” components within this layer. For many implementations, this layer may disappear (cease to run and be removed from memory) once the runtime has started.
  4. TEE execution layer – the layer responsible for actually executing the runtime above it, and communicating with the host. Like the TEE loading layer, this is likely to exist in two parts – one within the TEE instance, and one outside it (again, “TEE-internal” and “TEE-external”).
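The four layers above can be modelled as a simple data structure; the following Python sketch is purely illustrative (the names and the split/internal classification are a reading of the descriptions above, not taken from any particular implementation):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Placement(Enum):
    TEE_INTERNAL = auto()  # exists entirely within the TEE instance
    SPLIT = auto()         # has both TEE-internal and TEE-external components

@dataclass(frozen=True)
class Layer:
    name: str
    placement: Placement

# Illustrative model of the four layers described above.
LAYERS = [
    Layer("application", Placement.TEE_INTERNAL),
    Layer("runtime", Placement.TEE_INTERNAL),
    Layer("tee-loading", Placement.SPLIT),
    Layer("tee-execution", Placement.SPLIT),
]

def split_layers() -> list[str]:
    """Names of layers with both TEE-internal and TEE-external components."""
    return [layer.name for layer in LAYERS if layer.placement is Placement.SPLIT]
```

Distinguishing the split layers matters because, as discussed below, anything held by a TEE-external component must be assumed visible to the host.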

An example of relative lifecycles is shown here.

Component lifecycles

Now we consider logging for each of these.

Application layer

The application layer generally communicates via a data plane to other application components external to the TEE, including those under the control of the tenant, some of which may sit on the client machine.  Some of these will be considered trusted from the point of view of the application, and these at least will typically require an encrypted communication channel so that the host is unable to snoop on the data (others may also require encryption).  Exactly how these channels are set up will vary between implementations, but application-level errors and logging should be expected to use these communication channels, as they are relevant to the application’s operation. This is the simplest case, as long as channels to external components are available. Where they cease to be available, for whatever reason, the application may choose to store logging information for later transfer (if possible) or communicate a possible error state to the runtime layer.
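The “store logging information for later transfer” behaviour described above can be sketched as follows; the class and the `channel` object are hypothetical stand-ins (a real channel would be an encrypted connection such as TLS):

```python
from collections import deque

class ApplicationLogger:
    """Sends log entries over an encrypted data-plane channel when one is
    available; otherwise buffers them for later transfer.  The `channel`
    object is a hypothetical stand-in for a TLS connection."""

    def __init__(self, max_buffered: int = 1000):
        self.channel = None                        # encrypted channel, when up
        self.buffer = deque(maxlen=max_buffered)   # bounded: drops oldest first

    def log(self, entry: str) -> None:
        if self.channel is not None:
            self.channel.send(entry.encode())
        else:
            self.buffer.append(entry)              # hold until a channel exists

    def attach_channel(self, channel) -> None:
        """Called when a trusted channel becomes available: flush the backlog."""
        self.channel = channel
        while self.buffer:
            channel.send(self.buffer.popleft().encode())
```

Note the bounded buffer: since the host controls resources, an unbounded backlog during a long outage would itself be a denial-of-service risk to the workload.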

The application may also choose to communicate other runtime errors, or application errors that it considers relevant or possibly relevant to runtime, to the runtime layer.

Runtime layer

It is possible that the runtime layer may have access to communication channels to external parties that the application layer does not – in fact, if such a channel is managing the loading and execution of the workload, it can be considered a control plane. As the runtime layer is responsible for the execution of the application, it needs to be protected from the host, and it resides entirely within the TEE instance. It also has access to information associated with the application layer (which may include logging and error information passed directly to it by the application), which should also be protected from the host (both in terms of confidentiality and integrity), and so any communications it has with external parties must be encrypted.

There may be a temptation to consider that the runtime layer should be reporting errors to the host, but this is dangerous. It is very difficult to control what information will be passed: not only primary information, but also inferred information. There does, of course, need to be communication between the runtime layer and the host in order to allow execution – whether this is system calls or another mechanism – but in the model described here, that is handled by the TEE execution layer.

TEE loading layer

This layer is one where we start having to make some interesting decisions.  There are, as we noted, two different components which may make up this layer: TEE-internal and TEE-external.

TEE loading – TEE-internal

The TEE-internal component may generate logging information associated either with successful or unsuccessful loading of a workload.  Some errors encountered may be recoverable, while others are unrecoverable.  In most cases, a successful loading event may be considered non-sensitive and can be exposed to the TEE-external component, as the host will generally be able to infer successful loading anyway, since execution will continue on to the next phase (even when the TEE loading layer and TEE execution layer do not have explicitly separate external components).  Even so, the TEE-internal component still needs to be careful about the amount of information exposed to the host, as even information around workload size or naming may provide a malicious entity with useful information.  In such cases, integrity protection of messages may be sufficient: failure to provide integrity protection could lead the host to misreport successful loading to a remote tenant, for example – not necessarily a major issue, but a possible attack vector nevertheless.
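Integrity protection of a status message might be sketched as an HMAC over the message.  The shared key below is purely illustrative: in practice, key material would have to be provisioned or derived in a way the tenant can verify (for example, bound to attestation):

```python
import hashlib
import hmac
import secrets

def protect(key: bytes, message: bytes) -> tuple[bytes, bytes]:
    """Return (message, tag): the message plus an integrity tag over it."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time check that the message was not altered in transit."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Example: the TEE-internal loading component reports success; the host
# cannot alter the report without the tampering being detectable.
key = secrets.token_bytes(32)
msg, tag = protect(key, b"workload loaded: ok")
assert verify(key, msg, tag)
assert not verify(key, b"workload loaded: FAILED", tag)  # tampering detected
```

This addresses the misreporting attack described above: a host relaying an altered status message cannot produce a valid tag for it.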

Error events associated with failure to load the workload (or parts of it) are yet more tricky.  Opportunities may exist for the host to tamper with the loading process with the intention of triggering errors from which information may be gleaned – for instance, pausing execution at particular points and seeing what error messages are generated.  The more data exported by the TEE loading internal component, the more data the external component may be able to make available to malicious parties.  One of the interesting questions to consider is what to do with error messages generated before a communications channel (the control plane) back to the provisioning entity has been established.  Once this has been established (and is considered “secure” to the appropriate level required), then transferring error messages via it is a pretty straightforward proposition, though this channel may still be subject to traffic analysis and resource starvation (meaning that any error states associated with timing need to be carefully examined).  Before this communication channel has been established, the internal component has three viable options (which are not mutually exclusive):

  1. Pass to the external component for transmission to the tenant “out of band”, by the external component.
  2. Pass to the external component for storage and later consumption and transmission over the control plane by the internal component if the control plane can be established in the future.
  3. Consign to internal storage, assuming availability of RAM or equivalent assigned for this purpose.

In terms of attacks, options 1 and 2 are broadly similar for as long as the control plane remains unestablished.  Additionally, in case 1, the external component can choose not to transmit all (or any) of the data to the tenant, and in case 2, it may withhold data from the internal component when requested.

If we take the view (as proposed above) that at least the integrity, and possibly the confidentiality, of error messages is of concern, then option 1 is only viable if a shared secret has already been established between the TEE loading internal component and the tenant, or the identity of the TEE loading internal component has already been established with the tenant – which is impossible unless the control plane has already been created.  For option 2, the internal component can generate a key which it uses to encrypt the data sent to the external component, and store this key for decryption when (and if) the external component returns the data.
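Option 2 can be sketched as follows.  Note that the “encryption” below is a deliberately simplified placeholder (a SHA-256-derived keystream) to keep the example self-contained: a real implementation would use an authenticated cipher such as AES-GCM, and all names here are illustrative:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, length: int) -> bytes:
    # Placeholder keystream; NOT a real cipher - use a proper AEAD in practice.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class InternalComponent:
    """Sketch of a TEE-internal component sealing early error logs so that
    the untrusted external component can store them opaquely (option 2)."""

    def __init__(self):
        self.key = secrets.token_bytes(32)   # never leaves the TEE instance

    def seal(self, entry: bytes) -> bytes:
        ks = _keystream(self.key, len(entry))
        ct = bytes(a ^ b for a, b in zip(entry, ks))
        tag = hmac.new(self.key, ct, hashlib.sha256).digest()
        return tag + ct                      # integrity tag + ciphertext

    def unseal(self, blob: bytes) -> bytes:
        tag, ct = blob[:32], blob[32:]
        expected = hmac.new(self.key, ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("sealed log entry was tampered with")
        ks = _keystream(self.key, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))
```

The host can store, drop or corrupt the sealed blobs, but it cannot read them, and corruption is detected when the internal component later unseals them over the control plane.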

TEE loading – TEE-external

Any information which is available to any TEE-external component must be assumed to be unprotected and untrusted.  The only exceptions are if data is signed (for integrity) or encrypted (for confidentiality, though integrity is typically also transparently assured when data is encrypted), as noted above.  The TEE-external component may choose to store or transmit error messages from the TEE-internal component, as noted above, but it may also generate log entries of its own.  There are five possible (legitimate) consumers of these entries:

  1. The host system – the host (general logging, operating system or other components) may consume information around successful loading to know when to start billing, for instance, or consume information around errors for its own purposes or to transmit to the tenant (where the TEE loading component is not in direct contact with the client, or other communication channels are preferred).
  2. The TEE loading internal component – there may be both success and failure events which are useful to communicate to the TEE loading internal component to allow it to make decisions.  Communications to this component assume, of course, that loading was sufficiently successful to allow the TEE loading internal component to start execution.
  3. The TEE runtime external component – if the lifecycle has proceeded to the stage where the TEE runtime component is executing, the TEE loading external component can communicate logging information to it, either directly (if they are executing concurrently) or via another entity such as storage.
  4. The TEE runtime internal component – similarly to case #3 above, the TEE loading external component may be able to communicate to the TEE runtime internal component, either directly or indirectly.
  5. The client – as noted in #1 above, the host may communicate logging information to the client.  An alternative, if an appropriate communications channel exists, is for the TEE loading external component to communicate directly with it.  The client should always treat all communications with this component as untrusted (unless they are being transmitted for the internal component, and are appropriately integrity/confidentiality protected).

The TEE runtime layer

TEE runtime – TEE-internal

While the situation for this component is similar to that for the TEE loading internal component, it is somewhat simpler because the fact that this stage of the lifecycle has been reached means that the application has, by definition, been loaded and is running.  This means that there are a number of different channels for communication of error messages: the application data plane, the runtime control plane and the TEE runtime external component.  Most logging information will generally be directed either to the application (for decision making or transmission over its data plane at the application’s discretion) or to the client via the control plane. Standard practice can generally be applied as to which of these is most appropriate for which use cases.

Transmission of data to the TEE runtime external component needs to be carefully controlled, as the runtime component (unless it is closely coupled with the application) is unlikely to be in a good position to judge what information might be considered sensitive if available to components or entities external to the TEE.  For this reason, either error communication to the TEE runtime external component should be completely avoided, or standardised (and carefully designed) error messages should be employed – which makes standard debugging techniques extremely difficult.

Debugging

Any form of debugging for TEE instances is extremely difficult, and there are two fairly stark choices:

  1. Have a strong security profile and restrict debugging to almost nothing.
  2. Have a weaker security profile and acknowledge that it is almost impossible to ensure the protection of the confidentiality and integrity of the workload (the application and its data).

There are times, particularly during the development and testing of a new application, when the latter is the only feasible approach.  In this case, we can recommend two principles:

  1. Create a well-defined set of error states which can be communicated via untrusted channels (that is, which are generally unprotected from confidentiality and integrity attacks), and which do not allow for “free form” error messages (which are more likely to leak information to a host).
  2. Ensure that any deployment with a weaker profile is closely controlled (and never into production).

These two principles can be combined, and a deployment lifecycle might allow for different profiles: e.g. a testing profile on local hardware allowing free form error messages and a staging profile on external hardware which only allows for “static” error messages.
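These principles could be expressed as an enumeration of permitted error states, with free-form messages gated by deployment profile.  The following Python sketch is illustrative only (the particular states, profiles and names are assumptions, not taken from any implementation):

```python
from enum import Enum

class ErrorState(Enum):
    # The closed set of "static" error states permitted on untrusted channels.
    LOAD_FAILED = 1
    ATTESTATION_FAILED = 2
    CHANNEL_UNAVAILABLE = 3
    INTERNAL_ERROR = 4

class Profile(Enum):
    TESTING = "testing"        # local hardware: free-form messages allowed
    STAGING = "staging"        # external hardware: static error states only
    PRODUCTION = "production"  # static error states only, closely controlled

def emit_error(profile: Profile, state: ErrorState, detail: str = "") -> str:
    """Format an error for an untrusted channel according to the profile."""
    if profile is Profile.TESTING:
        return f"{state.name}: {detail}"   # free form: testing profile only
    return state.name                      # static message, no extra detail
```

The point of the closed enumeration is that nothing outside it can ever cross an untrusted channel, however a future developer writes their error handling.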

Standard operation

Standard operation must assume the worst case scenario, which is that the host may block, change and interfere with all logging and error messages to which it has access, and may use them to infer information about the workload (application and associated data), affecting its confidentiality, integrity and normal execution.  Given this, the default must be that all TEE-internal components should minimise all communications to which the host may have access.

Application

Restricting application data plane communication is clearly infeasible in most cases, though all communications should generally be encrypted for confidentiality and integrity protection; designers and architects with particularly strong security policies may wish to consider how to restrict data plane communications.

Runtime component

Data plane communications from the runtime component are likely to be fewer than application data plane communications in most cases, and there may also be some opportunities to design these with security in mind.

TEE loading and TEE runtime components

These are the components where the most care must be taken, as we have noted above, but also where there may be the most temptation to lower levels of security if only to allow for easier debugging and error management.

Summary

In a standard cloud deployment, there is little incentive to consider strong security controls around logging and debugging, simply because the host has access not only to all communications to and from a hosted workload, but also to all the code and data associated with the workload at runtime.  For Confidential Computing workloads, the situation is very different, and designers and architects of the TEE infrastructure and even, to a lesser extent, of potential workloads themselves, need to consider very carefully the impact of the host gaining access to messages associated with the workload and the infrastructure components.  It is, realistically, infeasible to restrict all communication to levels appropriate for deployment, so it is recommended that various profiles are created which can be applied to different stages of a deployment, and whose use is carefully monitored, logged (!) and controlled by process.

Why is Attestation Required for Confidential Computing?

By Blog No Comments

Alec Fernandez (alfernandez@microsoft.com)

At the end of 2022, the Confidential Computing Consortium amended the definition of Confidential Computing. We added attestation as an explicit part of the definition, but beyond updating our whitepaper we did not explain to the community why we made this change.

First off, an attestation is the evidence that you use to evaluate whether or not to trust a Confidential Computing program or environment. It’s sometimes built into a common protocol as in RA-TLS / Attested TLS. In other uses it might be built into the boot flow of a Confidential VM or built into an asynchronous usage like attaching it to the result of a Confidential Process.

To many of us, attestation was an implicit part of Confidential Computing architecture. However, it is so central to the idea of Confidential Computing that it really needed to be part of the formal definition.

Hardware and software providers have long offered assurances of security and these assurances have oftentimes fallen short of expectations. A historical analysis of the track record for placing trust in individual organizations to protect data raises important questions for security professionals. The recurrence of data breaches has led to understandably deep skepticism of technologies that purport to provide new security protections.

Users desire to see for themselves the evidence that new technologies are actually safeguarding their data.

Attestation is the process by which customers can alleviate their skepticism by getting answers to these questions:  

  • Can the TEE provide evidence showing that its security assurances are in effect?
  • Who is providing this evidence?  
  • How is the evidence obtained?
  • Is the evidence valid, authentic, and delivered through a secure chain of custody?
  • Who judges the evidence? 
  • Is the judge separate from the evidence provider?
  • Who provides the standards against which the evidence is judged?
  • Can evidence assure that the code and data protection claims are in effect? 

Hardware-based attestation evidence is produced by a trusted hardware root-of-trust component of the computing environment. The hardware root-of-trust is a silicon chip or a set of chips that have been specifically designed to be highly tamper resistant. Some have been reviewed by researchers at standards organizations such as NIST, NSA, ICO, ENISA and academic institutions around the world and the technical community at large. While a critique of the analyses behind hardware roots of trust is beyond the scope of this blog, we take them to represent the current state of the art in computer security. They represent a significant improvement over available alternatives. See reference material at the end of this blog for more information.

Providing Attestation Evidence

Attestation evidence is delivered in a message containing authentic, accurate and timely measurements of system components such as hardware, firmware, BIOS and the software and data state of the computer being evaluated. Importantly, this attestation evidence is digitally signed by a key known only to the hardware root-of-trust (often the physical CPU) and not extractable. This means that the attestation evidence is secured: it cannot be altered once it leaves the hardware without the alteration being detected. It is impervious to attacks by the host operating system, the kernel, or the cloud platform provider. This eliminates chain of custody concerns as the evidence flows from the producer to the consumer.

Validating the Authenticity of Attestation Evidence

Before examining the attestation evidence, the source of the evidence must be established. This is done by matching the digital signature in the attestation evidence with a certificate issued by the manufacturer of the hardware root of trust, for example the manufacturer of the physical CPU in the computer. If the signature on the attestation evidence matches the manufacturer’s certificate, then this proves that the attestation report was produced by the CPU hardware. This means that if you trust the company that manufactured the hardware, then you can trust the attestation report.
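The chain of checks can be sketched as follows.  Be aware that real attestation evidence is signed with an asymmetric key and validated against the manufacturer’s certificate chain; the symmetric HMAC below is only a self-contained stand-in for that signature check, and all values are illustrative:

```python
import hashlib
import hmac

def verify_evidence(evidence: bytes, signature: bytes,
                    manufacturer_key: bytes) -> bool:
    """Stand-in for checking the evidence signature against key material
    from the manufacturer's certificate.  If it matches, the evidence was
    produced by the hardware root-of-trust - so if you trust the
    manufacturer, you can trust the evidence."""
    expected = hmac.new(manufacturer_key, evidence, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# Illustrative flow: the "CPU" signs its measurements; a relying party
# verifies them before reading any of the claims inside.
manufacturer_key = b"stand-in for the cert chain's key material"
evidence = b'{"firmware_version": 17, "memory_encryption_enabled": true}'
signature = hmac.new(manufacturer_key, evidence, hashlib.sha256).digest()

assert verify_evidence(evidence, signature, manufacturer_key)
# Evidence substituted in transit fails the check:
assert not verify_evidence(b'{"firmware_version": 3}', signature, manufacturer_key)
```

The essential property is the ordering: the signature is validated first, and only then are the claims inside the evidence examined.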

Who Judges the Attestation Evidence? Are they Separate from the Evidence Provider?

Having the attestation evidence delivered in a message that is digitally signed by hardware allows for TEE users to establish for themselves that the security assurances provided by the TEE are in place. This can be done without the provider of the computing infrastructure or intervening parties being able to alter the evidence during delivery.

Attestation evidence is highly technical and oftentimes it is not feasible for an organization to judge the evidence themselves. This is especially true when the organization is not specialized in computing infrastructure security. In cases such as these, having a different entity, a third party with security expertise, evaluate the evidence offers a good balance between security and complexity. In this scenario, the computing infrastructure or device user is implicitly trusting the entity that verifies the attestation evidence (the verifier). In such scenarios, it is imperative for the device user to have access to effective mechanisms to verify the authenticity and reliability of the verifier to ensure that the attestation results produced by the verifier are legitimate and trustworthy.

Who provides the standards against which the evidence is judged?

The attestation evidence contains claims about the physical characteristics and the configuration settings of the execution environment. Examples include:

  • CPU Manufacturer, model and version and identifier.
  • Microcode and firmware version.
  • Configuration settings, e.g., whether memory encryption is enabled.
  • Encryption configuration, e.g., whether a different key is used to protect each individual VM

The values supplied in the attestation evidence are compared against reference values. For example, the firmware supplier might recommend that it be patched to a specific version due to the discovery of a security vulnerability. The attestation evidence will accurately reflect the current firmware version. But who decides which are acceptable firmware versions?

  • Since the firmware is typically the responsibility of the hardware manufacturer and they have intimate knowledge of the details behind its security baseline, they should certainly be consulted.
  • The owner of the device or computing infrastructure should also be consulted since they could be responsible for any risks of data exfiltration.
  • In a public cloud environment, the computing infrastructure provider controls patching the firmware to the hardware manufacturer’s recommended version, but they do not make use of the resulting environment. The user of the TEE is responsible for data placed in the environment and must ensure that the firmware complies with their security policy.
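A verifier’s appraisal step amounts to comparing the claims in the evidence against reference values.  The following sketch uses invented field names and reference values (real claim formats follow specifications such as the IETF RATS architecture):

```python
def appraise(claims: dict, policy: dict) -> list[str]:
    """Compare attested claims against reference values.
    Returns a list of policy violations; an empty list means compliant."""
    failures = []
    if claims.get("firmware_version", -1) < policy["min_firmware_version"]:
        failures.append("firmware below minimum patched version")
    if not claims.get("memory_encryption_enabled", False):
        failures.append("memory encryption not enabled")
    if claims.get("cpu_vendor") not in policy["allowed_cpu_vendors"]:
        failures.append("unrecognised CPU vendor")
    return failures

# Illustrative reference values, e.g. set after a firmware vulnerability
# disclosure raised the minimum acceptable version.
policy = {
    "min_firmware_version": 17,
    "allowed_cpu_vendors": {"VendorA", "VendorB"},
}
```

Because the claims come from the hardware root-of-trust rather than from the infrastructure provider’s own reporting, a stale firmware version cannot be papered over by a misconfigured or dishonest host.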

Remote attestation provides a way to evaluate evidence that shows the actual firmware version provided by the TEE. This evidence is provided directly by the hardware on which the TEE is executing and allows the attestation verifier to independently verify when the patching was completed.

More generally, attestation can be used to check whether all available security standards and policies have been met. This practically eliminates the possibility that a configuration error on the part of the computer or device owner will result in a security guarantee being falsely reported. The computer or device might be configured incorrectly in a way that would otherwise go undetected, but the attestation evidence comes directly from the hardware component that is executing the TEE and so remains accurate.

Relying on Attestation Evidence to Secure a TEE

An example of using attestation to provide data security is secure key release (SKR). One excellent use case for SKR is configuring your key management infrastructure (KMI) to evaluate the attestation evidence against a policy controlled by a verifier which the owner of the TEE deems trustworthy, and to refuse to supply the key needed to decrypt the computer’s OS disk unless the attestation evidence shows the computer to be in compliance. In this example, the attestation evidence is generated when the computer is powered on and sent to the KMI. If the attestation evidence indicates that the TEE is not in compliance with the policy (perhaps because the CPU firmware was not an acceptable version), then the KMI will not release the decryption key to the compute infrastructure, preventing the data from being decrypted and so mitigating the risk of data exfiltration.
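The SKR gate described above can be sketched as a KMI that releases the disk decryption key only when the evidence satisfies its release policy.  All class and field names here are hypothetical, and in practice the claims would first be signature-verified as described earlier:

```python
class KeyManagementInfrastructure:
    """Holds the OS-disk decryption key and releases it only if the
    attestation evidence satisfies the release policy (secure key release)."""

    def __init__(self, disk_key: bytes, min_firmware_version: int):
        self._disk_key = disk_key
        self._min_fw = min_firmware_version

    def release_key(self, claims: dict) -> bytes:
        # Assumes `claims` were already extracted from signature-verified
        # attestation evidence; a real KMI would perform that check itself.
        if claims.get("firmware_version", -1) < self._min_fw:
            raise PermissionError("TEE not compliant: firmware below policy")
        return self._disk_key

# At power-on, the TEE sends its evidence; only a compliant TEE boots.
kmi = KeyManagementInfrastructure(disk_key=b"\x01" * 32, min_firmware_version=17)
```

The security property is that the data on the disk stays encrypted (and so unusable) on any machine that cannot produce compliant evidence, without anyone needing to trust the host’s own account of its configuration.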

Conclusion

Confidential Computing, through the use of hardware-based, attested TEEs and remote attestation, protects sensitive data and code against an increasingly common class of threats: those that occur during processing, while data is in use. These were previously difficult, if not impossible, to mitigate. Additionally, Confidential Computing allows for protecting data against the owner of the system and public cloud platforms, which traditionally had to simply be trusted not to use their elevated permissions to access the data.

 

References

https://nvlpubs.nist.gov/nistpubs/ir/2022/Nist.IR.8320.pdf

https://tools.ietf.org/html/draft-ietf-rats-architecture

CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_Updated_November_2022.pdf (confidentialcomputing.io)

Common-Terminology-for-Confidential-Computing.pdf (confidentialcomputing.io)

CCC_outreach_whitepaper_updated_November_2022.pdf (confidentialcomputing.io)