Yearly Archives

2023

Confidential Computing: logging and debugging


Mike Bursell

This article is a slightly edited version of an article originally published at https://blog.enarx.dev/confidential-computing-logging-and-debugging/

Debugging applications is an important part of the development process, and one of the mechanisms we use for it is logging: providing extra details about what’s going on in (and around) the application to help us understand problems, manage errors and (when we’re lucky!) monitor normal operation.  Logging, then, is useful not just for abnormal, but also for normal (“nominal”) operations.  Log entries and other error messages can be very useful, but they can also provide information to other parties – sometimes information which you’d prefer they didn’t have.  This is particularly true when you are thinking about Confidential Computing: running applications or workloads in environments where you really want to protect the confidentiality and integrity of your application and its data.  This article examines some of the issues that we need to consider when designing Confidential Computing frameworks, the applications we run in them, and their operations.  It is written partly from the point of view of the Enarx project, but that is mainly to provide some concrete examples: these have been generalised where possible.  Note that this is quite a long article, as it goes into detailed discussion of some complex issues, and tries to examine as many of the alternatives as possible.

First, let us remind ourselves of one of the underlying assumptions about Confidential Computing in general: that you don’t trust the host. The host, in this context, is the computer running your workload within a TEE instance – your Confidential Computing workload (or simply workload). And when we say that we don’t trust it, we really mean that: we don’t want to leak any information to the host which might allow it (the host) to infer information about the workload that is running, either in terms of the program itself (and any associated algorithms) or the data.

Now, this is a pretty tall order, particularly given that the state of the art at the moment doesn’t allow for strong protections around resource utilisation by the workload. There’s nothing that the workload can do to stop the host system from starving it of CPU resources, and slowing it down, or even stopping it running altogether.  This presents the host with many opportunities for artificially imposed timing attacks against which it is very difficult to protect.  In fact, there are other types of resource starvation and monitoring around I/O as well, which are also germane to our conversation.

Beyond this, the host system can also attempt to infer information about the workload by monitoring its resource utilisation without any active intervention. To give an example, let us say that the host notices that the workload creates a network socket to an external address. It (the host) starts monitoring the data sent via this socket, and notices that it is all encrypted using TLS. The host may not be able to read the data, but it may be able to infer that a specific short burst of activity just after the opening of the socket corresponds to the generation of a cryptographic key. This information on its own may be sufficient for the host to fashion passive or active attacks to weaken the strength of this key.

None of this is good news, but let’s extend our thinking beyond just normal operation of the workload and consider debugging generally and error handling more particularly. For the sake of clarity, we will posit a tenant with a client process on a separate machine (considered trusted, unlike the host), and a TEE instance on the host with four layers, including the associated workload. This may not be true for all applications or designs, but is a useful generalisation, and covers most of the issues that are likely to arise.  This architecture models a cloud workload deployment. Here’s a picture.

TEE layers and components

These layers may be defined thus:

  1. application layer – the application itself, which may or may not be aware that it is running within a TEE instance. For many use cases, this, from the point of view of a tenant/client of the host, is the workload as defined above.
  2. runtime layer – the context in which the application runs. How this is considered is likely to vary significantly between TEE type and implementations, and in some cases (where the workload is a full VM image, including application and operating system, for instance), there may be little differentiation between this layer and the application layer (the workload includes both). In many cases, however, the runtime layer will be responsible for loading the application layer – the workload.
  3. TEE loading layer – the layer responsible for loading at least the runtime layer, and possibly some other components into the TEE instance. Some parts of this are likely to exist outside of the TEE instance, but others (such as a UEFI loader for a VM) may exist within it. For this reason, we may choose to separate “TEE-internal” from “TEE-external” components within this layer. For many implementations, this layer may disappear (cease to run and be removed from memory) once the runtime has started.
  4. TEE execution layer – the layer responsible for actually executing the runtime above it, and communicating with the host. Like the TEE loading layer, this is likely to exist in two parts – one within the TEE instance, and one outside it (again, “TEE-internal” and “TEE-external”).
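As a rough illustration, the layer and component split above might be modelled like this (a Python sketch of our own devising, for exposition only – it is not drawn from Enarx or any other implementation):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Layer(Enum):
    APPLICATION = auto()    # the workload itself
    RUNTIME = auto()        # context in which the application runs
    TEE_LOADING = auto()    # loads the runtime; may exit once done
    TEE_EXECUTION = auto()  # executes the runtime, talks to the host

@dataclass(frozen=True)
class Component:
    layer: Layer
    tee_internal: bool  # True if it runs inside the TEE instance

# The application and runtime layers live entirely inside the TEE; the
# loading and execution layers are each split into a TEE-internal and a
# TEE-external part.
COMPONENTS = [
    Component(Layer.APPLICATION, tee_internal=True),
    Component(Layer.RUNTIME, tee_internal=True),
    Component(Layer.TEE_LOADING, tee_internal=True),
    Component(Layer.TEE_LOADING, tee_internal=False),
    Component(Layer.TEE_EXECUTION, tee_internal=True),
    Component(Layer.TEE_EXECUTION, tee_internal=False),
]

def untrusted(components):
    """Anything outside the TEE must be treated as untrusted by the tenant."""
    return [c for c in components if not c.tee_internal]
```

The `untrusted()` helper captures the rule that recurs throughout the rest of this article: any component without `tee_internal=True` may leak, drop or tamper with whatever it is given.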

An example of relative lifecycles is shown here.

Component lifecycles

Now we consider logging for each of these.

Application layer

The application layer generally communicates via a data plane to other application components external to the TEE, including those under the control of the tenant, some of which may sit on the client machine.  Some of these will be considered trusted from the point of view of the application, and these at least will typically require an encrypted communication channel so that the host is unable to snoop on the data (others may also require encryption).  Exactly how these channels are set up will vary between implementations, but application-level errors and logging should be expected to use these communication channels, as they are relevant to the application’s operation. This is the simplest case, as long as channels to external components are available. Where they cease to be available, for whatever reason, the application may choose to store logging information for later transfer (if possible) or communicate a possible error state to the runtime layer.

The application may also choose to communicate other runtime errors, or application errors that it considers relevant or possibly relevant to runtime, to the runtime layer.
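The store-and-forward behaviour described above – send over a trusted channel when one is available, buffer inside the TEE when it is not – can be sketched as a log handler (illustrative Python; `send` is a stand-in for whatever encrypted transport the application actually uses, such as a TLS connection terminated inside the TEE):

```python
import logging
from collections import deque

class BufferingHandler(logging.Handler):
    """Send log records over a trusted (encrypted) channel when available;
    buffer them inside the TEE when the channel is down.

    `send` is a placeholder callable, send(str) -> None, which raises
    OSError when the channel is unavailable.
    """

    def __init__(self, send, maxlen=1024):
        super().__init__()
        self.send = send
        self.buffer = deque(maxlen=maxlen)  # oldest entries drop on overflow

    def emit(self, record):
        msg = self.format(record)
        try:
            self.flush_buffer()   # drain anything stored while offline
            self.send(msg)
        except OSError:
            # Channel unavailable: keep the entry for later transfer.
            self.buffer.append(msg)

    def flush_buffer(self):
        # Only discard a buffered entry once it has been sent successfully.
        while self.buffer:
            self.send(self.buffer[0])
            self.buffer.popleft()
```

Note the bounded buffer: memory inside a TEE instance is finite, so an application must decide in advance what to drop when the trusted channel stays down for a long time.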

Runtime layer

It is possible that the runtime layer may have access to communication channels to external parties that the application layer does not – in fact, if such a channel is used to manage the loading and execution of the runtime layer, it can be considered a control plane. As the runtime layer is responsible for the execution of the application, it needs to be protected from the host, and it resides entirely within the TEE instance. It also has access to information associated with the application layer (which may include logging and error information passed directly to it by the application), which should also be protected from the host (both in terms of confidentiality and integrity), and so any communications it has with external parties must be encrypted.

There may be a temptation to consider that the runtime layer should be reporting errors to the host, but this is dangerous. It is very difficult to control what information will be passed: not only primary information, but also inferred information. There does, of course, need to be communication between the runtime layer and the host in order to allow execution – whether this is system calls or another mechanism – but in the model described here, that is handled by the TEE execution layer.

TEE loading layer

This layer is one where we start having to make some interesting decisions.  There are, as we noted, two different components which may make up this layer: TEE-internal and TEE-external.

TEE loading – TEE-internal

The TEE-internal component may generate logging information associated either with successful or unsuccessful loading of a workload.  Some errors encountered may be recoverable, while others are unrecoverable.  In most cases, a successful loading event can be considered non-sensitive and exposed to the TEE-external component, as the host will generally be able to infer successful loading anyway: execution continues to the next phase (even when the TEE loading layer and TEE execution layer do not have explicitly separate external components).  The TEE-internal component still needs to be careful about the amount of information exposed to the host, however, as even information about workload size or naming may provide a malicious entity with useful information.  In such cases, integrity protection of messages may be sufficient: failure to provide integrity protection could allow the host to misreport successful loading to a remote tenant, for example – not necessarily a major issue, but a possible attack vector nevertheless.

Error events associated with failure to load the workload (or parts of it) are yet more tricky.  Opportunities may exist for the host to tamper with the loading process with the intention of triggering errors from which information may be gleaned – for instance, pausing execution at particular points and seeing what error messages are generated.  The more data exported by the TEE loading internal component, the more data the external component may be able to make available to malicious parties.  One of the interesting questions to consider is what to do with error messages generated before a communications channel (the control plane) back to the provisioning entity has been established.  Once this has been established (and is considered “secure” to the appropriate level required), then transferring error messages via it is a pretty straightforward proposition, though this channel may still be subject to traffic analysis and resource starvation (meaning that any error states associated with timing need to be carefully examined).  Before this communication channel has been established, the internal component has three viable options (which are not mutually exclusive):

  1. Pass to the external component for transmission to the tenant “out of band”, by the external component.
  2. Pass to the external component for storage and later consumption and transmission over the control plane by the internal component if the control plane can be established in the future.
  3. Consign to internal storage, assuming availability of RAM or equivalent assigned for this purpose.

In terms of attacks, options 1 and 2 are broadly similar for as long as the control plane does not exist.  Additionally, in case 1, the external component can choose not to transmit all (or any) of the data to the tenant, and in case 2, it may withhold data from the internal component when requested.

If we take the view (as proposed above) that at least the integrity, and possibly the confidentiality, of error messages is of concern, then option 1 is only viable if a shared secret has already been established between the TEE loading internal component and the tenant, or the identity of the TEE loading internal component has already been established with the tenant – which is impossible unless the control plane has already been created.  For option 2, the internal component can generate a key which it uses to encrypt the data sent to the external component, and store this key for decryption when (and if) the external component returns the data.
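Option 2 can be sketched as follows (a minimal Python illustration, with names of our own invention; only integrity protection is shown here – a real implementation would use an AEAD cipher so the stored records are also confidential):

```python
import hashlib
import hmac
import json
import secrets

class SealedErrorLog:
    """Sketch of option 2: the TEE-internal component tags error records
    before handing them to the (untrusted) external component for storage,
    and verifies the tags if the records are later returned over the
    control plane.

    The sequence number lets the internal component notice dropped or
    reordered records; encryption is omitted for brevity.
    """

    def __init__(self):
        self._key = secrets.token_bytes(32)  # never leaves the TEE
        self._seq = 0

    def seal(self, message: str) -> bytes:
        record = json.dumps({"seq": self._seq, "msg": message}).encode()
        self._seq += 1
        tag = hmac.new(self._key, record, hashlib.sha256).digest()
        return tag + record  # safe to hand to the external component

    def unseal(self, blob: bytes) -> str:
        tag, record = blob[:32], blob[32:]
        expected = hmac.new(self._key, record, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("record tampered with or forged by the host")
        return json.loads(record)["msg"]
```

Because the key is generated inside the TEE and never exported, the external component can store or discard records but cannot alter or forge them without detection.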

TEE loading – TEE-external

Any information which is available to any TEE-external component must be assumed to be unprotected and untrusted.  The only exceptions are if data is signed (for integrity) or encrypted (for confidentiality, though integrity is typically also transparently assured when data is encrypted), as noted above.  The TEE-external component may choose to store or transmit error messages from the TEE-internal component, as noted above, but it may also generate log entries of its own.  There are five possible (legitimate) consumers of these entries:

  1. The host system – the host (general logging, operating system or other components) may consume information around successful loading to know when to start billing, for instance, or consume information around errors for its own purposes or to transmit to the tenant (where the TEE loading component is not in direct contact with the client, or other communication channels are preferred).
  2. The TEE loading internal component – there may be both success and failure events which are useful to communicate to the TEE loading internal component to allow it to make decisions.  Communications to this component assume, of course, that loading was sufficiently successful to allow the TEE loading internal component to start execution.
  3. The TEE runtime external component – if the lifecycle has proceeded to the stage where the TEE runtime component is executing, the TEE loading external component can communicate logging information to it, either directly (if they are executing concurrently) or via another entity such as storage.
  4. The TEE runtime internal component – similarly to case #3 above, the TEE loading external component may be able to communicate to the TEE runtime internal component, either directly or indirectly.
  5. The client – as noted in #1 above, the host may communicate logging information to the client.  An alternative, if an appropriate communications channel exists, is for the TEE loading external component to communicate directly with it.  The client should always treat all communications with this component as untrusted (unless they are being transmitted for the internal component, and are appropriately integrity/confidentiality protected).

The TEE runtime layer

TEE runtime – TEE-internal

While the situation for this component is similar to that for the TEE loading internal component, it is somewhat simpler because the fact that this stage of the lifecycle has been reached means that the application has, by definition, been loaded and is running.  This means that there are a number of different channels for communication of error messages: the application data plane, the runtime control plane and the TEE runtime external component.  Most logging information will generally be directed either to the application (for decision making or transmission over its data plane at the application’s discretion) or to the client via the control plane. Standard practice can generally be applied as to which of these is most appropriate for which use cases.

Transmission of data to the TEE runtime external component needs to be carefully controlled, as the runtime component (unless it is closely coupled with the application) is unlikely to be in a good position to judge what information might be considered sensitive if available to components or entities external to the TEE.  For this reason, either error communication to the TEE runtime external component should be completely avoided, or standardised (and carefully designed) error messages should be employed – which makes standard debugging techniques extremely difficult.

Debugging

Any form of debugging for TEE instances is extremely difficult, and there are two fairly stark choices:

  1. Have a strong security profile and restrict debugging to almost nothing.
  2. Have a weaker security profile and acknowledge that it is almost impossible to ensure the protection of the confidentiality and integrity of the workload (the application and its data).

There are times, particularly during the development and testing of a new application, when the latter is the only feasible approach.  In this case, we can recommend two principles:

  1. Create a well-defined set of error states which can be communicated via untrusted channels (that is, which are generally unprotected from confidentiality and integrity attacks), and which do not allow for “free form” error messages (which are more likely to leak information to a host).
  2. Ensure that any deployment with a weaker profile is closely controlled (and never into production).

These two principles can be combined, and a deployment lifecycle might allow for different profiles: e.g. a testing profile on local hardware allowing free form error messages and a staging profile on external hardware which only allows for “static” error messages.
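The two principles above might be combined something like this (an illustrative Python sketch; the particular error states and profile names are ours, not drawn from any specific implementation):

```python
from enum import Enum

class ErrorState(Enum):
    """Closed set of error states considered safe to emit over untrusted
    channels: no free-form text, so nothing extra to leak to the host."""
    LOAD_FAILED = 1
    ATTESTATION_FAILED = 2
    CHANNEL_UNAVAILABLE = 3
    INTERNAL_ERROR = 4  # deliberately uninformative catch-all

class Profile(Enum):
    TESTING = "testing"        # local hardware: free-form messages allowed
    STAGING = "staging"        # external hardware: static states only
    PRODUCTION = "production"  # static states only, closely controlled

def report(profile: Profile, state: ErrorState, detail: str = "") -> str:
    """Format an error for a channel the host can observe, under a profile."""
    if profile is Profile.TESTING:
        return f"{state.name}: {detail}"  # free form, debugging only
    return state.name                     # static message, nothing to infer
```

Under the testing profile a developer sees the detail they need; under staging or production, the host observes only one of a small, fixed set of strings.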

Standard operation

Standard operation must assume the worst case scenario, which is that the host may block, change and interfere with all logging and error messages to which it has access, and may use them to infer information about the workload (application and associated data), affecting its confidentiality, integrity and normal execution.  Given this, the default must be that all TEE-internal components should minimise all communications to which the host may have access.

Application

Restricting application data plane communication is clearly infeasible in most cases.  All communications should generally be encrypted for confidentiality and integrity protection, however, and designers and architects with particularly strong security policies may wish to consider how to restrict data plane communications further.

Runtime component

Data plane communications from the runtime component are likely to be fewer than application data plane communications in most cases, and there may also be some opportunities to design these with security in mind.

TEE loading and TEE runtime components

These are the components where the most care must be taken, as we have noted above, but also where there may be the most temptation to lower security, if only to allow for easier debugging and error management.

Summary

In a standard cloud deployment, there is little incentive to consider strong security controls around logging and debugging, simply because the host has access not only to all communications to and from a hosted workload, but also to all the code and data associated with the workload at runtime.  For Confidential Computing workloads, the situation is very different, and designers and architects of the TEE infrastructure and even, to a lesser extent, of potential workloads themselves, need to consider very carefully the impact of the host gaining access to messages associated with the workload and the infrastructure components.  It is, realistically, infeasible to restrict all communication to levels appropriate for deployment, so it is recommended that various profiles are created which can be applied to different stages of a deployment, and whose use is carefully monitored, logged (!) and controlled by process.

CCC Newsletter – May 2023


Welcome to the May 2023 edition of the Confidential Computing Consortium newsletter! We look forward to sharing news every month about projects underway, new members, industry events and other useful information to keep you updated with what’s happening at the consortium.

Welcome New Members!

Cryptosat is excited to join the Confidential Computing Consortium. We are working to provide a unique trusted compute environment in space for use cases requiring a perfect air gap and physical isolation. We’re looking forward to contributing to the Confidential Computing technology landscape and establishing fruitful partnerships with other companies in the consortium.

Confidential Computing Summit Use Case Awards

Calling all Confidential Computing experts! Today we’re launching the Confidential Computing Use Case Awards, with the chance to be recognized for the best case study across healthcare, financial services, and adtech. Use this form to tell your story.

Each case study will be evaluated by a panel of judges. Things to keep in mind:

  • The case studies do not need to be deployed. We are interested in nominations that identify the real-world challenges that can be addressed by confidential computing.
  • The use cases will be grouped in the following sectors: FinServ, Healthcare, AdTech, and Other
  • The case study must answer two questions: What is the problem? How does confidential computing provide the solution?

Recent Events

Open Source Summit North America, May 10-12, Vancouver

Mike Bursell and Stephen Walli attended the conference representing the CCC. Confidential Computing talks included:

– Advancements in Confidential Computing – Vojtěch Pavlik, SUSE
– WASM + CC, Secure Your FaaS Function – Xinran Wang & Liang He, Intel
– A WASM Runtime for FaaS Protected by TEE – Sara Wang & Yongli He, Intel
– OpenFL: A Federated Learning Project to Power Your Projects – Ezequiel Lanza, Intel

Upcoming Events

Confidential Computing Summit, June 29th, San Francisco

The Confidential Computing Consortium is a co-organizer of the Confidential Computing Summit. The event will take place in San Francisco on the 29th of June. The CCC and Opaque are launching the Confidential Computing Use Case Awards, asking teams to share their most interesting use cases across healthcare, financial services, adtech, and social good, with the chance to be recognized at the summit.

Webinars

BlindAI: Secure remote ML inference with Intel SGX enclaves

Striking a balance between security, privacy, and performance is a challenge in machine learning applications. In this talk we will present BlindAI, an open-source confidential computing solution that harnesses Intel SGX enclaves to enable secure remote ML inference. Our solution effectively safeguards the confidentiality of both the model and user data while also ensuring the predictions’ integrity. We will discuss the motivation behind BlindAI, how we factored in the specificities and constraints of Intel SGX at the design stage, and share the outcome of an independent security audit of our solution.

FLOSS WEEKLY 731 – Confidential Computing

Dan Middleton, of Intel and the Confidential Computing Consortium (CCC), dives deep on the topic of confidential computing (CoCo) and many related concerns, such as Trusted Execution Environments with Doc Searls and Jonathan Bennett.

Thanks,
The Confidential Computing Consortium

CCC Newsletter – April 2023


Welcome to the April 2023 edition of the Confidential Computing Consortium newsletter! We look forward to sharing news every month about projects underway, new members, industry events and other useful information to keep you updated with what’s happening at the consortium.

Welcome New Members!

Spectro Cloud has recently joined the CCC. Founded by multi-cloud management experts, Spectro Cloud aims to make cloud infrastructure boundaryless for the enterprise. It provides solutions that help enterprises run Kubernetes their way, anywhere.

A word from Mike Bursell, CCC’s new Executive Director

I’m very pleased to announce that I’ve just started a new role as part-time Executive Director for the Confidential Computing Consortium, which is a project of The Linux Foundation. I have been involved from the very earliest days of the consortium, which was founded in 2019, and I’m delighted to be joining as an officer of the project as we move into the next phase of our growth. I look forward to working with existing and future members and helping to expand industry adoption of Confidential Computing.

For those of you who’ve been following what I’ve been up to over the years, this may not be a huge surprise, at least in terms of my involvement, which started right at the beginning of the CCC. In fact, Enarx, the open source project of which I was co-founder, was the very first project to be accepted into the CCC, and Red Hat, where I was Chief Security Architect (in the Office of the CTO) at the time, was one of the founding members. Since then, I’ve served on the Governing Board (twice, once as Red Hat’s representative as a Premier member, and once as an elected representative of the General members), acted as Treasurer, been Co-chair of the Attestation SIG and been extremely active in the Technical Advisory Council. I was instrumental in initiating the creation of the first analyst report into Confidential Computing and helped in the creation of the two technical and one general white paper published by the CCC. I’ve enjoyed working with the brilliant industry leaders who more than ably lead the CCC, many of whom I now count not only as valued colleagues but also as friends.

The position – Executive Director – however, is news. For a while, the CCC has been looking to extend its activities beyond what the current officers of the consortium can manage, given that they have full-time jobs outside the CCC. The consortium has grown to over 40 members now – 8 Premier, 35 General and 8 Associate – and with that comes both the opportunity to engage in a whole new set of activities and a responsibility to listen to the various voices of the membership and to ensure that the consortium’s activities are aligned with the expectations and ambitions of the members. Beyond that, as Confidential Computing becomes more pervasive, it’s time to ensure that (as far as possible) there’s a consistent, crisp and compelling set of messages going out to potential adopters of the technology, as well as academics and regulators.

I plan to be working on the issues above. I’ve only just started and there’s a lot to be done – and the role is only part-time! – but I look forward to furthering the aims of the CCC:

“The Confidential Computing Consortium is a community focused on projects securing data in use and accelerating the adoption of confidential computing through open collaboration.” – The core mission of the CCC

Wish me luck, or, even better, get in touch and get involved yourself.

Recent Events

Kubecon Europe, April 18-21, Amsterdam

– Keynote: MLOps on Highly Sensitive Data – Strict Confinement, Confidential Computing, and Tokenization Protecting Privacy – Maciej Mazur, Principal AI/ML Engineer, Canonical & Andreea Munteanu, AI/ML Product Manager, Canonical

– Confidential Containers Made Easy – Fabiano Fidencio, Intel & Jens Freimann, Red Hat

– The Next Episode in Workload Isolation: Confidential Containers – Jeremi Piotrowski, Microsoft

RSA Conference, April 24-27, San Francisco

CCC member Kate George from Intel went to RSA to raise awareness about the CCC and promote the Confidential Computing Summit.

– The Rise of Confidential Computing, What It Is and What it Means to You – Stephanie Domas, Intel

– Cloud Security Made for the EU: Securing Data & Applications – Dr. Norbert Pohlmann, IT Security Association Germany (TeleTrusT) (Moderator), Ulla Coester, Westphalian University of Applied Sciences Gelsenkirchen (Panelist), Nils Karn, Mitigant by Resility (Panelist), Andreas Walbrodt, enclaive (Panelist)

Upcoming Events

Open Source Summit North America, May 10-12, Vancouver

Mike Bursell will attend the event to promote the CCC. Confidential Computing talks include:

– Advancements in Confidential Computing – Vojtěch Pavlik, SUSE
– WASM + CC, Secure Your FaaS Function – Xinran Wang & Liang He, Intel
– A WASM Runtime for FaaS Protected by TEE – Sara Wang & Yongli He, Intel
– OpenFL: A Federated Learning Project to Power Your Projects – Ezequiel Lanza, Intel

Confidential Computing Summit, June 29th, San Francisco

The Confidential Computing Consortium is a co-organizer of the Confidential Computing Summit. The event will take place in San Francisco on the 29th of June. The CCC and Opaque are launching the Confidential Computing Use Case Awards, asking teams to share their most interesting use cases across healthcare, financial services, adtech, and social good, with the chance to be recognized at the summit.

Webinar

Arm Confidential Compute Architecture, May 23rd

The Arm Confidential Compute Architecture (Arm CCA) builds on top of the Armv9-A Realm Management Extension (RME) by providing a reference security architecture and open-source implementation of hypervisor-based confidential computing. This talk describes the latest open-source project developments (Trusted Firmware, Linux, KVM, EDK2) to enable Arm CCA, including current status and next steps.

CCC Blog: Why is Attestation Required for Confidential Computing?

Alec Fernandez from Microsoft clarifies why the CCC amended the definition of Confidential Computing to add attestation.

Wikipedia

The Wikipedia article for Confidential Computing has now been officially published. The article was led by Mike Ferron-Jones under the guidance of Wikipedia consultant Jake Orlowitz with the help of multiple CCC members. The article is available here:

https://en.wikipedia.org/wiki/Confidential_computing

Thanks,
The Confidential Computing Consortium

Why is Attestation Required for Confidential Computing?


Alec Fernandez (alfernandez@microsoft.com)

At the end of 2022, the Confidential Computing Consortium amended the definition of Confidential Computing. We added attestation as an explicit part of the definition, but beyond updating our whitepaper we did not explain to the community why we made this change.

First off, an attestation is the evidence that you use to evaluate whether or not to trust a Confidential Computing program or environment. It’s sometimes built into a common protocol, as in RA-TLS / Attested TLS. In other uses it might be built into the boot flow of a Confidential VM, or used asynchronously, for example by attaching it to the result of a Confidential Process.

To many of us, attestation was an implicit part of Confidential Computing architecture. However, it is so central to the idea of Confidential Computing that it really needed to be part of the formal definition.

Hardware and software providers have long offered assurances of security and these assurances have oftentimes fallen short of expectations. A historical analysis of the track record for placing trust in individual organizations to protect data raises important questions for security professionals. The recurrence of data breaches has led to understandably deep skepticism of technologies that purport to provide new security protections.

Users desire to see for themselves the evidence that new technologies are actually safeguarding their data.

Attestation is the process by which customers can alleviate their skepticism by getting answers to these questions:  

  • Can the TEE provide evidence showing that its security assurances are in effect?
  • Who is providing this evidence?  
  • How is the evidence obtained?
  • Is the evidence valid, authentic, and delivered through a secure chain of custody?
  • Who judges the evidence? 
  • Is the judge separate from the evidence provider?
  • Who provides the standards against which the evidence is judged?
  • Can evidence assure that the code and data protection claims are in effect? 

Hardware-based attestation evidence is produced by a trusted hardware root-of-trust component of the computing environment. The hardware root-of-trust is a silicon chip or a set of chips that have been specifically designed to be highly tamper resistant. Some have been reviewed by researchers at standards organizations such as NIST, NSA, ICO and ENISA, as well as by academic institutions around the world and the technical community at large. While a critique of the analyses behind hardware roots of trust is beyond the scope of this blog, we take them to represent the current state of the art in computer security. They represent a significant improvement over available alternatives. See reference material at the end of this blog for more information.

Providing Attestation Evidence

Attestation evidence is delivered in a message containing authentic, accurate and timely measurements of system components such as hardware, firmware, BIOS, and the software and data state of the computer being evaluated. Importantly, this attestation evidence is digitally signed by a key known only to the hardware root-of-trust (often the physical CPU) and not extractable. This means that the attestation evidence is secured: it cannot be altered once it leaves the hardware without the alteration being detected, and it is impervious to tampering by the host operating system, the kernel, or the cloud platform provider. This eliminates chain of custody concerns as the evidence flows from the producer to the consumer.

Validating the Authenticity of Attestation Evidence

Before examining the attestation evidence, its source must be established. This is done by verifying the digital signature on the attestation evidence against a certificate issued by the manufacturer of the hardware root of trust, for example the manufacturer of the physical CPU in the computer. If the signature on the attestation evidence verifies against the manufacturer’s certificate, this proves that the attestation report was produced by the CPU hardware. In other words, if you trust the company that manufactured the hardware, then you can trust the attestation report.
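The core of this check can be illustrated with a short sketch, using a simulated hardware key in place of a real root of trust (the names and the toy evidence below are illustrative assumptions, not a real attestation format):

```python
# Toy illustration of the signature check described above: a simulated
# hardware key signs the evidence, and the relying party verifies it
# against the manufacturer's published public key. All names and the
# in-memory "manufacturer" key are assumptions for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The hardware root-of-trust holds the private key; the manufacturer
# publishes the matching public key (in practice, via a certificate).
hw_key = Ed25519PrivateKey.generate()
manufacturer_pub = hw_key.public_key()

evidence = b'{"firmware": "1.14", "mem_encryption": true}'
signature = hw_key.sign(evidence)

# Verifier side: a matching signature proves the evidence came from
# the hardware (this call raises InvalidSignature on any mismatch).
manufacturer_pub.verify(signature, evidence)

# Any alteration of the evidence after signing is detected.
tampered = evidence.replace(b"1.14", b"0.9")
try:
    manufacturer_pub.verify(signature, tampered)
    tamper_detected = False
except InvalidSignature:
    tamper_detected = True
```

In a real deployment the verifier would additionally validate the certificate chain from the signing key up to the manufacturer's root certificate, rather than holding the public key directly.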

Who Judges the Attestation Evidence? Are they Separate from the Evidence Provider?

Having the attestation evidence delivered in a message that is digitally signed by hardware allows for TEE users to establish for themselves that the security assurances provided by the TEE are in place. This can be done without the provider of the computing infrastructure or intervening parties being able to alter the evidence during delivery.

Attestation evidence is highly technical and oftentimes it is not feasible for an organization to judge the evidence themselves. This is especially true when the organization is not specialized in computing infrastructure security. In cases such as these, having a different entity, a third party with security expertise, evaluate the evidence offers a good balance between security and complexity. In this scenario, the computing infrastructure or device user is implicitly trusting the entity that verifies the attestation evidence (the verifier). In such scenarios, it is imperative for the device user to have access to effective mechanisms to verify the authenticity and reliability of the verifier to ensure that the attestation results produced by the verifier are legitimate and trustworthy.

Who provides the standards against which the evidence is judged?

The attestation evidence contains claims about the physical characteristics and the configuration settings of the execution environment. Examples include:

  • CPU manufacturer, model, version, and identifier.
  • Microcode and firmware versions.
  • Configuration settings, e.g., whether memory encryption is enabled.
  • Encryption configuration, e.g., whether a different key is used to protect each individual VM.

The values supplied in the attestation evidence are compared against reference values. For example, the firmware supplier might recommend that it be patched to a specific version due to the discovery of a security vulnerability. The attestation evidence will accurately reflect the current firmware version. But who decides which are acceptable firmware versions?

  • Since the firmware is typically the responsibility of the hardware manufacturer and they have intimate knowledge of the details behind its security baseline, they should certainly be consulted.
  • The owner of the device or computing infrastructure should also be consulted since they could be responsible for any risks of data exfiltration.
  • In a public cloud environment, the computing infrastructure provider controls patching the firmware to the hardware manufacturer’s recommended version, but they do not make use of the resulting environment. The user of the TEE is responsible for data placed in the environment and must ensure that the firmware complies with their security policy.

Remote attestation provides a way to evaluate evidence that shows the actual firmware version provided by the TEE. This evidence is provided directly by the hardware on which the TEE is executing and allows the attestation verifier to independently verify when the patching was completed.

More generally, attestation can be used to check whether all applicable security standards and policies have been met. This practically eliminates the possibility that a configuration error on the part of the computer or device owner will result in a security guarantee being falsely reported. The computer or device might be misconfigured in a way that goes undetected by its owner, but the attestation evidence comes directly from the hardware component that is executing the TEE and so remains accurate.
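The appraisal step described above can be sketched as a simple comparison of claims against reference values (the claim names and accepted values here are purely illustrative, not drawn from any real appraisal policy format):

```python
# Minimal sketch of comparing attestation claims against reference
# values, as described above. Claim names and acceptable values are
# illustrative assumptions only.
ACCEPTED = {
    "cpu_model": {"ExampleCPU-v3"},
    "firmware_version": {"1.14", "1.15"},  # patched versions only
    "memory_encryption": {True},
    "per_vm_keys": {True},
}

def appraise(evidence: dict) -> list:
    """Return the list of non-compliant claims (empty means compliant)."""
    return [
        claim for claim, allowed in ACCEPTED.items()
        if evidence.get(claim) not in allowed
    ]

# A TEE reporting unpatched firmware fails appraisal on exactly that claim.
stale = {"cpu_model": "ExampleCPU-v3", "firmware_version": "1.12",
         "memory_encryption": True, "per_vm_keys": True}
assert appraise(stale) == ["firmware_version"]
```

Real verifiers express such policies in richer formats (and must also check evidence freshness and signatures), but the core comparison is as above.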

Relying on Attestation Evidence to Secure a TEE

An example of using attestation to provide data security is secure key release (SKR). One excellent use case for SKR is configuring your key management infrastructure (KMI) to evaluate the attestation evidence against a policy controlled by a verifier that the owner of the TEE deems trustworthy, and to refuse to supply the key needed to decrypt the computer’s OS disk unless the attestation evidence shows the computer to be in compliance. In this example, the attestation evidence is generated when the computer is powered on and sent to the KMI. If the attestation evidence indicates that the TEE is not in compliance with the policy (perhaps because the CPU firmware is not an acceptable version), the KMI will not release the decryption key to the compute infrastructure. The data cannot be decrypted, which prevents the risk of data exfiltration.
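The SKR gate itself reduces to a small policy check in front of the key store; this sketch uses an invented policy and placeholder key material, not any real KMI API:

```python
# Sketch of the secure key release (SKR) flow described above. The
# policy contents, key material, and function names are illustrative
# assumptions, not a real KMI interface.
REFERENCE_POLICY = {"firmware_version": {"1.14", "1.15"}}
DISK_KEY = b"\x00" * 32  # stand-in for the OS-disk decryption key

def release_key(evidence: dict) -> bytes:
    """Release the disk key only if the attested claims satisfy policy."""
    for claim, allowed in REFERENCE_POLICY.items():
        if evidence.get(claim) not in allowed:
            raise PermissionError("non-compliant claim: " + claim)
    return DISK_KEY

# Compliant TEE: the KMI releases the key and the OS disk can be decrypted.
assert release_key({"firmware_version": "1.15"}) == DISK_KEY

# Unpatched firmware: the key is withheld, so the data stays encrypted.
try:
    release_key({"firmware_version": "1.12"})
    withheld = False
except PermissionError:
    withheld = True
```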

Conclusion

Confidential Computing, through the use of hardware-based, attested TEEs and remote attestation, protects sensitive data and code against an increasingly common class of threats that occur during processing, while data is in use. These threats were previously difficult, if not impossible, to mitigate. Additionally, Confidential Computing allows for protecting data against the owner of the system and public cloud platforms, which traditionally had to simply be trusted not to use their elevated permissions to access the data.

 

References

https://nvlpubs.nist.gov/nistpubs/ir/2022/Nist.IR.8320.pdf

https://tools.ietf.org/html/draft-ietf-rats-architecture

CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_Updated_November_2022.pdf (confidentialcomputing.io)

Common-Terminology-for-Confidential-Computing.pdf (confidentialcomputing.io)

CCC_outreach_whitepaper_updated_November_2022.pdf (confidentialcomputing.io)

CCC Newsletter – March 2023

By Newsletter No Comments

Welcome to the March 2023 edition of the Confidential Computing Consortium newsletter! We look forward to sharing news every month about projects underway, new members, industry events and other useful information to keep you updated on what’s happening at the consortium.

New Members

Canonical


Canonical joined the CCC last month and has now published a blog post:

https://canonical.com/blog/canonical-joins-the-confidential-computing-consortium

SUSE


SUSE has recently joined the CCC and has also published a blog post:

https://www.suse.com/c/suse-joins-the-confidential-computing-consortium/

Customers and partners rely on SUSE to deliver a secure, open source platform that fully protects data regardless of its state.  Confidential Computing safeguards data in use without impacting business-critical workloads.  Joining the Confidential Computing Consortium enables SUSE to collaborate with open source leaders to advance these security technologies for our customers.

Recent Events

FOSS Backstage


The Confidential Computing Consortium participated in FOSS Backstage, which took place in Berlin on March 13-14. CCC Outreach Chair Nick Vidal gave a talk about combining open source supply chain technologies such as SBOMs and Sigstore with Confidential Computing. The presentation was very much inspired by the SLSA security framework, which highlights the major threats at each stage of the supply chain. Interestingly, SLSA currently does not cover much of the last mile of the supply chain, when the application/workload is actually deployed, and this is where Confidential Computing can play an important role. The video recording is available here:

https://program.foss-backstage.de/fossback23/talk/ZMCST7/

OC3


On March 15th, for the third year in a row, the Open Confidential Computing Conference (OC3) brought the confidential computing community together to discuss the latest developments, use cases, and projects. The event was hosted by Edgeless Systems and proudly sponsored by the Confidential Computing Consortium, amongst others. There were 29 sessions with 37 expert speakers from Intel, Microsoft, NVIDIA, IBM, AMD, SUSE and many more. 1,227 people registered across industries from all over the world. The recordings are available on demand on Edgeless Systems’ YouTube channel.

You can find Ben Fischer’s keynote on behalf of the CCC here:

A CTO panel with Greg Lavender, Mark Russinovich, Mark Papermaster and Ian Buck is available here:

Webinar: 

Dan Middleton, CCC TAC Chair and principal engineer at Intel, and Dave Thaler, former CCC TAC Chair and software architect at Microsoft, shared their work with Confidential Computing and their efforts to further this technology via the Confidential Computing Consortium. Learn about confidential computing, the problems it solves, and how you can get involved:

https://openatintel.podbean.com/e/confidential-computing/

Upcoming Events

Confidential Computing Summit


The Confidential Computing Consortium is a co-organizer of the Confidential Computing Summit. The event will take place in San Francisco on the 29th of June. The Confidential Computing Summit brings together experts, innovators, cloud providers, software and hardware providers, and user organizations from all industries to accelerate key initiatives in confidential computing. The Call for Speakers is open.

Women in Confidential Computing

In March we celebrated International Women’s Month. We have several women who are leading the way in advancing Confidential Computing, including:

  • Raluca Ada Popa: Raluca is an associate professor of computer science at UC Berkeley. She is interested in security, systems, and applied cryptography. Raluca developed practical systems that protect data confidentiality by computing over encrypted data, as well as designed new encryption schemes that underlie these systems. Some of her systems have been adopted into or inspired systems such as SEEED of SAP AG, Microsoft SQL Server’s Always Encrypted Service, and others. Raluca received her PhD in computer science as well as her two BS degrees, in computer science and in mathematics, from MIT. She is the recipient of an Intel Early Career Faculty Honor award, George M. Sprowls Award for best MIT CS doctoral thesis, a Google PhD Fellowship, a Johnson award for best CS Masters of Engineering thesis from MIT, and a CRA Outstanding undergraduate award from the ACM.
  • Mona Vij: Mona is a Principal Engineer and Cloud and Data Center Security Research Manager at Intel Labs, where she focuses on scalable Confidential Computing for end-to-end cloud-to-edge security. Mona received her Master’s degree in Computer Science from the University of Delhi, India. She leads research engagements on trusted execution with a number of universities. Her research has been featured in journals and conferences including USENIX OSDI, USENIX ATC and ACM ASPLOS, among others. Mona’s research interests primarily include trusted computing, virtualization, device drivers and operating systems.
  • Nelly Porter: Nelly leads Confidential Computing at Google, with over 10 years’ experience in platform security, virtualization security, PKI, cryptography, authentication, and authorization. She works on multiple areas at Google, from the Titan root of trust to Shielded and Confidential Computing, and has 25 patents and defensive publications. Prior to Google, she worked at Microsoft in the virtualization and security space, at HP Labs advancing clustering technology, and at Scientix (Israel) as a firmware and kernel driver engineer. She has two sons, both in the CS field; one of them works for Google.
  • Lily Sturmann: Lily is a senior software engineer at Red Hat in the Office of the CTO in Emerging Technologies. She has primarily worked on security projects related to remote attestation, confidential computing, and securing the software supply chain.
  • Ijlal Loutfi: Ijlal is a security product manager at Canonical, the publishers of Ubuntu. She is also a post-doctoral researcher at the Norwegian University of Science and Technology, working with Professor Bian Yang. Her PhD was on trusted computing, trusted execution environments and online user authentication. Her research interests include online identity management, namely self-sovereign identities; applied cryptography, namely proxy re-encryption; and verifiable remote computation.
  • Mary Beth Chalk: Mary Beth is the Co-founder & Chief Commercial Officer at BeeKeeperAI, Inc. and has over 25 years of healthcare innovation experience improving outcomes through data-informed decision making, services, and processes. Her early work with health systems was grounded in statistical process control, enabling healthcare executives to discern the signal from the noise of their data. As COO of a mental health organization, she created and implemented a system of predictive algorithms to improve the effectiveness of psychotherapy treatment. Mary Beth was also the co-founder of a chronic disease self-management platform that combined monitoring device data with algorithm-driven digital behavioral coaching to improve health engagement and outcomes. Her current work is focused on the development of healthcare AI from the perspective of the data owner and the algorithm owner, including issues such as data access and intellectual property.
  • Ellison Anne Williams: Anne is the Founder and CEO of Enveil, the pioneering data security startup protecting Data in Use. She has more than a decade of experience spearheading avant-garde efforts in the areas of large scale analytics, information security and privacy, computer network exploitation, and network modeling at the National Security Agency and the Johns Hopkins University Applied Physics Laboratory. In addition to her leadership experience, she is accomplished in the fields of distributed computing and algorithms, cryptographic applications, graph theory, combinatorics, machine learning, and data mining and holds a Ph.D. in Mathematics (Algebraic Combinatorics), a M.S. in Mathematics (Set Theoretic Topology), and a M.S. in Computer Science (Machine Learning).
  • Sandrine Murcia: Sandrine is the CEO and co-founder of Cosmian, The Personal Data Network. Powered by peer-to-peer and blockchain technologies, Cosmian is the reference for personal data control and access, while favoring sustainable economic models for publishers and brands. Sandrine began her career in 1995 at Procter & Gamble. In 1999, thrilled by the emerging potential of the Internet, she switched gears and joined Microsoft’s MSN consumer division. In 2004, Sandrine joined Google, where she served as Southern Europe Marketing Director. Sandrine holds a BA in Biotechnologies from INSA Lyon and an HEC Paris Master in Entrepreneurship, and is a 2004 Kellogg School of Management MBA graduate.

CCC and FHE

Dan Middleton, CCC TAC Chair, and Rosario Cammarota, Chief Scientist | Privacy-Enhanced Computing Research, Intel Corp., published a special blog post comparing Confidential Computing and Homomorphic Encryption. The blog post is available here:

Wikipedia

The Wikipedia article for Confidential Computing is now in the “Drafts” section, awaiting review and publication by one of the Wikipedia maintainers. The article was led by Mike Ferron-Jones under the guidance of Wikipedia consultant Jake Orlowitz, with the help of multiple CCC members. The article is available here:

https://en.wikipedia.org/wiki/Draft:Confidential_computing

Thanks,

The Confidential Computing Consortium

Unifying Remote Attestation Protocol Implementations

By Blog, Featured Article No Comments

Shanwei Cen (@shnwc), Dan Middleton (@dcmiddle)

We’re excited to announce some recent attestation news. One of the hallmarks of confidential computing is the ability to build trusted communication with an application running in a hardware-based trusted execution environment. To make attestation easily accessible, it can be incorporated into common protocols, so that developers don’t need to figure out all the details of building a secure protocol themselves. One of these protocols is called Remote Attestation TLS (RA-TLS), which builds on the ubiquitous Transport Layer Security protocol underlying most secure internet communication. It turns out several projects independently implemented RA-TLS with tiny but incompatible differences. In the CCC Attestation SIG, we’ve agreed on, and in some cases already implemented, changes to make them all interoperable.

The CCC Attestation SIG is chartered to develop attestation-related software aimed at improving interoperability, and to achieve harmonization and de-fragmentation between multiple projects. One approach is to identify and review projects in SIG meetings, propose improvements for interoperability and standardization, and work with these projects for implementation and tests. Interoperable RA-TLS is a great example showcasing how the SIG delivers on its charter.

RA-TLS (Remote Attestation TLS) architecture is defined in the white paper Integrating Remote Attestation with Transport Layer Security, to enable Intel® Software Guard Extensions (Intel® SGX) remote attestation during the establishment of a standard Transport Layer Security (TLS) connection. In a TLS server / client scenario, the TLS server runs inside an SGX enclave. It generates a public-private keypair, creates an SGX report with a hash of the public key in its user-data field, and gets an SGX quote for this report. It then creates an X.509 certificate with a custom extension containing this SGX quote. This customized certificate is sent to a TLS client in the TLS handshake protocol. The client gets the SGX quote from the certificate and performs remote attestation to verify that the connected server runs inside an authentic Intel® SGX enclave.
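The server-side flow can be sketched with the Python cryptography library. The OID, key choices, and the placeholder "quote" bytes below are assumptions for illustration, not values from the white paper or any SGX SDK:

```python
# Sketch of the RA-TLS server-side flow described above. The OID and
# the fake "quote" are illustrative stand-ins; a real implementation
# obtains the quote from the SGX quoting infrastructure.
import hashlib
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# 1. Generate the TLS keypair (inside the enclave, in a real deployment).
key = ec.generate_private_key(ec.SECP256R1())
pub_der = key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# 2. Bind the public key to the enclave: its hash goes into the SGX
#    report's user-data field, and the quote covers that report.
report_data = hashlib.sha256(pub_der).digest()
fake_quote = b"\x03\x00" + report_data  # placeholder, not a real quote

# 3. Embed the quote in a self-signed certificate via a custom extension.
QUOTE_OID = x509.ObjectIdentifier("1.2.3.4.5")  # illustrative OID only
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "ra-tls-server")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.now(timezone.utc))
    .not_valid_after(datetime.now(timezone.utc) + timedelta(days=1))
    .add_extension(x509.UnrecognizedExtension(QUOTE_OID, fake_quote),
                   critical=False)
    .sign(key, hashes.SHA256())
)

# 4. Client side: extract the quote from the certificate; a real client
#    would then have it appraised by an attestation verifier.
ext = cert.extensions.get_extension_for_oid(QUOTE_OID)
assert ext.value.value == fake_quote
```

The incompatibilities discussed below arose precisely in the details this sketch glosses over: which OID to use, which quote formats to support, and how the public key is hashed into the report.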

There are a few aspects of RA-TLS architecture that were not covered in this white paper. Some of the gaps include the specific X.509 extension OID value for the SGX quote, the supported types of SGX quote, and how the public key is hashed. Additionally, since the white paper was published, new TEEs like Intel® Trust Domain Extensions (Intel® TDX) and new quote formats have become available. The level of specificity in the RA-TLS paper left room for incompatibility between different implementations and prevented their interoperability.

RA-TLS has been supported in multiple open-source projects, including Gramine, RATS-TLS, Open Enclave Attested TLS, and SGX SDK Attested TLS. The CCC Attestation SIG invited these projects to its meetings for review, and recommended further investigation to look into harmonization between them for interoperability. Following up on this recommendation, we conducted an in-depth investigation and identified areas of incompatibility. We documented our findings, created a draft proposal for an interoperable RA-TLS architecture, and presented our work back to the SIG.

Based on the interoperable RA-TLS draft proposal, we refined the design and aligned it with the upcoming DICE Attestation Architecture v1.1 draft standard on the X.509 extension OID value and the evidence format definition (as a tagged CBOR byte string). We created a CCC Attestation SIG GitHub project, interoperable-ra-tls, to host the design documents and interoperability tests. This project also facilitates discussion among members of the RA-TLS projects and the CCC Attestation SIG community in general. In addition, we registered the needed CBOR tags with the IANA registration service. In the process, we provided feedback to the DICE Attestation Architecture workgroup for refinement of their draft standard specification.
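The "tagged CBOR byte string" framing can be illustrated by writing out the RFC 8949 encoding by hand; the tag number 60000 used here is an illustrative placeholder, not the actual IANA-registered value:

```python
# Hand-rolled sketch of wrapping attestation evidence as a tagged CBOR
# byte string (RFC 8949 framing). Tag 60000 is illustrative only; real
# implementations use the IANA-registered tags and a CBOR library.
def cbor_tagged_bytes(tag: int, payload: bytes) -> bytes:
    def head(major: int, value: int) -> bytes:
        # Encode a CBOR item head: 3-bit major type plus length/argument.
        if value < 24:
            return bytes([(major << 5) | value])
        if value < 256:
            return bytes([(major << 5) | 24, value])
        if value < 65536:
            return bytes([(major << 5) | 25]) + value.to_bytes(2, "big")
        return bytes([(major << 5) | 26]) + value.to_bytes(4, "big")
    # Major type 6 = tag, major type 2 = byte string.
    return head(6, tag) + head(2, len(payload)) + payload

quote = b"\x03\x00example-quote-bytes"
framed = cbor_tagged_bytes(60000, quote)
# 0xd9 = tag with a 2-byte argument, then 0xea60 (= 60000),
# then the byte-string head and the payload itself.
assert framed[:3] == b"\xd9\xea\x60"
assert framed.endswith(quote)
```

The tag tells a receiving verifier which evidence format (e.g. which TEE's quote) the byte string contains, which is what makes the certificate extension self-describing.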

Great progress has been made in implementing this proposed interoperable RA-TLS scheme in the RA-TLS projects. We’ve worked with all the projects to create issues and pull requests for their implementations. Notably, as discussed in some of the interoperable-ra-tls project issues, Gramine and RATS-TLS have completed their implementations and have been active in interoperability tests.

In summary, the interoperable RA-TLS work demonstrated the value of the CCC Attestation SIG in providing a constructive forum to collaborate on attestation technology. We invite you to try out the new unified implementations in Gramine and RATS-TLS. If you are interested in getting more involved, please join us at the CCC Attestation SIG or any other facet of our Confidential Computing Consortium open source community. All are welcome here.

CCC Newsletter – February 2023

By Newsletter No Comments

Welcome to the February 2023 edition of the Confidential Computing Consortium newsletter! We look forward to sharing news every month about projects underway, new members, industry events and other useful information to keep you updated on what’s happening at the consortium. This newsletter is also available on our website.

Recent Events

FOSDEM

The Confidential Computing Consortium participated in the Confidential Computing devroom at FOSDEM on the 4th and 5th of February. The event was organized by Jo Van Bulck and Fritz Alder, from the University of Leuven, Belgium, and Fabiano Fidencio, from Intel. This was the fourth edition of this devroom at FOSDEM, and the event was very successful. The devroom, with a capacity of 80 attendees, was mostly full throughout the day. Half of the people in the devroom had heard of Confidential Computing, and many of the speakers were members of the CCC. Jo and Fritz highlighted the importance of bringing developers and academia together around Confidential Computing. There was also a social event organized by Richard Searle, Chair of the EUAC.

State of Open Con

The Confidential Computing Consortium participated in the State of Open Con in London on the 7th and 8th of February. This was the first conference of its kind organized by OpenUK, held at the Queen Elizabeth II Centre in the heart of London. Amanda Brock, the Executive Director of OpenUK, kicked off the event with a keynote. Other keynote speakers included Jimmy Wales, Founder of Wikipedia, Camille Gloster, Deputy National Cyber Director from the White House, and Eric Brewer, VP Infrastructure & Google Fellow. The CCC had a booth where Nick Vidal, the CCC Outreach Chair, was joined by Liz Moy (Evervault). There was good engagement at the booth, with the presentation of demo use cases that resonated with attendees. Stephen Walli, the CCC Chair, was also present and gave a talk entitled “What do we mean by Open Governance?” Mike Bursell, co-founder of the Enarx project, gave an entertaining talk on Confidential Computing.

CCC Webinar: Confidential Computing in Financial Services

The CCC webinar held this February is already available online. Featured speakers include Bessie Chu (Cape Privacy), Gavin Uhma (Cape Privacy), Mark F. Novak (JP Morgan Chase), and Richard Searle (Fortanix).

Upcoming Events

OC3

The Confidential Computing Consortium is a sponsor of the Open Confidential Computing Conference (OC3). The online conference will take place on the 15th of March. Registration is free. Stephen Walli, Chair of the CCC, will give one of the keynotes. The main keynote “Industry Perspectives: the impact and future of confidential computing” features Ian Buck, VP of Hyperscale and HPC at NVIDIA, Mark Papermaster, CTO & EVP at AMD, Mark Russinovich, CTO at Microsoft Azure, and Greg Lavender, CTO of Intel.

Confidential Computing Summit

The Confidential Computing Consortium is a co-organizer of the Confidential Computing Summit. The event will take place in San Francisco on the 29th of June. The Confidential Computing Summit brings together experts, innovators, cloud providers, software and hardware providers, and user organizations from all industries to accelerate key initiatives in confidential computing. The Call for Speakers is open.

White Papers & Reports

The National Cybersecurity Center of Excellence (NCCoE) has released a draft report, NIST Interagency Report (NISTIR) 8320D, Hardware Enabled Security: Hardware-Based Confidential Computing, for public comment. The public comment period for this draft is open through April 10, 2023. Abstract from the report: In today’s cloud data centers and edge computing, attack surfaces have shifted and, in some cases, significantly increased. At the same time, hacking has become industrialized, and most security control implementations are not coherent or consistent. The foundation of any data center or edge computing security strategy should be securing the platform on which data and workloads will be executed and accessed. The physical platform represents the first layer for any layered security approach and provides the initial protections to help ensure that higher-layer security controls can be trusted. This report explains hardware-enabled security techniques and technologies that can improve platform security and data protection for cloud data centers and edge computing.

Technical Advisory Committee

As part of 2023 goals, the TAC is looking to increase the impact of the CCC in the ecosystem:

  • Cross-project Integration event for discussion.
  • Portfolio growth and maturity, hosting projects that are adopted by the community. Look into new projects from member companies and academic research.
  • Cross-org and Cross-SIG coordination.
  • Outbound education and DCI revisit.

– Dan Middleton, TAC Chair (2023)

Outreach Committee

The CCC Outreach Committee has brought in Jake Orlowitz (WikiBlueprint) as the Wikipedia consultant with the goal of facilitating the creation of a top-quality Wikipedia article on Confidential Computing on English Wikipedia using an efficient participatory approach. As a result of this collaborative participation, Mike Ferron-Jones (Intel) has shared a Wikipedia article draft.

The CCC Outreach Committee has also brought Noah Lehman (Linux Foundation) as a social media consultant with the goal of facilitating the creation of top-quality posts on Twitter and LinkedIn. In collaboration with CCC members, he’ll create up to 8 on-demand social posts per month (this includes social posts promoting ad-hoc announcements, events, news and initiatives) and up to 4 on-demand social posts per month shared on Linux Foundation social media. Noah has shared the Social Media plan with the CCC.

Kate George (Intel) has volunteered to help with the CCC Event Strategy. She highlighted 5 event objectives: 1. Raise awareness of Confidential Computing & Open-source projects under the foundation, and participating companies; 2. Accelerate the adoption of Confidential Computing; 3. Present panels, talks, and demo cases to targeted audiences – security, health care, financial services, and government. (Consider compliance piece too); 4. Recruit new members or projects; and 5. Foster collaboration and open-source. Kate and Nick Vidal have shared the Event Strategy slides and List of Events.

– Nick Vidal, Outreach Chair (2023)

Projects

Enarx

The Enarx project is looking for a custodian, as Profian had to close its doors. Both Profian and Red Hat have invested heavily in the development of Enarx, which has reached a good stable release with a number of key components to establish the foundations for a comprehensive Confidential Computing solution. The Linux Foundation is providing full support to the project.

Gramine

Gramine version 1.4 has been released, with important new features, including support for EDMM (Enclave Dynamic Memory Management), and performance improvements. Key milestones for 2023 include supporting communication with hardware accelerators (GPUs), dynamic thread creation/destruction, additional runtimes and workloads, integration with confidential container deployments (Kata Containers, enclave-cc), interoperable RA-TLS (standardization), additional TEE backends (Intel TDX), and exploring coarse-grained partitioning for certain I/O-bound applications (DPDK).

Keystone

Keystone aims to enable TEEs on (almost) all RISC-V processors. It is very popular in academia, gaining 133 yearly citations (+28% YoY); however, in the past year four students from UC Berkeley working on Keystone have graduated and left the project. Key milestones for 2023 include better application support (dynamic libraries), parity with industry standards, increased dev board accessibility, and close work with the RISC-V AP-TEE working group.

Thanks,

The Confidential Computing Consortium

CCC Newsletter – January 2023

By Newsletter No Comments

Welcome to the January 2023 edition of the Confidential Computing Consortium newsletter! We look forward to sharing news every month about projects underway, new members, industry events and other useful information to keep you updated on what’s happening at the consortium. This newsletter is also available on our website.

Introduction

The start of the new year is the perfect opportunity to reflect on the year that has passed and what we accomplished collectively in 2022. It was a pivotal year for the CCC in many regards. Please check the updates from the Technical Advisory Committee, the Outreach Committee, the CCC projects, and the Special Interest Groups.

New Members

Cape Privacy and Canonical joined the Confidential Computing Consortium.

Cape Privacy is a confidential computing platform to easily run serverless functions on encrypted data. Cape empowers developers to build secure applications which protect the underlying data and code from the cloud.

Canonical is committed to enabling Ubuntu users to leverage the strong run-time confidentiality and integrity guarantees that confidential computing provides. The Confidential Computing Consortium’s mission of driving cross-industry open source software, standards and tools greatly resonates with us, and we are really excited to have joined its members.

Upcoming Events

FOSDEM

The Confidential Computing Consortium will be participating at the Confidential Computing devroom at FOSDEM. A social event is being sponsored by the CCC on the 4th of February.

State of Open Con

The Confidential Computing Consortium will have a table at the State of Open Con, a conference being organized by OpenUK in London on the 7-8th of February.

CCC Webinar: Confidential Computing in Financial Services

The next CCC webinar will happen on February 16 at 8:00 am PT. Featured speakers include Bessie Chu (Cape Privacy), Gavin Uhma (Cape Privacy), Mark F. Novak (JP Morgan Chase), and Richard Searle (Fortanix).

White Papers & Reports

The Confidential Computing Consortium has published the Common Terminology for Confidential Computing. As more companies and open source projects begin to use similar terms to describe similar paradigms that build upon hardware-based, attested Trusted Execution Environments (TEEs), it will be increasingly important that vendors use consistent terminology that describes the ways in which these new capabilities are applied within different functional domains.

Technical Advisory Council

It was a busy year for the Technical Advisory Council (TAC). We had a number of goals for the year, spanning the spectrum from maturing our projects to collaborating with other open organizations to acting on our diversity & inclusion plans. Attestation was a pronounced theme for the year. We revised the definition of Confidential Computing to include attestation as an essential element, and the TAC approved the Veraison project, which focuses on building blocks for attestation verification. We created the Attestation SIG last year, and throughout 2022 it found its legs and produced a good deal of content. You can browse our meeting recordings and presentations for a series of talks on Secure Channels and Attestation Formats. This sharing led to two additional initiatives. The CCC projects Gramine, Occlum, and Open Enclave SDK all relied on separate, non-interoperable implementations of “Remote Attestation TLS.” The Attestation SIG helped uncover and resolve these variations, arriving at a proposal to harmonize the three projects’ implementations. Contributors to the SIG are also creating an Attested TLS proof of concept based on a similar design. We look forward to attestation of TEEs becoming a fundamental part of communications as Confidential Computing becomes pervasive.

Harmonization was not unique to the Attestation SIG. The TAC also engaged with a variety of organizations, looking for opportunities for collaboration and coordination. We hosted speakers from RISC-V, the MPC Alliance, IETF, TCG, CDCC, TrustedComputing.org, HomomorphicEncryption.org, the PCI SIG WG, and the OCP Security SIG. In fact, most of our TAC meetings host a Tech Talk, and our meetings have become a place for learning about a variety of security-related technical topics. As an open, collaborative community, everyone is welcome to join our meetings or view the recordings. We hope to see you at one in 2023.

The TAC also produced direct collateral. In addition to revising our primary white paper, we generated a new white paper which is going through final layout. That paper focuses on terminology, giving greater clarity to the different ways Confidential Computing artifacts can be packaged and what each should imply for a consumer. We were also able to form a collective response to the OSTP’s request for comments on Privacy Enhancing Technologies (PETs).

This government interaction suggested a broader need for similar discourse, so the TAC approved the creation of a Governance, Risk, and Compliance SIG. This newly chartered SIG already has representatives from Meta, Microsoft, Intel, NVIDIA, Arm, CSA, JPMorgan Chase, Anjuna, and others.

Of course, as an open source organization, our main focus is on open source projects, and this year the TAC provided those projects with additional resources. Our focus on diversity and inclusion took a few forms. Each of the projects was introduced to D&I training, specifically for open source, provided by the Linux Foundation. We made Outreachy internships available, and Veracruz and Enarx piloted this mentorship program for the rest of the CCC. As the year progressed we created other resources for projects: increased funding for CI, conference travel funding, and additional security tooling.

All in all, it was a very productive year for the Technical Advisory Council, our SIGs, and our projects. We have a number of ambitious goals coming together for 2023 and will share those in a future blog post.

– Dan Middleton, TAC Chair (2023)

Outreach Committee

2022 was a year of two halves. While the effects of COVID restrictions were still being felt in the first half of the year, things really turned around in the summer, and by the end of the year life was back to pre-COVID levels in most regions of the world. The Outreach Committee had to be nimble and adapt to the changing circumstances, and much of its work laid the foundation to hit the ground running again in 2023.

The committee implemented multiple important initiatives during this time, including:

  • For the second year in a row, CCC sponsored the OC3 Summit, a virtual Open Confidential Computing Conference held in early 2022.
  • Building brand awareness and visibility at industry events like RSA. We negotiated a no-cost co-marketing arrangement whereby RSA promoted the CCC on their website and in promotions, and the CCC did the same for RSA. We will have a similar arrangement with RSA in 2023.
  • Expanding our presence to Latin America by participating at Roadsec 2022 in São Paulo, the biggest hacker festival in Latin America.
  • After a hiatus due to COVID, the CCC had a presence at Black Hat USA in Las Vegas. This included a meeting room where we received visitors wanting to learn about and/or get engaged with the CCC. We also got exposure in some member booths at the show through presentations, CCC handouts, and the like.
  • We were also able to get brand visibility at the Crypto & Privacy Village at DEF CON 2022.
  • Rekindled industry analyst interactions, including a recent briefing with ABI Research and communications with Gartner, Forrester, IDC, 451 Research, OMDIA, Nemertes, and other Tier 2/3 analyst firms.
  • Secured a speaking spot for the consortium in the keynote segment of the upcoming OC3 event in March 2023.
  • Signed up a consultant to greatly increase our social media activity starting in January 2023.
  • Shortlisted a consultant to help guide the committee in getting Confidential Computing onto Wikipedia.
  • Made good progress on a content refresh of our website, with the updates scheduled to roll out in March 2023.

The committee is very excited about the foundation that has been laid, and we are looking forward to a highly successful 2023!

– Ravi Sharma, Outreach Chair (2022)

Projects

Please find updates from the CCC projects below:

Special Interest Groups

Please find updates from the SIGs below:

Thanks,

The Confidential Computing Consortium