Confidential Computing: logging and debugging


Mike Bursell

This article is a slightly edited version of an article originally published at https://blog.enarx.dev/confidential-computing-logging-and-debugging/

Debugging applications is an important part of the development process, and one of the mechanisms we use for it is logging: providing extra details about what’s going on in (and around) the application to help us understand problems, manage errors and (when we’re lucky!) monitor normal operation.  Logging, then, is useful not just for abnormal but also for normal (“nominal”) operation.  Log entries and other error messages can be very useful, but they can also provide information to other parties – sometimes information which you’d prefer they didn’t have.  This is particularly true when you are thinking about Confidential Computing: running applications or workloads in environments where you really want to protect the confidentiality and integrity of your application and its data.  This article examines some of the issues that we need to consider when designing Confidential Computing frameworks, the applications we run in them, and their operations.  It is written partly from the point of view of the Enarx project, but that is mainly to provide some concrete examples: these have been generalised where possible.  Note that this is quite a long article, as it goes into detailed discussion of some complex issues and tries to examine as many of the alternatives as possible.

First, let us remind ourselves of one of the underlying assumptions of Confidential Computing in general: that you don’t trust the host. The host, in this context, is the computer running your workload within a TEE instance – your Confidential Computing workload (or simply “workload”). And when we say that we don’t trust it, we really mean it: we don’t want to leak any information to the host which might allow it to infer anything about the workload that is running, either in terms of the program itself (and any associated algorithms) or the data.

Now, this is a pretty tall order, particularly given that the state of the art at the moment doesn’t allow for strong protections around resource utilisation by the workload. There’s nothing the workload can do to stop the host system from starving it of CPU resources, slowing it down, or even stopping it from running altogether.  This presents the host with many opportunities for artificially imposed timing attacks, against which it is very difficult to protect.  In fact, there are other types of resource starvation and monitoring around I/O as well, which are also germane to this discussion.

Beyond this, the host system can also attempt to infer information about the workload by monitoring its resource utilisation without any active intervention. To give an example, let us say that the host notices that the workload creates a network socket to an external address. It (the host) starts monitoring the data sent via this socket, and notices that it is all encrypted using TLS. The host may not be able to read the data, but it may be able to infer that a specific short burst of activity just after the opening of the socket corresponds to the generation of a cryptographic key. This information on its own may be sufficient for the host to fashion passive or active attacks to weaken the strength of this key.

None of this is good news, but let’s extend our thinking beyond just normal operation of the workload and consider debugging in general and error handling in particular. For the sake of clarity, we will posit a tenant with a client process on a separate machine (considered trusted, unlike the host), and a TEE instance on the host made up of four layers, including the associated workload. This may not be true for all applications or designs, but it is a useful generalisation, and covers most of the issues that are likely to arise.  This architecture models a cloud workload deployment. Here’s a picture.

TEE layers and components

These layers may be defined thus:

  1. application layer – the application itself, which may or may not be aware that it is running within a TEE instance. For many use cases, this, from the point of view of a tenant/client of the host, is the workload as defined above.
  2. runtime layer – the context in which the application runs. How this is considered is likely to vary significantly between TEE type and implementations, and in some cases (where the workload is a full VM image, including application and operating system, for instance), there may be little differentiation between this layer and the application layer (the workload includes both). In many cases, however, the runtime layer will be responsible for loading the application layer – the workload.
  3. TEE loading layer – the layer responsible for loading at least the runtime layer, and possibly some other components into the TEE instance. Some parts of this are likely to exist outside of the TEE instance, but others (such as a UEFI loader for a VM) may exist within it. For this reason, we may choose to separate “TEE-internal” from “TEE-external” components within this layer. For many implementations, this layer may disappear (cease to run and be removed from memory) once the runtime has started.
  4. TEE execution layer – the layer responsible for actually executing the runtime above it, and communicating with the host. Like the TEE loading layer, this is likely to exist in two parts – one within the TEE instance, and one outside it (again, “TEE-internal” and “TEE-external”).

An example of relative lifecycles is shown here.

Component lifecycles
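To make the layering concrete, here is a minimal sketch in Rust; the names and structure are ours for illustration and are not taken from Enarx or any other implementation:

```rust
/// Which side of the TEE boundary a component runs on.
#[derive(Debug, Clone, Copy)]
enum Location {
    TeeInternal,
    TeeExternal,
}

/// One of the four layers described above, with the parts it may have.
struct LayerSpec {
    name: &'static str,
    parts: &'static [Location],
}

fn main() {
    // The application and runtime layers live entirely inside the TEE instance;
    // the loading and execution layers are typically split across the boundary.
    let layers = [
        LayerSpec { name: "application", parts: &[Location::TeeInternal] },
        LayerSpec { name: "runtime", parts: &[Location::TeeInternal] },
        LayerSpec { name: "TEE loading", parts: &[Location::TeeInternal, Location::TeeExternal] },
        LayerSpec { name: "TEE execution", parts: &[Location::TeeInternal, Location::TeeExternal] },
    ];
    for layer in &layers {
        println!("{}: {:?}", layer.name, layer.parts);
    }
}
```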

Now we consider logging for each of these.

Application layer

The application layer generally communicates via a data plane to other application components external to the TEE, including those under the control of the tenant, some of which may sit on the client machine.  Some of these will be considered trusted from the point of view of the application, and these at least will typically require an encrypted communication channel so that the host is unable to snoop on the data (others may also require encryption).  Exactly how these channels are set up will vary between implementations, but application-level errors and logging should be expected to use these communication channels, as they are relevant to the application’s operation. This is the simplest case, as long as channels to external components are available. Where they cease to be available, for whatever reason, the application may choose to store logging information for later transfer (if possible) or communicate a possible error state to the runtime layer.

The application may also choose to communicate other runtime errors, or application errors that it considers relevant or possibly relevant to runtime, to the runtime layer.
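As a sketch of the pattern just described, the following illustrative Rust code shows an application-layer logger that only ever writes to a trusted channel and buffers entries while that channel is unavailable. The TrustedLogChannel trait and ToyChannel type are hypothetical stand-ins for whatever TLS or attested session the framework actually provides:

```rust
use std::collections::VecDeque;

/// A channel to a trusted, external log collector (e.g. on the tenant's client
/// machine). In a real deployment this would wrap a TLS or attested session;
/// here it is just an interface so that the sketch stays self-contained.
trait TrustedLogChannel {
    fn send(&mut self, entry: &str) -> Result<(), String>;
}

/// Buffers entries while the trusted channel is unavailable and flushes the
/// backlog as soon as sending succeeds again. Nothing is ever written to a
/// host-visible sink.
struct AppLogger<C: TrustedLogChannel> {
    channel: C,
    backlog: VecDeque<String>,
}

impl<C: TrustedLogChannel> AppLogger<C> {
    fn new(channel: C) -> Self {
        Self { channel, backlog: VecDeque::new() }
    }

    fn log(&mut self, entry: &str) {
        self.backlog.push_back(entry.to_string());
        // Try to flush everything we have; stop at the first failure and keep
        // the rest for a later attempt.
        while let Some(front) = self.backlog.pop_front() {
            if self.channel.send(&front).is_err() {
                self.backlog.push_front(front);
                break;
            }
        }
    }
}

/// A toy channel that fails until `up` is set, standing in for a real
/// encrypted session to the tenant.
struct ToyChannel { up: bool }

impl TrustedLogChannel for ToyChannel {
    fn send(&mut self, entry: &str) -> Result<(), String> {
        if self.up {
            println!("sent: {entry}");
            Ok(())
        } else {
            Err("channel down".into())
        }
    }
}

fn main() {
    let mut logger = AppLogger::new(ToyChannel { up: false });
    logger.log("starting up");          // buffered: channel not yet available
    logger.channel.up = true;
    logger.log("connected to backend"); // flushes the backlog, then this entry
}
```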

Runtime layer

It is possible that the runtime layer may have access to communication channels to external parties that the application layer does not – indeed, where such a channel is used to manage the loading and execution of the workload, it can be considered a control plane. As the runtime layer is responsible for the execution of the application, it needs to be protected from the host, and it resides entirely within the TEE instance. It also has access to information associated with the application layer (which may include logging and error information passed directly to it by the application), which should also be protected from the host (in terms of both confidentiality and integrity), and so any communications it has with external parties must be encrypted.

There may be a temptation to consider that the runtime layer should be reporting errors to the host, but this is dangerous. It is very difficult to control what information will be passed: not only primary information, but also inferred information. There does, of course, need to be communication between the runtime layer and the host in order to allow execution – whether this is system calls or another mechanism – but in the model described here, that is handled by the TEE execution layer.

TEE loading layer

This layer is one where we start having to make some interesting decisions.  There are, as we noted, two different components which may make up this layer: TEE-internal and TEE-external.

TEE loading – TEE-internal

The TEE-internal component may generate logging information associated with either successful or unsuccessful loading of a workload.  Some errors encountered may be recoverable, while others are unrecoverable.  In most cases, a successful loading event can be considered non-sensitive and exposed to the TEE-external component, as the host will generally be able to infer successful loading anyway: execution will continue to the next phase (even when the TEE loading layer and TEE execution layer do not have explicitly separate external components).  The TEE-internal component still needs to be careful about the amount of information exposed to the host, as even information around workload size or naming may provide a malicious entity with useful information.  In such cases, integrity protection of messages may be sufficient: failure to provide integrity protection could lead the host to misreport successful loading to a remote tenant, for example – not necessarily a major issue, but a possible attack vector nevertheless.

Error events associated with failure to load the workload (or parts of it) are yet more tricky.  Opportunities may exist for the host to tamper with the loading process with the intention of triggering errors from which information may be gleaned – for instance, pausing execution at particular points and seeing what error messages are generated.  The more data exported by the TEE loading internal component, the more data the external component may be able to make available to malicious parties.  One of the interesting questions to consider is what to do with error messages generated before a communications channel (the control plane) back to the provisioning entity has been established.  Once this has been established (and is considered “secure” to the appropriate level required), then transferring error messages via it is a pretty straightforward proposition, though this channel may still be subject to traffic analysis and resource starvation (meaning that any error states associated with timing need to be carefully examined).  Before this communication channel has been established, the internal component has three viable options (which are not mutually exclusive):

  1. Pass them to the external component, for out-of-band transmission to the tenant by that external component.
  2. Pass them to the external component for storage, to be returned later to the internal component and transmitted over the control plane if and when the control plane can be established.
  3. Consign them to internal storage, assuming RAM (or equivalent) has been assigned for this purpose.

In terms of attacks, options 1 and 2 are broadly similar for as long as the control plane does not exist.  Additionally, in case 1, the external component can choose not to transmit all (or any) of the data to the tenant, and in case 2, it may withhold data from the internal component when requested.

If we take the view (as proposed above) that at least the integrity, and possibly the confidentiality, of error messages is of concern, then option 1 would only be viable if a shared secret had already been established between the TEE loading internal component and the tenant, or the identity of the TEE loading internal component had already been established with the tenant – which is impossible unless the control plane has already been created.  For option 2, the internal component can generate a key which it uses to encrypt the data sent to the external component, and store this key for decryption when (and if) the external component returns the data.
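A minimal sketch of option 2 follows, assuming the aes-gcm crate (any authenticated encryption scheme would do): the TEE-internal component generates an ephemeral key, seals each early error message before handing it to the untrusted external component for storage, and keeps the key so that only it can recover (and tamper-check) the messages if and when they are returned.

```rust
// Sketch of option 2: seal early error messages with an ephemeral key before
// handing them to the untrusted TEE-external component for storage.
// Assumes the `aes-gcm` crate (0.10) in Cargo.toml; any AEAD would work.
use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm, Nonce,
};

/// What the TEE-internal component hands to the external component: opaque
/// bytes it cannot read or undetectably modify.
struct SealedLogEntry {
    nonce: Vec<u8>,
    ciphertext: Vec<u8>,
}

struct EarlyLogSealer {
    cipher: Aes256Gcm,
}

impl EarlyLogSealer {
    /// Generate an ephemeral key that never leaves the TEE instance.
    fn new() -> Self {
        let key = Aes256Gcm::generate_key(OsRng);
        Self { cipher: Aes256Gcm::new(&key) }
    }

    /// Seal an error message for storage by the untrusted external component.
    fn seal(&self, message: &str) -> SealedLogEntry {
        let nonce = Aes256Gcm::generate_nonce(&mut OsRng); // unique per message
        let ciphertext = self
            .cipher
            .encrypt(&nonce, message.as_bytes())
            .expect("AES-GCM encryption should not fail");
        SealedLogEntry { nonce: nonce.as_slice().to_vec(), ciphertext }
    }

    /// Recover a message returned later over the control plane; fails if the
    /// external component tampered with it.
    fn open(&self, entry: &SealedLogEntry) -> Option<String> {
        let nonce = Nonce::from_slice(&entry.nonce);
        let plaintext = self.cipher.decrypt(nonce, entry.ciphertext.as_ref()).ok()?;
        String::from_utf8(plaintext).ok()
    }
}

fn main() {
    let sealer = EarlyLogSealer::new();
    let sealed = sealer.seal("loader: measurement of runtime image failed");
    // `sealed` goes to the external component; only the internal component,
    // which holds the key, can read it back and detect tampering.
    assert_eq!(
        sealer.open(&sealed).as_deref(),
        Some("loader: measurement of runtime image failed")
    );
}
```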

TEE loading – TEE-external

Any information which is available to any TEE-external component must be assumed to be unprotected and untrusted.  The only exceptions are where data is signed (for integrity) or encrypted (for confidentiality – and integrity is typically also assured when an authenticated encryption scheme is used), as noted above.  The TEE-external component may choose to store or transmit error messages from the TEE-internal component, as noted above, but it may also generate log entries of its own.  There are five possible (legitimate) consumers of these entries:

  1. The host system – the host (general logging, operating system or other components) may consume information around successful loading to know when to start billing, for instance, or consume information around errors for its own purposes or to transmit to the tenant (where the TEE loading component is not in direct contact with the client, or other communication channels are preferred).
  2. The TEE loading internal component – there may be both success and failure events which are useful to communicate to the TEE loading internal component to allow it to make decisions.  Communications to this component assume, of course, that loading was sufficiently successful to allow the TEE loading internal component to start execution.
  3. The TEE runtime external component – if the lifecycle has proceeded to the stage where the TEE runtime component is executing, the TEE loading external component can communicate logging information to it, either directly (if they are executing concurrently) or via another entity such as storage.
  4. The TEE runtime internal component – similarly to case #3 above, the TEE loading external component may be able to communicate to the TEE runtime internal component, either directly or indirectly.
  5. The client – as noted in #1 above, the host may communicate logging information to the client.  An alternative, if an appropriate communications channel exists, is for the TEE loading external component to communicate directly with it.  The client should always treat all communications with this component as untrusted (unless they are being transmitted for the internal component, and are appropriately integrity/confidentiality protected).

The TEE runtime layer

TEE runtime – TEE-internal

While the situation for this component is similar to that for the TEE loading internal component, it is somewhat simpler because the fact that this stage of the lifecycle has been reached means that the application has, by definition, been loaded and is running.  This means that there are a number of different channels for communication of error messages: the application data plane, the runtime control plane and the TEE runtime external component.  Most logging information will generally be directed either to the application (for decision making or transmission over its data plane at the application’s discretion) or to the client via the control plane. Standard practice can generally be applied as to which of these is most appropriate for which use cases.

Transmission of data to the TEE runtime external component needs to be carefully controlled, as the runtime component (unless it is closely coupled with the application) is unlikely to be in a good position to judge what information might be considered sensitive if available to components or entities external to the TEE.  For this reason, either error communication to the TEE runtime external component should be completely avoided, or standardised (and carefully designed) error messages should be employed – which makes standard debugging techniques extremely difficult.

Debugging

Any form of debugging for TEE instances is extremely difficult, and there are two fairly stark choices:

  1. Have a strong security profile and restrict debugging to almost nothing.
  2. Have a weaker security profile and acknowledge that it is almost impossible to ensure the protection of the confidentiality and integrity of the workload (the application and its data).

There are times, particularly during the development and testing of a new application, when the latter is the only feasible approach.  In this case, we can recommend two principles:

  1. Create a well-defined set of error states which can be communicated via untrusted channels (that is, which are generally unprotected from confidentiality and integrity attacks), and which do not allow for “free form” error messages (which are more likely to leak information to a host).
  2. Ensure that any deployment with a weaker profile is closely controlled (and never into production).

These two principles can be combined, and a deployment lifecycle might allow for different profiles: e.g. a testing profile on local hardware allowing free form error messages and a staging profile on external hardware which only allows for “static” error messages.
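As an illustration of the first principle, the set of error states that may cross an untrusted channel can be captured as a closed enumeration, with free-form detail permitted only under an explicitly weaker profile. The names and codes below are ours, not from any particular project:

```rust
/// The only error states that may ever be reported over an untrusted channel.
/// No free-form text: each variant maps to a fixed, pre-agreed code.
#[derive(Debug, Clone, Copy)]
enum UntrustedError {
    LoadFailed,
    AttestationFailed,
    ResourceExhausted,
    InternalError,
}

impl UntrustedError {
    fn code(self) -> u16 {
        match self {
            UntrustedError::LoadFailed => 1,
            UntrustedError::AttestationFailed => 2,
            UntrustedError::ResourceExhausted => 3,
            UntrustedError::InternalError => 4,
        }
    }
}

/// Deployment profiles with different security postures.
enum Profile {
    /// Local hardware only: free-form messages allowed for debugging.
    Testing,
    /// External hardware: only static codes may leave the TEE.
    Staging,
    /// Production: as Staging, and deployment is process-controlled.
    Production,
}

/// Decide what, if anything, is exposed to the host for a given error.
fn report(profile: &Profile, error: UntrustedError, detail: &str) -> String {
    match profile {
        Profile::Testing => format!("error {}: {}", error.code(), detail),
        _ => format!("error {}", error.code()),
    }
}

fn main() {
    for profile in [Profile::Testing, Profile::Staging, Profile::Production] {
        // On Staging and Production only the static code is emitted; the
        // free-form detail string never leaves the TEE.
        println!("{}", report(&profile, UntrustedError::LoadFailed, "ELF header mismatch at 0x40"));
    }
}
```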

Standard operation

Standard operation must assume the worst case scenario, which is that the host may block, change and interfere with all logging and error messages to which it has access, and may use them to infer information about the workload (application and associated data), affecting its confidentiality, integrity and normal execution.  Given this, the default must be that all TEE-internal components should minimise all communications to which the host may have access.

Application

Restricting application data plane communication is clearly infeasible in most cases, though all communications should generally be encrypted for confidentiality and integrity protection, and designers and architects with particularly strong security policies may wish to consider how to restrict data plane communications further.

Runtime component

Data plane communications from the runtime component are likely to be fewer than application data plane communications in most cases, and there may also be some opportunities to design these with security in mind.

TEE loading and TEE runtime components

These are the components where the most care must be taken, as we have noted above, but also where there may be the most temptation to lower security levels, if only to allow for easier debugging and error management.

Summary

In a standard cloud deployment, there is little incentive to consider strong security controls around logging and debugging, simply because the host has access not only to all communications to and from a hosted workload, but also to all the code and data associated with the workload at runtime.  For Confidential Computing workloads, the situation is very different, and designers and architects of the TEE infrastructure and even, to a lesser extent, of potential workloads themselves, need to consider very carefully the impact of the host gaining access to messages associated with the workload and the infrastructure components.  It is, realistically, infeasible to restrict all communication to levels appropriate for deployment, so it is recommended that various profiles are created which can be applied to different stages of a deployment, and whose use is carefully monitored, logged (!) and controlled by process.

Why is Attestation Required for Confidential Computing?


Alec Fernandez (alfernandez@microsoft.com)

At the end of 2022, the Confidential Computing Consortium amended the definition of Confidential Computing. We added attestation as an explicit part of the definition, but beyond updating our whitepaper we did not explain to the community why we made this change.

First off, an attestation is the evidence that you use to evaluate whether or not to trust a Confidential Computing program or environment. It is sometimes built into a common protocol, as in RA-TLS / Attested TLS. In other cases it might be built into the boot flow of a Confidential VM, or used asynchronously, for example by attaching it to the result of a Confidential Process.

To many of us, attestation was an implicit part of Confidential Computing architecture. However, it is so central to the idea of Confidential Computing that it really needed to be part of the formal definition.

Hardware and software providers have long offered assurances of security and these assurances have oftentimes fallen short of expectations. A historical analysis of the track record for placing trust in individual organizations to protect data raises important questions for security professionals. The recurrence of data breaches has led to understandably deep skepticism of technologies that purport to provide new security protections.

Users desire to see for themselves the evidence that new technologies are actually safeguarding their data.

Attestation is the process by which customers can alleviate their skepticism by getting answers to these questions:  

  • Can the TEE provide evidence showing that its security assurances are in effect?
  • Who is providing this evidence?  
  • How is the evidence obtained?
  • Is the evidence valid, authentic, and delivered through a secure chain of custody?
  • Who judges the evidence? 
  • Is the judge separate from the evidence provider?
  • Who provides the standards against which the evidence is judged?
  • Can evidence assure that the code and data protection claims are in effect? 

Hardware-based attestation evidence is produced by a trusted hardware root-of-trust component of the computing environment. The hardware root-of-trust is a silicon chip, or a set of chips, that has been specifically designed to be highly tamper resistant. Some have been reviewed by researchers at organizations such as NIST, NSA, ICO and ENISA, by academic institutions around the world, and by the technical community at large. While a critique of the analyses behind hardware roots of trust is beyond the scope of this post, we take them to represent the current state of the art in computer security. They represent a significant improvement over available alternatives. See the reference material at the end of this blog for more information.

Providing Attestation Evidence

Attestation evidence is delivered in a message containing authentic, accurate and timely measurements of system components such as the hardware, firmware, BIOS, and the software and data state of the computer being evaluated. Importantly, this attestation evidence is digitally signed by a key that is known only to the hardware root-of-trust (often the physical CPU) and is not extractable. This means that the attestation evidence is secured: it cannot be altered once it leaves the hardware without the alteration being detected. It is impervious to attacks by the host operating system, the kernel, or the cloud platform provider. This eliminates chain-of-custody concerns as the evidence flows from the producer to the consumer.

Validating the Authenticity of Attestation Evidence

Before examining the attestation evidence, the source of the evidence must be established. This is done by matching the digital signature in the attestation evidence with a certificate issued by the manufacturer of the hardware root of trust, for example the manufacturer of the physical CPU in the computer. If the signature on the attestation evidence matches the manufacturer’s certificate, then this proves that the attestation report was produced by the CPU hardware. This means that if you trust the company that manufactured the hardware, then you can trust the attestation report.
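The checks described above can be sketched as follows. The types and the two verification helpers are placeholders (a real verifier would use the manufacturer's published certificate chain and an X.509/signature library); the point here is the order of the checks:

```rust
/// Attestation evidence as delivered: signed claims plus the certificate that
/// identifies the hardware root of trust which produced it.
struct Evidence {
    claims: Vec<u8>,        // measurements of firmware, configuration, etc.
    signature: Vec<u8>,     // produced inside the hardware root of trust
    signing_cert: Vec<u8>,  // e.g. a per-CPU certificate
}

/// Placeholder: does `cert` chain up to the hardware manufacturer's root CA?
/// A real implementation would walk an X.509 chain to the vendor's published root.
fn chains_to_manufacturer(cert: &[u8], manufacturer_root: &[u8]) -> bool {
    !cert.is_empty() && !manufacturer_root.is_empty() // stand-in only
}

/// Placeholder: does `signature` verify over `claims` under the key in `cert`?
fn signature_is_valid(claims: &[u8], signature: &[u8], cert: &[u8]) -> bool {
    !claims.is_empty() && !signature.is_empty() && !cert.is_empty() // stand-in only
}

/// Only if both checks pass do we go on to judge the claims themselves.
fn evidence_is_authentic(ev: &Evidence, manufacturer_root: &[u8]) -> bool {
    chains_to_manufacturer(&ev.signing_cert, manufacturer_root)
        && signature_is_valid(&ev.claims, &ev.signature, &ev.signing_cert)
}

fn main() {
    let ev = Evidence {
        claims: b"firmware=1.2.3;memory_encryption=on".to_vec(),
        signature: vec![0xAA; 64],
        signing_cert: vec![0xBB; 128],
    };
    println!("authentic: {}", evidence_is_authentic(&ev, b"vendor root CA"));
}
```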

Who Judges the Attestation Evidence? Are they Separate from the Evidence Provider?

Having the attestation evidence delivered in a message that is digitally signed by hardware allows for TEE users to establish for themselves that the security assurances provided by the TEE are in place. This can be done without the provider of the computing infrastructure or intervening parties being able to alter the evidence during delivery.

Attestation evidence is highly technical and oftentimes it is not feasible for an organization to judge the evidence themselves. This is especially true when the organization is not specialized in computing infrastructure security. In cases such as these, having a different entity, a third party with security expertise, evaluate the evidence offers a good balance between security and complexity. In this scenario, the computing infrastructure or device user is implicitly trusting the entity that verifies the attestation evidence (the verifier). In such scenarios, it is imperative for the device user to have access to effective mechanisms to verify the authenticity and reliability of the verifier to ensure that the attestation results produced by the verifier are legitimate and trustworthy.

Who provides the standards against which the evidence is judged?

The attestation evidence contains claims about the physical characteristics and the configuration settings of the execution environment. Examples include:

  • CPU manufacturer, model, version and identifier.
  • Microcode and firmware versions.
  • Configuration settings, e.g., whether memory encryption is enabled.
  • Encryption configuration, e.g., whether a different key is used to protect each individual VM.

The values supplied in the attestation evidence are compared against reference values. For example, the firmware supplier might recommend that it be patched to a specific version due to the discovery of a security vulnerability. The attestation evidence will accurately reflect the current firmware version. But who decides which are acceptable firmware versions?

  • Since the firmware is typically the responsibility of the hardware manufacturer and they have intimate knowledge of the details behind its security baseline, they should certainly be consulted.
  • The owner of the device or computing infrastructure should also be consulted since they could be responsible for any risks of data exfiltration.
  • In a public cloud environment, the computing infrastructure provider controls patching the firmware to the hardware manufacturer’s recommended version, but it does not make use of the resulting environment. The user of the TEE is responsible for data placed in the environment and must ensure that the firmware complies with their security policy.

Remote attestation provides a way to evaluate evidence that shows the actual firmware version provided by the TEE. This evidence is provided directly by the hardware on which the TEE is executing and allows the attestation verifier to independently verify when the patching was completed.

More generally, attestation can be used to check whether all applicable security standards and policies have been met. This practically eliminates the possibility that a configuration error on the part of the computer or device owner will result in a security guarantee being falsely reported. The computer or device might be misconfigured in a way that goes undetected by its owner, but the attestation evidence comes directly from the hardware component that is executing the TEE and so remains accurate.

Relying on Attestation Evidence to Secure a TEE

An example of using attestation to provide data security is secure key release (SKR). One excellent use case for SKR is configuring your key management infrastructure (KMI) to evaluate the attestation evidence against a policy controlled by a verifier that the owner of the TEE deems trustworthy, and to refuse to supply the key needed to decrypt the computer’s OS disk unless the attestation evidence shows the computer to be in compliance. In this example, the attestation evidence is generated when the computer is powered on and sent to the KMI. If the attestation evidence indicates that the TEE is not in compliance with the policy (perhaps because the CPU firmware was not an acceptable version), then the KMI does not release the decryption key to the compute infrastructure, preventing the data from being decrypted and so reducing the risk of data exfiltration.
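Here is a minimal sketch of the SKR decision described above. The claim and policy fields are illustrative; a real KMI would act on verified attestation claims rather than a hand-built struct:

```rust
/// Claims extracted from (already authenticated) attestation evidence.
struct AttestationClaims {
    firmware_version: (u32, u32, u32),
    memory_encryption_enabled: bool,
    per_vm_keys: bool,
}

/// The policy the key management infrastructure (KMI) enforces before
/// releasing the OS-disk decryption key.
struct ReleasePolicy {
    minimum_firmware: (u32, u32, u32),
    require_memory_encryption: bool,
    require_per_vm_keys: bool,
}

/// Secure key release: hand out the key only if the evidence shows the TEE
/// to be in compliance with the policy.
fn release_key(claims: &AttestationClaims, policy: &ReleasePolicy, key: &[u8]) -> Option<Vec<u8>> {
    let firmware_ok = claims.firmware_version >= policy.minimum_firmware;
    let encryption_ok = claims.memory_encryption_enabled || !policy.require_memory_encryption;
    let keying_ok = claims.per_vm_keys || !policy.require_per_vm_keys;

    if firmware_ok && encryption_ok && keying_ok {
        Some(key.to_vec())
    } else {
        None // non-compliant: the disk stays encrypted
    }
}

fn main() {
    let policy = ReleasePolicy {
        minimum_firmware: (1, 55, 0),
        require_memory_encryption: true,
        require_per_vm_keys: true,
    };
    let claims = AttestationClaims {
        firmware_version: (1, 54, 2), // older than the policy allows
        memory_encryption_enabled: true,
        per_vm_keys: true,
    };
    match release_key(&claims, &policy, b"disk-encryption-key") {
        Some(_) => println!("key released"),
        None => println!("key withheld: evidence does not satisfy policy"),
    }
}
```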

Conclusion

Confidential Computing, through the use of hardware-based, attested TEEs and remote attestation, protects sensitive data and code against an increasingly common class of threats that occur during processing, while data is in use. These were previously difficult, if not impossible, to mitigate. Additionally, Confidential Computing allows for protecting data against the owner of the system and against public cloud platforms, which traditionally had simply to be trusted not to use their elevated permissions to access the data.

 

References

https://nvlpubs.nist.gov/nistpubs/ir/2022/Nist.IR.8320.pdf

https://tools.ietf.org/html/draft-ietf-rats-architecture

CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_Updated_November_2022.pdf (confidentialcomputing.io)

Common-Terminology-for-Confidential-Computing.pdf (confidentialcomputing.io)

CCC_outreach_whitepaper_updated_November_2022.pdf (confidentialcomputing.io)

Unifying Remote Attestation Protocol Implementations


Shanwei Cen (@shnwc), Dan Middleton (@dcmiddle)

We’re excited to announce some recent attestation news. One of the hallmarks of confidential computing is the ability to build trusted communication with an application running in a hardware-based trusted execution environment. To make attestation easily accessible, it can be incorporated into common protocols, so that developers don’t need to figure out all the details of building a secure protocol themselves. One of these protocols is called Remote Attestation TLS (RA-TLS), which builds on the ubiquitous Transport Layer Security protocol underlying most secure internet communication. It turns out that several projects independently implemented RA-TLS with tiny but incompatible differences. In the CCC Attestation SIG, we’ve agreed on – and in some cases already implemented – changes to make them all interoperate.

The CCC Attestation SIG is chartered to develop attestation-related software aimed at improving interoperability, and to achieve harmonization and de-fragmentation between multiple projects. One approach is to identify and review projects in SIG meetings, propose improvements for interoperability and standardization, and work with these projects for implementation and tests. Interoperable RA-TLS is a great example showcasing how the SIG delivers on its charter.

RA-TLS (Remote Attestation TLS) architecture is defined in the white paper Integrating Remote Attestation with Transport Layer Security, to enable Intel® Software Guard Extensions (Intel® SGX) remote attestation during the establishment of a standard Transport Layer Security (TLS) connection. In a TLS server / client scenario, the TLS server runs inside an SGX enclave. It generates a public-private keypair, creates an SGX report with a hash of the public key in its user-data field, and gets an SGX quote for this report. It then creates an X.509 certificate with a custom extension containing this SGX quote. This customized certificate is sent to a TLS client in the TLS handshake protocol. The client gets the SGX quote from the certificate and performs remote attestation to verify that the connected server runs inside an authentic Intel® SGX enclave.
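The server-side flow can be sketched roughly as follows. Only the hashing is real (the sha2 crate); the key generation, quote and certificate steps are placeholders standing in for the SGX SDK/DCAP and X.509 libraries a real implementation would use, so treat this as the shape of the protocol rather than working attestation code:

```rust
// Rough shape of the server side of RA-TLS.
use sha2::{Digest, Sha256};

/// Placeholder: generate the TLS keypair inside the enclave.
fn generate_keypair() -> (Vec<u8>, Vec<u8>) {
    (vec![0x01; 32], vec![0x02; 65]) // (private key, public key) stand-ins
}

/// Placeholder: ask the SGX hardware for a quote over a report whose
/// user-data field carries `report_data` (here, the public-key hash).
fn get_sgx_quote(report_data: &[u8; 32]) -> Vec<u8> {
    report_data.to_vec() // stand-in for a real quote
}

/// Placeholder: build an X.509 certificate for `public_key` that carries the
/// quote in a custom extension (the OID being pinned down by the
/// interoperability work described below).
fn certificate_with_quote_extension(public_key: &[u8], quote: &[u8]) -> Vec<u8> {
    [public_key, quote].concat() // stand-in for real DER
}

fn main() {
    // 1. Generate the keypair inside the enclave.
    let (_private_key, public_key) = generate_keypair();

    // 2. Hash the public key into the report's user-data field.
    let mut report_data = [0u8; 32];
    report_data.copy_from_slice(&Sha256::digest(&public_key));

    // 3. Obtain an SGX quote over that report.
    let quote = get_sgx_quote(&report_data);

    // 4. Embed the quote in a custom X.509 extension of the TLS certificate.
    let cert = certificate_with_quote_extension(&public_key, &quote);

    // 5. The certificate is presented in the TLS handshake; the client
    //    extracts the quote, verifies it, and checks that the hash in its
    //    user-data matches the public key in the certificate.
    println!("certificate bytes: {}", cert.len());
}
```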

There are a few aspects of RA-TLS architecture that were not covered in this white paper. Some of the gaps include the specific X.509 extension OID value for the SGX quote, the supported types of SGX quote, and how the public key is hashed. Additionally, since the white paper was published, new TEEs like Intel® Trust Domain Extensions (Intel® TDX) and new quote formats have become available. The level of specificity in the RA-TLS paper left room for incompatibility between different implementations and prevented their interoperability.

RA-TLS has been supported in multiple open-source projects, including Gramine, RATS-TLS, Open Enclave Attested TLS, and SGX SDK Attested TLS. The CCC Attestation SIG invited these projects to its meetings for review, and recommended further investigation to look into harmonization between them for interoperability. Following up on this recommendation, we conducted an in-depth investigation and identified areas of incompatibility. We documented our findings, created a draft proposal for an interoperable RA-TLS architecture, and presented our work back to the SIG.

Based on the interoperable RA-TLS draft proposal, we refined the design and aligned it with the upcoming DICE Attestation Architecture v1.1 draft standard on the X.509 extension OID value and the evidence format definition (as a tagged CBOR byte string). We created a CCC Attestation SIG GitHub project, interoperable-ra-tls, to host the design documents and interoperability tests. This project also facilitates discussion among members of the RA-TLS projects and the CCC Attestation SIG community in general. In addition, we registered the needed CBOR tags with the IANA registration service. In the process, we provided feedback to the DICE Attestation Architecture workgroup for refinement of their draft standard specification.

Great progress has been made on implementing this proposed interoperable RA-TLS scheme in the RA-TLS projects. We’ve worked with all the projects to create issues and pull requests for their implementations. In particular, as discussed in some of the interoperable-ra-tls project issues, Gramine and RATS-TLS have completed their implementations and have been active in interoperability tests.

In summary, the interoperable RA-TLS work demonstrated the value of the CCC Attestation SIG in providing a constructive forum to collaborate on attestation technology. We invite you to try out the new unified implementations in Gramine and RATS-TLS. If you are interested in getting more involved, please join us at the CCC Attestation SIG or any other facet of our Confidential Computing Consortium open source community. All are welcome here.

CCC at Black Hat and DEF CON 2022


The Confidential Computing Consortium (CCC) was present at the 25th edition of Black Hat USA and the 30th edition of DEF CON.

At Intel’s booth at Black Hat, there was a big effort to raise awareness of Confidential Computing, including the distribution of outreach material from the Confidential Computing Consortium, as well as sessions from Anjuna (“Confidential Computing 101”) and Fortanix (“Confidential Computing AI & Intel SGX: accelerating the use of AI/ML”).

One of the highlights of Black Hat was the responsible disclosure of the ÆPIC Leak by researchers Pietro Borrello (Sapienza University of Rome) and Andreas Kogler (Graz University of Technology) and their collaboration with Intel to mitigate the vulnerability. After their session at Black Hat, the researchers and their colleagues met with Confidential Computing Consortium representatives and shared how they worked closely with Intel to follow responsible vulnerability disclosure practices. Intel has provided a microcode update for processors with Intel SGX that adds support for clearing buffers, mitigating potential exposure of sensitive stale data when exiting Intel SGX enclaves.

At DEF CON, the Confidential Computing Consortium was mostly present at the Crypto and Privacy Village, which provides a forum for the hacker community to share knowledge and discuss cryptography and privacy.

Community members of the Enarx project gave two talks at the Crypto and Privacy Village: “Owned or pwned? No peekin’ or tweakin’!” and “Cryptle: a secure multi-party Wordle clone with Enarx”. The talks were presented by Richard Zak, Tom Dohrman, and Nick Vidal, with assistance from Ben Fischer from Red Hat.

We would like to thank attendees and organizers of Black Hat, DEF CON, the Crypto and Privacy Village, as well as staff and members of the Confidential Computing Consortium, including representatives from Anjuna, Fortanix, Intel, Profian, and Red Hat/IBM.

Response by the CCC to the Office of Science and Technology Policy’s RFI on Advancing Privacy-Enhancing Technologies


July 7, 2022

To Whom It May Concern:

Please consider the following submission to the Request for Information on Advancing Privacy-Enhancing Technologies from the Confidential Computing Consortium. The Confidential Computing Consortium (https://confidentialcomputing.io) is a Linux Foundation project “to accelerate the adoption of Trusted Execution Environment (TEE) technologies and standards” and has a diverse membership of hardware and software vendors and cloud service providers (https://confidentialcomputing.io/members/). This response was prepared by the group’s Technical Advisory Council with participation from across the membership, and ratified by its Governing Board. The Linux Foundation is a non-profit organization registered in the United States as a 501(c)(6).

The Confidential Computing Consortium has a mandate to engage with governments, standards agencies and regulatory agencies to encourage adoption of Confidential Computing, as well as work with the larger ecosystem and engage with existing and potential end-users of the technologies. It also works with open source projects to further development of implementations. The Confidential Computing Consortium is committed to encouraging open source implementations of Confidential Computing technologies to ensure wide-spread adoption, scalable community involvement, transparency of process, increased security and ease of auditing by relevant interested parties and authorities.

The Confidential Computing Consortium welcomes collaboration with governmental and non-governmental organizations and has mechanisms in place to provide appropriate membership, as well as open technical participation without any membership requirement.

Sincerely,
Stephen R. Walli
Confidential Computing Consortium, Governing Board Chair

Read the response here.

Roadsec: LATAM’s largest hacker conference


The Confidential Computing Consortium (CCC) was one of the 10 communities selected to be part of Roadsec, LATAM’s largest hacker conference. Over 5000 participants were present at this in-person conference held in Sao Paulo.

Roadsec started as meetups about cyber-security that were organized across different cities (thus the name Roadsec, as speakers were always on the road). Every year the community gathers in Sao Paulo for the main conference.

Sao Paulo is considered an alpha global city and serves as Latin America’s financial and technological hub. Major banks and cloud service providers have their headquarters and data centers in this city.

Nick Vidal, CCC’s Outreach Committee Co-Chair, was at the conference promoting the CCC and also inviting participants to the Cryptle Hack Challenge, a secure multi-player Wordle clone that demonstrates how Confidential Computing works.

Roadsec organizers were kind enough to provide the CCC a booth to present this emerging technology called Confidential Computing, which protects data in use by performing computation in a hardware-based Trusted Execution Environment. These secure and isolated environments prevent unauthorized access or modification of applications and data while in use, thereby increasing the security assurances for organizations that manage sensitive and regulated data.

Recently, there have been many serious cyber attacks in Brazil, including the leakage of sensitive patient data from DATASUS and sensitive client data from Banco Pan. Confidential Computing could have helped prevent these data leakages.

CCC Project Updates


Check out what the CCC Projects have been up to!

Gramine

The Gramine project (formerly known as Graphene) will release a new stable version, v1.2, in the coming weeks.

Gramine is a library OS that enables protecting sensitive workloads with Intel® Software Guard Extensions (Intel® SGX). Gramine runs unmodified Linux applications on Intel® SGX out of the box and provides all functionality required for end-to-end protection of workloads: remote SGX attestation, transparent encryption of security-critical files, and secure multi-processing. Gramine follows a “lift-and-shift” paradigm for running unmodified applications: to “graminize” an application, it is enough to write a so-called *manifest* file that reflects the runtime configuration of the protected application. Gramine also supports Docker integration via a tool called Gramine Shielded Containers (GSC) and provides a growing set of curated applications, runtimes and frameworks.

In comparison to the previous release, Gramine v1.2 introduces a major overhaul of the FS subsystem. In particular, the Protected Files (PF) feature was significantly reworked. A new manifest syntax allows whole FS mounts to be marked for encryption. The PF feature is now available not only in the SGX mode of Gramine but also in the direct mode, for ease of debugging. We also added support for renaming PFs, memory-mapping them with read-write permissions, and encrypting them with different user-supplied encryption keys. As a side effect of this rework, multiple bugs in the FS and PF subsystems were fixed.

Additionally, Gramine v1.2 introduces the final, reworked CPU/NUMA topology feature (previously marked as experimental). CPU/NUMA topology is now securely forwarded inside a Gramine SGX enclave and enabled by default. Among other improvements in Gramine, we highlight better support for CentOS/Fedora/RHEL Linux distributions and the update of the EPID SGX attestation tools to use IAS API v4. We also added a Rust example (a simple web server that uses the hyper and tokio crates), as well as a new Python example for SGX quote retrieval.

Along with this technical work, Gramine was presented in different forums and featured in articles and blog posts:

– Gramine talk at the FOSDEM’22 conference: https://fosdem.org/2022/schedule/event/tee_gramine/

– Gramine talk at a Confidential Computing Consortium (CCC) webinar:  https://confidentialcomputing.io/webinar-gramine/

– Highlighted in several use cases and projects at the Open Confidential Computing Conference (OC3 2022) conference: https://www.oc3.dev/program

– Integration with Open Federated Learning (OpenFL) framework: https://medium.com/openfl/a-path-towards-secure-federated-learning-c2fb16d5e66e

– Integration with IBM/Gematik e-Prescription solution: https://github.com/eRP-FD/vau-base-image

– Reference solutions with Gramine as part of the Confidential Computing Zoo (CCZoo): https://github.com/intel/confidential-computing-zoo

– Whitepaper “Computation offloading to hardware accelerators in Intel SGX and Gramine Library OS”: https://arxiv.org/abs/2203.01813

– Blog post “How Open Source Gramine Accelerates Expanding Confidential Computing Market”: https://www.linkedin.com/pulse/how-open-source-gramine-accelerates-expanding-confidential-mona-vij/?trk=articles_directory

– A series of technical blog posts: https://gramineproject.io/blog/

For more information on the release please check out: https://github.com/gramineproject/gramine/releases/tag/v1.2

We invite you to join the Gramine community and contribute to adoption of confidential computing through open source collaboration. We also look forward to your feedback as you deploy this latest release of Gramine for your solutions.

Enarx

The Enarx project had three releases this quarter:

– Enarx 0.3.0 (Chittorgarh Fort) released in March with TLS support, attestation & validation support (https://blog.enarx.dev/chittorgarh-fort/).

– Enarx 0.4.0 (Fort of Dhat al-Hajj) released in April with SGX2 support, improved TLS support, and much more (https://blog.enarx.dev/enarx-0-4-0-fort-dhat-al-hajj/).

– Enarx 0.5.0 (Elmina Castle) released in May with many new/improved features: New enarx deploy subcommand. SGX with EDMM / SGX2 support (https://blog.enarx.dev/elmina-castle/).

In addition to Linux, Enarx is now available on macOS, Windows, and Raspberry Pi:

– Enarx can now be compiled on additional platforms in a light development version. From MacOS to Raspberry Pi — Extending the Enarx Development Platforms.  (https://blog.enarx.dev/backend-nil/)

The Enarx project announced the Cryptle Hack Challenge:

– Cryptle is a secure multi-player clone of Wordle. The goal of the Cryptle Hack Challenge is to uncover vulnerabilities in the Enarx project. (https://blog.enarx.dev/cryptle-hack-challenge/).

The Enarx community has achieved a huge milestone: we have collectively published 100 tutorials and articles over at Wasm Builders!

– As part of the Confidential Computing Fellowship program, the Enarx project has received several mentees from Outreachy and LFX Mentorship. Wasm Builders has served as a welcoming environment where Enarx community members can share their learning experiences with others (https://blog.enarx.dev/enarx-community-reachs-100-tutorials/).

The Enarx project has participated in the following events:

– Nathaniel McCallum presented “WASI Networking” at Wasm Day at KubeCon + CloudNativeCon Europe 2022 (https://blog.profian.com/wasm-day-at-kubecon-cloudnativecon-europe-2022/).

– Outreachy intern Shraddha Inamdar presented “Enarx: The Platform Abstraction for Trusted Execution Environments” at FOSSASIA (https://enarx.dev/resources/2022-04-09-fossasia).

– CCC Fireside Chat: Stephen Walli hosted Mike Bursell to discuss his book “Trust in Computer Systems and the Cloud,” with a particular focus on the impact of Confidential Computing on security, trust and risk (https://blog.profian.com/trust-in-computer-systems-and-the-cloud/).

Veracruz

  • We recently announced our 22.05 release, which included first-time contributions from several people including Aryan Godara, Mohamed Abdelfatah, and Sagar Arya.  Many of these contributions focused on adding new examples to the Veracruz repository.  Mohamed will be joining us shortly as our Outreachy-sponsored intern, working on providing better documentation of the expected behavior of WASI system calls (https://github.com/veracruz-project/veracruz/releases/tag/veracruz-2205).
  • We’ve worked to simplify Veracruz attestation further, across all of our supported platforms, making the process more uniform and removing platform-specific quirks.
  • We’ve started work, and are progressing quickly, on supporting seL4 as an in-enclave operating system for ultra-low TCB enclaves.
  • We’ve worked to improve Veracruz documentation.
  • Many other smaller bug fixes, performance improvements, and upgrades of dependencies to fix security concerns.

CCC Project Updates


Check out what the CCC Projects have been up to!

Gramine

Following the first production-ready release, v1.0, the Gramine Project is releasing v1.1 in the coming weeks. One highlight of this release is stability improvements for Golang and Rust workloads. Another prominent feature of the release is support for the musl C standard library – Gramine now allows users to choose between glibc and musl, depending on their requirements on binary size (TCB), as musl is more lightweight than glibc. Also, AddressSanitizer was integrated into Gramine and runs in CI on each change, to detect security issues ahead of code merge. This version adds several other features as well as multiple bug fixes (thanks to our ever-increasing user base for reporting issues!).

While there are several use cases under development, we would like to highlight the production release of the OpenVINO Security Add-on (OVSA) for model IP protection (consider using it for your protected ML workloads). Please reach out to the Gramine team if you are experimenting with Gramine and would like to be added to the list of “Users of Gramine”.

Enarx

In Enarx’s first release, version 0.1.0 (codenamed Alamo), we provided WebAssembly as a runtime. For our upcoming release, version 0.2.0, this coming quarter we are looking forward to providing support for attestation, including Intel’s SGX and AMD’s SEV.

Other areas we are working on are filesystem and networking support, which depend on upstream collaboration with the WebAssembly community.

Enarx is under heavy development and is not yet production ready, but our hope is that these initial releases will allow developers to experiment with Enarx and follow its progress.

If you are interested in learning more about the Enarx project, please access our website, star us on GitHub, and join our chat.

Gramine 1.0 release


Announcing Gramine production ready release!

Having recently joined the Confidential Computing Consortium in the Linux Foundation, The Gramine Project (formerly known as Graphene) is proud to announce the first production-ready version to enable protecting sensitive workloads with Intel® Software Guard Extensions (Intel® SGX).

The project started as a research prototype at Stony Brook University in 2011, and the first open-source version was published in 2014, followed by the Intel® SGX port in 2017 in collaboration with Intel Labs. In December 2018, Golem and ITL joined the project, forming the core of the open source community around the project, including a first release.  The Gramine community has subsequently grown into a diverse group of contributors, from universities, small and large companies, as well as individuals.

Gramine not only runs Linux applications on Intel® SGX out of the box, but also provides several tools and infrastructure components for a push-button, lift-and-shift paradigm for running unmodified applications on confidential computing platforms based on Intel® SGX. Gramine supports both local and remote Intel® SGX attestation, with both EPID and DCAP schemes. With the protected files feature, security-critical files are automatically encrypted and decrypted inside the enclave. Gramine supports several performance optimizations for Intel® SGX applications, including asynchronous system calls. Gramine is one of the few frameworks that supports multi-process applications, by providing a complete and secure fork implementation. Gramine supports Docker integration via a tool called Gramine Shielded Containers (GSC) that automatically converts Docker images to Gramine images.  Containers built with GSC can be deployed via Kubernetes for confidential containers and microservices.  Gramine also supports cloud deployment with Azure Confidential VMs and integrates with Azure Kubernetes Service in the Azure cloud.

Since our last release, there have been major changes in the code: 1272 files changed, 100637 insertions, 112144 deletions, and 1648 commits from 49 authors. This includes a major rewrite of the code that handles memory management, thread handling, process handling, the filesystem and signal handling. You can find the detailed changelog on our GitHub.  In the future, we plan to continue Gramine development with additional features, code cleanup, tooling, and documentation. We also plan to add generic support for I/O device communication as well as additional Platform Adaptation Layers (PALs) for other TEEs such as Intel® TDX.

Gramine has a growing set of well-tested applications, including machine learning frameworks, databases, web servers, and programming language runtimes, and there are several projects already experimenting with Gramine for developing solutions to protect data in use. We expect that Gramine 1.0 will bring many of those solutions to production. We look forward to your feedback as you deploy this latest version of Gramine for your confidential computing solutions with lift-and-shift capability.

For more information on the release please check out: https://github.com/gramineproject/gramine/releases/tag/v1.0

We invite you to join the Gramine community and contribute to adoption of  confidential computing through open source collaboration.