
The Future of Attestation in a Confidential World

Abstract

Attestation is a key enabling technology to unlock confidential computing that you can trust. Hear a panel of attestation experts discuss where attestation technology is headed, where the technology is today, and what improvements are still needed. Bring your questions to discuss with the experts.

Speakers

Dave Thaler

Technical Advisory Council Chair

Dave Thaler is a Software Architect at Microsoft, where he works on IoT security.  Dave has over 25 years of standards body experience and currently chairs the IETF group on Software Update for IoT, as well as the Confidential Computing Consortium's Technical Advisory Council. He previously served as a member of the Internet Architecture Board (IAB) for 11 years.


Lily Sturmann

Senior Software Engineer

Lily Sturmann is a senior software engineer at Red Hat in the Office of the CTO’s Emerging Technologies Security team.


Mark F. Novak

Director, Applied Research Emerging Technologies

Director, Applied Research Emerging Technologies at JP Morgan Chase.


Nathaniel McCallum

CTO

Profian CTO and Enarx co-founder, Nathaniel has been engineering systems at scale for more than 15 years, with an emphasis on cryptography and security. Previously the Virtualization Security Architect for Red Hat, he lives near Raleigh, NC.


Simon Johnson

Senior Principal Engineer

Simon Johnson is a Senior Principal Engineer in Intel’s Office of the Chief Technology Officer where he is a Security Architect.


Transcript

1
00:00:18.240 –> 00:00:26.280
Lily Sturmann: Hello everyone, welcome to this Confidential Computing Consortium webinar on the future of attestation in a confidential world.

2
00:00:27.900 –> 00:00:37.860
Lily Sturmann: My name is Lily Sturmann. I'm a senior software engineer at Red Hat, and I am the CCC Technical Advisory Council Red Hat representative. I will be your moderator today.

3
00:00:39.150 –> 00:00:47.490
Lily Sturmann: A quick introduction to the Confidential Computing Consortium, or CCC: the CCC is a project community at the Linux Foundation

4
00:00:48.000 –> 00:01:02.400
Lily Sturmann: dedicated to defining and accelerating the adoption of confidential computing. It is a community focused on open source licensed projects securing data in use, and on accelerating the adoption of confidential computing through open collaboration.

5
00:01:03.780 –> 00:01:17.910
Lily Sturmann: Every member is welcome, and every project meeting our criteria is welcome. We are a transparent, collaborative community, and we have members, contributors, and leaders pledged to make participation in our community a harassment-free experience for everyone.

6
00:01:19.770 –> 00:01:26.310
Lily Sturmann: We at the CCC also have a whole series of webinars, which you can check out at confidentialcomputing.io/webinars.

7
00:01:27.330 –> 00:01:33.390
Lily Sturmann: We have mentioned this topic of attestation in a few previous webinars, so if you're interested, you should definitely check those out.

8
00:01:33.900 –> 00:01:39.390
Lily Sturmann: I know there was a talk about the hard facts about confidential computing that mentioned attestation.

9
00:01:39.870 –> 00:01:52.050
Lily Sturmann: And we also had an entire webinar on the Remote Attestation Procedures architecture, or RATS, presented by our very own Dave Thaler, who is also on this panel. So if you're interested, please go and check these out.

10
00:01:53.070 –> 00:02:01.890
Lily Sturmann: I want to also mention that the CCC has an attestation special interest group, which has been meeting regularly since around March of 2021.

11
00:02:02.880 –> 00:02:08.220
Lily Sturmann: These meetings are every other Tuesday at 9am Pacific or 4pm GMT.

12
00:02:09.120 –> 00:02:22.080
Lily Sturmann: And there are opportunities there to do prototyping, integration with other CCC projects, standardization, technology dissemination, and also to fund some in-scope activities. So if you're interested in learning more about that,

13
00:02:23.460 –> 00:02:33.270
Lily Sturmann: please check out the webinar description for some links, because there is a special mailing list for the attestation SIG, there's also a Slack channel, and there are these regular meetings.

14
00:02:34.620 –> 00:02:46.230
Lily Sturmann: So for this webinar we do encourage audience questions, please put them in the chat as they come to you and we will probably try to round them up at the end in the last 10 or 15 minutes.

15
00:02:48.210 –> 00:02:54.000
Lily Sturmann: And now I want to introduce our excellent panel of four experts.

16
00:02:55.050 –> 00:03:12.870
Lily Sturmann: So in alphabetical order, we have Dave Thaler of Microsoft, Mark Novak of JPMorgan Chase, Nathaniel McCallum of Profian, and Simon Johnson of Intel. If you all want to just say a brief few words about your backgrounds and introduce yourselves,

17
00:03:14.100 –> 00:03:16.050
Lily Sturmann: I will call on Dave first.

18
00:03:17.910 –> 00:03:24.420
Dave Thaler: All right, I'm Dave Thaler. I'm a software architect at Microsoft in a group where we do both Windows and Linux things

19
00:03:25.440 –> 00:03:31.320
Dave Thaler: within the company's edge computing area. I've been working in this area for, I don't know, four or five years, maybe.

20
00:03:32.400 –> 00:03:38.580
Dave Thaler: I chair some working groups in the IETF, so my role on this panel is basically from a standards perspective.

21
00:03:40.080 –> 00:03:47.340
Dave Thaler: The IETF RATS architecture that we had a webinar on before: I'm an editor of that document in that working group.

22
00:03:47.970 –> 00:03:56.730
Dave Thaler: And I co-chair two other working groups within the IETF in the security area. Within the Confidential Computing Consortium, I chair the Technical Advisory Council.

23
00:03:57.270 –> 00:04:02.910
Dave Thaler: And so I do other things too, but those are the ones that are most relevant here. So, happy to be here, and I'll pass on to the next person.

24
00:04:04.110 –> 00:04:05.940
Lily Sturmann: Thank you, Dave. Mark?

25
00:04:06.600 –> 00:04:17.160
Mark F. Novak: Hi, so I'm Mark Novak. I am a former colleague from Microsoft; I was actually involved in confidential computing for much longer than the moniker itself has existed, and I was

26
00:04:17.760 –> 00:04:25.320
Mark F. Novak: the original architect for what became the Azure Attestation Service. Now, for the last few years, I've been working at JPMorgan Chase.

27
00:04:25.830 –> 00:04:34.920
Mark F. Novak: I'm a director of the applied research team here at JPMorgan Chase, focused on emerging technologies, and my focus areas are

28
00:04:35.400 –> 00:04:47.760
Mark F. Novak: ledgers and trusted execution. So in that capacity, I'm the only person on this panel with a firsthand interest in how trusted execution environments are governed.

29
00:04:49.770 –> 00:04:52.170
Lily Sturmann: Very good, thank you, Mark. And Nathaniel?

30
00:04:53.190 –> 00:05:01.200
Nathaniel McCallum: Hi, my name is Nathaniel. I'm the CTO of Profian, and I'm also the co-founder of the company, as well as the Enarx open source project,

31
00:05:01.800 –> 00:05:16.140
Nathaniel McCallum: which allows you to deploy WebAssembly workloads into confidential computing environments. I'm a frequent contributor to the IETF, and one of the co-founders of the Bytecode Alliance

32
00:05:17.520 –> 00:05:23.910
Nathaniel McCallum: and other standards committees, like the W3C. So yeah, that's basically me.

33
00:05:25.050 –> 00:05:26.880
Lily Sturmann: Thank you, Nathaniel. And Simon?

34
00:05:28.170 –> 00:05:50.040
Simon Johnson: I'm Simon Johnson. I'm formerly the program architect for the Software Guard Extensions hardware trusted execution capability that's part of Intel processors. I'm now the confidential compute lead across a range of activities that are happening here at Intel.

35
00:05:51.480 –> 00:05:52.890
Simon Johnson: I've been a member of the TAC

36
00:05:54.840 –> 00:05:58.260
Simon Johnson: For confidential computing and generally.

37
00:05:59.610 –> 00:06:10.740
Simon Johnson: Originally, my main task as part of SGX was to help develop the attestation architecture, so I think that has some relevance to play here.

38
00:06:11.700 –> 00:06:27.330
Lily Sturmann: Yes, thank you very much, everyone. So I want to make this webinar sort of conversational: I will be asking some questions, and I encourage everyone to jump in as they are moved to respond.

39
00:06:28.080 –> 00:06:35.670
Lily Sturmann: We'll see if we can generate some good conversation, but of course, if you want to raise a hand, panelists, please do so as well.

40
00:06:37.740 –> 00:06:56.820
Lily Sturmann: And first, I will begin by briefly defining attestation, and I'm going to use the definition that we had in the RATS architecture webinar, which is: attestation is the process of verifying that some system is in a good state, for some value or definition of good.

41
00:06:58.650 –> 00:07:02.160
Lily Sturmann: Which is a very general definition so.

42
00:07:03.240 –> 00:07:13.050
Lily Sturmann: To our panel, my first question is: do you agree with this definition? Would you expand upon it? Would you propose an alternative? And

43
00:07:14.250 –> 00:07:26.550
Lily Sturmann: what would you say is meant by verifying the system state, what does it add in terms of security, and why is this critical to confidential computing? So it's a few questions bundled in one, all going in the same direction, but, uh,

44
00:07:27.210 –> 00:07:30.210
Lily Sturmann: anything that brings to mind, please go for it.

45
00:07:30.750 –> 00:07:41.370
Mark F. Novak: I'll go first; I have given this question a lot of thought. I used to tell people, when I was doing architecture work around attestation at Microsoft, that

46
00:07:42.000 –> 00:07:50.550
Mark F. Novak: when I say Dave Thaler has integrity, you immediately kind of know what we usually would mean by that: that means that, you know, Dave's a good guy, you can probably,

47
00:07:50.820 –> 00:07:57.960
Mark F. Novak: you know, lend him money and he'd pay it back, and he was not going to lie to you and cheat and all that stuff. Machines don't know right from wrong.

48
00:07:59.040 –> 00:08:04.920
Mark F. Novak: And the only way to really reason about the machine is to assess

49
00:08:05.790 –> 00:08:12.480
Mark F. Novak: this other party's integrity with respect to a policy. So when you say the statement is "the system is in a good state,"

50
00:08:12.780 –> 00:08:20.550
Mark F. Novak: that is actually not correct. The system is in a known state, and the system cannot lie about the state it is in.

51
00:08:20.970 –> 00:08:28.800
Mark F. Novak: Whatever that state is, right? The process of attestation is basically the irrefutable proof that the system is

52
00:08:29.700 –> 00:08:43.200
Mark F. Novak: running the code that you expected to run, in the configuration that you expected it to run in, and both are equally important, right? A bad configuration can make good code do bad things.

53
00:08:44.340 –> 00:08:51.750
Mark F. Novak: The reason why we're having an issue with the definition of good is because, if I'm looking at a historical attestation statement,

54
00:08:52.410 –> 00:08:59.550
Mark F. Novak: then it may be good as defined some time ago, and it may have passed some policy then, but wouldn't pass the policy now.

55
00:08:59.790 –> 00:09:11.580
Mark F. Novak: But at least you know what to expect of a system in a given state. Attestation is ultimately about irrefutable proof that the thing will behave in a way consistent with whatever code and configuration it is executing.

56
00:09:12.780 –> 00:09:28.230
Nathaniel McCallum: I was glad to see that Mark brought time into the equation, because I think it's really fascinating that the definition of attestation does not actually talk about time. It says that a system is in a known state, using the present tense of the verb.

57
00:09:29.520 –> 00:09:43.890
Nathaniel McCallum: The problem with this, of course, is that no confidential computing technology actually measures that. What it measures is an initial state, which is extrapolated to be the current state through a trust that the system

58
00:09:45.000 –> 00:09:51.360
Nathaniel McCallum: has a predictable behavior from the initial state to now. Notice that I didn't say deterministic, because

59
00:09:51.840 –> 00:10:00.060
Nathaniel McCallum: there's a whole variety of ways in which the software may not be deterministic, but nonetheless the non-determinism is within specific bounds.

60
00:10:00.390 –> 00:10:09.780
Nathaniel McCallum: So basically today, when we do attestation, we're measuring the initial state of a system. We are assuming from that initial state that the state can't be tampered with,

61
00:10:10.380 –> 00:10:19.590
Nathaniel McCallum: and that the state transitions, even though non-deterministic, are trustworthy up until the point at which I measure them.

62
00:10:20.280 –> 00:10:27.510
Nathaniel McCallum: So that's way more detail than we probably need, but I do think it's important to be able to talk about it in those terms, because

63
00:10:27.810 –> 00:10:41.910
Nathaniel McCallum: we may want to consider what it means to be in a good state. Well, if it was in a good state, you know, five years ago, is that the same as being in a good state five minutes ago? It probably isn't. So I'll pass on to my other colleagues.
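[Editor's note: the "measured initial state" described above can be sketched as a hash chain, in the style of a TPM PCR extend or a TEE launch digest. This is a toy illustration; the component names below are made up, not taken from any real platform.]

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """Fold a component's hash into the running measurement register."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

# Measure an ordered launch sequence into one register, which starts zeroed.
register = bytes(32)
for component in [b"firmware", b"bootloader", b"kernel", b"workload"]:
    register = extend(register, component)

# A verifier that knows the expected component hashes can recompute the
# final value; any change in content *or* order yields a different digest.
# Note this captures only the initial state -- nothing after launch.
```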

64
00:10:43.440 –> 00:10:53.910
Dave Thaler: Sure, I'll comment as one of the co-editors of the RATS architecture document from which the definition came, and on some of the discussions that went into that.

65
00:10:54.210 –> 00:11:01.230
Dave Thaler: I wasn't going to comment on the timeliness and freshness aspect, but thank you for bringing that up, Nathaniel, so we can have a conversation about that, because those are all great points.

66
00:11:02.370 –> 00:11:15.810
Dave Thaler: So once upon a time, if we go back, you know, before attestation became popular, then for authorization decisions the closest thing you could do was, you know... how do you know

67
00:11:16.860 –> 00:11:22.620
Dave Thaler: whether an entity has been compromised? Is it actually authenticating as the entity that it

68
00:11:22.620 –> 00:11:23.130
Nathaniel McCallum: Is.

69
00:11:23.400 –> 00:11:28.020
Dave Thaler: And so we had something that was, at least in the IETF, called posture, right? You'd ask it.

70
00:11:28.530 –> 00:11:38.670
Dave Thaler: You'd ask the entity, are you in a good state, and it would say, yes, I meet all the following requirements. And again, for some definition of good, right? Does it have all the latest patches, whatever; you'd ask, and it says, yes, I have all those patches.

71
00:11:39.060 –> 00:11:49.170
Dave Thaler: Of course, it can lie, right? And so the whole posture assessment was really: does it know the correct answer? That was the closest thing, right? Is it secure, in a sense? Not like attestation.

72
00:11:49.350 –> 00:11:55.770
Dave Thaler: It has to be able to know the correct answer, but it could still lie, right? If it doesn't know the correct answer, then you know it's out of date, right?

73
00:11:55.980 –> 00:12:02.670
Dave Thaler: And so it was more of an incentive to be patched, an incentive to be managed, and so on, as opposed to some secure

74
00:12:03.270 –> 00:12:08.940
Dave Thaler: You know, proof or level of confidence that it actually was, in fact, correct and not compromised by malware and so.

75
00:12:09.600 –> 00:12:20.220
Dave Thaler: So that was the history, where there were more posture assessments. When we use the term posture in standards, we mean more like a self-attestation, which is not really our definition of attestation here, right?

76
00:12:20.760 –> 00:12:23.940
Dave Thaler: And so, just like Mark was saying, you know, there's no such thing as

77
00:12:24.660 –> 00:12:35.190
Dave Thaler: You know, objectively good or even objectively trustworthy right, those are statements from the perspective of a peer right which we would call say a relying party right.

78
00:12:35.430 –> 00:12:42.750
Dave Thaler: The relying party gets to decide whether something is good or whether something is trustworthy. It's a statement of the recipient's level of trust or faith

79
00:12:43.410 –> 00:12:51.600
Dave Thaler: or judgment about the state of a particular thing. So as Mark was saying, part of the job of attestation is to expose what the truth is, in a way that can't be

80
00:12:51.870 –> 00:12:59.970
Dave Thaler: lied about, which is different from posture, right? Such that the relying party or some external peer can make a judgment as to whether that is then good,

81
00:13:00.210 –> 00:13:10.620
Dave Thaler: or whether it chooses to place trust in it, based on some policy. The policy and the judgment are done by the relying party or by the peer; it's not a statement about the state of a particular

82
00:13:10.950 –> 00:13:17.820
Dave Thaler: device. It's a statement about your level of appraisal or faith or trust in that other entity.

83
00:13:18.120 –> 00:13:24.810
Dave Thaler: And so attestation is kind of those two aspects put together: statements about what the truth is, in a way that can't be tampered with,

84
00:13:25.110 –> 00:13:35.730
Dave Thaler: because they chain down to the hardware, and you're assuming that the hardware itself has protections against tampering, right? It's not like firmware where you can, you know, flash it or put things on there; it's more immutable, right?

85
00:13:36.060 –> 00:13:38.160
Dave Thaler: And so, as long as it chains to an immutable

86
00:13:38.610 –> 00:13:45.720
Dave Thaler: root of trust, then the relying party can actually make some judgment, based on appraisal policy, as to whether that truth is something that is

87
00:13:45.960 –> 00:13:51.450
Dave Thaler: good, because it's the thing that gets to decide what the definition of good is. When you say "for some definition of good,"

88
00:13:51.690 –> 00:13:59.340
Dave Thaler: what we mean is the relying party's definition of good, which sometimes it will delegate to a management system, which we might call a verifier in this sense,

89
00:13:59.790 –> 00:14:10.140
Dave Thaler: to make that judgment on its behalf, because it trusts somebody else to make that judgment. But it's not putting faith in the software, say, running on the actual device performing attestation.
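[Editor's note: the verifier/relying-party split described above can be sketched as a toy model. The function and field names here are purely illustrative, and a real attestation result is a signed token per the RATS architecture, but the sketch shows where the definition of "good" lives.]

```python
def verifier_appraise(evidence: dict, reference_values: dict) -> dict:
    """Verifier: check evidence claims against known-good reference values."""
    ok = all(evidence.get(k) == v for k, v in reference_values.items())
    return {"status": "affirming" if ok else "contraindicated",
            "claims": dict(evidence)}

def relying_party_decide(result: dict, policy) -> bool:
    """Relying party: the definition of 'good' lives here, not on the device."""
    return result["status"] == "affirming" and policy(result["claims"])

# The relying party trusts the verifier's appraisal, then applies its own
# policy (here: no debug mode) to the appraised claims.
evidence = {"launch_digest": "ab12", "debug_enabled": False}
result = verifier_appraise(evidence, {"launch_digest": "ab12"})
decision = relying_party_decide(result, lambda c: not c["debug_enabled"])
```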

90
00:14:11.760 –> 00:14:26.160
Simon Johnson: I think there's a couple of responses to the things that have been said, including the points Dave brings forward. First is that I think Mark said that it was an irrefutable

91
00:14:27.180 –> 00:14:36.780
Simon Johnson: proof of the actual state. You don't know it's irrefutable until you've actually done that assessment.

92
00:14:37.530 –> 00:14:47.940
Simon Johnson: It's really an assessment of: do I trust all the information that was given to me, right? And then the second part, I think I want to pick up on Nathaniel's

93
00:14:48.510 –> 00:15:00.030
Simon Johnson: perspective about it being the initial state of the system. I think it's the initial state of the TCB that you're trusting, right, and that may get built much, much later

94
00:15:00.480 –> 00:15:03.690
Simon Johnson: than the boot of the platform. I know we're going to get to what makes

95
00:15:04.620 –> 00:15:15.870
Simon Johnson: for, like, future validation and things like that. And one of the things we've learned over time, right, is that trying to fully evaluate a platform from the very beginning of boot

96
00:15:16.500 –> 00:15:29.160
Simon Johnson: to, you know, some piece of code that is running, you know, 20 or 25 minutes later is actually quite a difficult thing to do, right? People have done this; it's overwhelming, right? And

97
00:15:30.150 –> 00:15:41.370
Simon Johnson: what we've seen is, you know, some of these teams have tried to simplify as much as possible what you've got to understand

98
00:15:42.780 –> 00:15:53.010
Simon Johnson: in order to make that good judgment call that the system is in a good enough state for you to transact with.

99
00:15:56.040 –> 00:16:12.510
Lily Sturmann: Thank you, everyone; these are awesome additions and exactly the sort of conversation I had hoped for. I do see we have a question already, thank you. I'm going to get back to questions towards the end, so we'll be able to address some of those, but please keep sending them in.

100
00:16:14.790 –> 00:16:16.710
Lily Sturmann: yeah so going off of this.

101
00:16:17.820 –> 00:16:33.480
Lily Sturmann: I want to ask who attestation is for, and in what scenarios would it be advisable or even required? Are there some example use cases? And also, coming back again, what makes this a critical technology for confidential computing?

102
00:16:36.480 –> 00:16:39.600
Dave Thaler: I'm happy to kick this one off, because when the

103
00:16:40.740 –> 00:16:47.340
Dave Thaler: IETF work was being chartered, I threw out a conjecture on the use cases, and so, at least,

104
00:16:47.880 –> 00:16:56.370
Dave Thaler: I'll call it the Thaler conjecture, and so far it's held up. I would love anybody else who wants to weigh in on this topic.

105
00:16:57.210 –> 00:17:08.220
Dave Thaler: The claim was that every authentication use case is a potential attestation use case. Okay, I say potential because whether the

106
00:17:08.940 –> 00:17:16.200
Dave Thaler: ROI is there to actually invest in doing it depends on the actual use case, but it actually would make sense.

107
00:17:16.920 –> 00:17:24.510
Dave Thaler: Usually you're trying to make an authorization decision, okay? It's not the only case for authentication, but let's just say that you're making, for example, an authorization decision,

108
00:17:24.810 –> 00:17:33.510
Dave Thaler: and you want to know who it is, to say: is that entity authorized to do this thing? So you're using authentication for the purpose of getting identity to do an authorization check, okay?

109
00:17:33.930 –> 00:17:38.520
Dave Thaler: Well, without attestation, you don't know whether that entity

110
00:17:38.880 –> 00:17:43.710
Dave Thaler: is really that entity, or if it's been compromised by malware and hence is acting as that entity. So what

111
00:17:43.950 –> 00:17:50.100
Dave Thaler: attestation lets you do is give you a stronger faith in the level of authentication: that it really is that entity and not malware trying to

112
00:17:50.370 –> 00:17:57.630
Dave Thaler: act as that entity. Okay, and you can't do that without attestation. So that's where that conjecture came from, as an example, and that's just the

113
00:17:58.170 –> 00:18:09.480
Dave Thaler: that's just the authorization example, but that's pretty common, I think, one people can relate to. So I would say, by default, the straw man would be: every authentication use case is an attestation use case.

114
00:18:10.470 –> 00:18:19.470
Lily Sturmann: I mean, sorry, you would tie that back to the hardware root of trust, and that would be the difference between authentication and attestation in this case, would you say?

115
00:18:19.500 –> 00:18:23.520
Dave Thaler: Yeah, so here the question was about use cases, and I would just say all of them, right? That

116
00:18:23.550 –> 00:18:24.750
Dave Thaler: is the straw man. I would love to

117
00:18:24.750 –> 00:18:25.920
Dave Thaler: have the rest of the panel weigh in.

118
00:18:26.520 –> 00:18:35.520
Simon Johnson: Oh, I was just gonna make the observation, I think that's why you see such an affinity just recently with TLS-

119
00:18:36.540 –> 00:18:42.510
Simon Johnson: based usage models for attestation, right? We've seen...

120
00:18:44.160 –> 00:18:46.140
Simon Johnson: I think Intel has.

121
00:18:47.220 –> 00:18:57.480
Simon Johnson: looked at this several times, right, and we've come up with, I think, at least two or three different variations, but they all seem to revolve around: I want to set up a session,

122
00:18:57.780 –> 00:19:08.460
Simon Johnson: there's authentication required, okay, let's ride over the top of TLS. So I think that really sort of goes to Dave's point around authentication.

123
00:19:09.300 –> 00:19:21.780
Mark F. Novak: I have a couple things; specifically, let me up-level to the actual business use case here. So I quip that confidential computing has two classes of customers, the paranoid and the regulated, and the paranoid have no money.

124
00:19:23.250 –> 00:19:33.450
Mark F. Novak: The regulated, however, have a lot of money. And the other thing that is worth noting is that the majority of actual money

125
00:19:34.200 –> 00:19:52.530
Mark F. Novak: is spent not on security per se, but on compliance. So as a bank, we operate in, you know, many jurisdictions; we have, you know, 5,500 pages of various regulations we need to comply with; we have 2,700 applications to prove compliance

126
00:19:53.670 –> 00:20:01.440
Mark F. Novak: with, you know, all sorts of regulations there. Now, what attestation fundamentally is:

127
00:20:01.950 –> 00:20:13.260
Mark F. Novak: it's the TCB for the entire confidential stack that relies on it. And the other thing: if you'd like, just take a look at the history of regulations when it comes to information technology.

128
00:20:14.100 –> 00:20:22.740
Mark F. Novak: You know, it wasn't always the case that you had to prove that your network transport is secure, that your storage is encrypted at rest.

129
00:20:23.160 –> 00:20:33.540
Mark F. Novak: Right? It's only when technologies become available that regulators latch onto them and start demanding that you prove compliance with certain security postures. And I believe

130
00:20:34.410 –> 00:20:41.880
Mark F. Novak: that as confidential computing becomes more ubiquitous, more of a staple, and cloud computing as a result becomes more of a utility,

131
00:20:42.480 –> 00:20:49.080
Mark F. Novak: where you, you know, pay for network traffic, storage traffic, and what you might call the janitorial services, but

132
00:20:49.410 –> 00:20:57.120
Mark F. Novak: when the cloud provider is not supposed to look inside your workload, what the regulators will want to know is: prove to us

133
00:20:57.660 –> 00:21:08.370
Mark F. Novak: that you are complying with regulations around keeping data secure, keeping your workload secure. And when it comes to that, the attestation statements become the,

134
00:21:10.260 –> 00:21:12.900
Mark F. Novak: basically, the juice,

135
00:21:14.460 –> 00:21:14.850
Mark F. Novak: That.

136
00:21:16.050 –> 00:21:22.050
Mark F. Novak: you know, the core of how you prove compliance with regulations. That's where the money is actually going to be.

137
00:21:25.590 –> 00:21:29.190
Nathaniel McCallum: Yeah, I'll just weigh in and say that I agree with both Dave and Simon.

138
00:21:30.960 –> 00:21:32.670
Nathaniel McCallum: First the question was asked.

139
00:21:34.980 –> 00:21:43.980
Nathaniel McCallum: who is attestation for, and I think that attestation is for everyone. I think the reason it's not today is because attestation is hard.

140
00:21:44.970 –> 00:21:51.330
Nathaniel McCallum: And because it's hard, it's expensive, and because it's expensive, as Mark said, you have to have people that have money to be able to do it.

141
00:21:52.380 –> 00:21:54.510
Nathaniel McCallum: But there's no reason that attestation has to be hard.

142
00:21:55.140 –> 00:22:01.560
Nathaniel McCallum: We can actually build attestation services which are scalable, and we can actually make this both easy and cheap.

143
00:22:01.860 –> 00:22:07.680
Nathaniel McCallum: And when it's easy and cheap, then everyone can have attestation. And when everyone has attestation,

144
00:22:08.130 –> 00:22:17.550
Nathaniel McCallum: It creates a new plateau, in a certain sense, of what we expect out of services. Today, when I make a connection to a remote service, I hand them my data,

145
00:22:18.000 –> 00:22:24.570
Nathaniel McCallum: and I have no idea what they're going to do with that data, right? And do I trust them? Maybe, maybe not; I don't really know.

146
00:22:24.810 –> 00:22:39.120
Nathaniel McCallum: And the reason I can't even make that decision is because we simply don't have the primitives available to us to do anything better than that, which is why it's really important for us to focus on building scalable, usable forms of attestation.

147
00:22:41.580 –> 00:22:46.980
Nathaniel McCallum: I would also like to add that I agree that there is an affinity between attestation and TLS.

148
00:22:47.910 –> 00:23:00.240
Nathaniel McCallum: As Dave said, attestation is one input in an authorization decision, but one of the critical problems that we have is that we also have to secure the channels between confidential apps.

149
00:23:00.840 –> 00:23:11.940
Nathaniel McCallum: Because if we don't do that, we actually weaken the properties of the network itself. And the only real tool we have to do that, besides SSH, which is probably not fit for this,

150
00:23:12.450 –> 00:23:25.230
Nathaniel McCallum: is TLS, and that's going to require certificates. And this is why Profian actually ships an attestation service that does precisely this: it validates an attestation and it issues a TLS certificate that's used

151
00:23:25.680 –> 00:23:37.350
Nathaniel McCallum: by the Keep when it's communicating to other parties. And this means that every time you establish a connection to another party, you have a statement within the certificate from the other end that's cryptographically validated

152
00:23:37.860 –> 00:23:49.680
Nathaniel McCallum: of what the initial state of the system was and what the workload that's actually running in that system is, and that basically forms the basis of your authorization decision. And in today's modern infrastructure,

153
00:23:50.760 –> 00:24:00.690
Nathaniel McCallum: we do basically two levels of authorization: we have channel authorization from TLS, and then we typically also will have a higher level of authorization and authentication in, say, HTTP.

154
00:24:01.560 –> 00:24:15.510
Nathaniel McCallum: And so combining these two gives an effective ability to gather all of the inputs to an authorization process and make an intelligent decision. Simon, your hand's up.
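[Editor's note: the key-binding step behind the certificate issuance described above can be sketched roughly as follows. The function and field names are hypothetical, not the actual Profian/Enarx API, and a real service would also verify the hardware signature over the report before minting a certificate.]

```python
import hashlib

def make_report(measurement: bytes, tls_pubkey: bytes) -> dict:
    """Inside the TEE: the report's user-data field commits to the TLS key,
    so the signed report binds the key to the measured workload."""
    return {"measurement": measurement,
            "report_data": hashlib.sha256(tls_pubkey).digest()}

def should_issue_certificate(report: dict, tls_pubkey: bytes,
                             expected_measurement: bytes) -> bool:
    """Attestation service: check both the workload measurement and the
    key binding; only then embed the claims in a TLS certificate."""
    return (report["measurement"] == expected_measurement and
            report["report_data"] == hashlib.sha256(tls_pubkey).digest())
```

A peer that later sees the certificate during the TLS handshake can rely on the embedded claims, because only a workload with the attested initial state could have gotten a certificate for that key.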

155
00:24:15.900 –> 00:24:33.480
Simon Johnson: So I want to pick up on the "it's hard" comment that you made. The mechanics are not hard; I think the hard piece is that attestation gives companies that want to do risk assessment a new tool.

156
00:24:34.620 –> 00:24:40.710
Simon Johnson: And the hard piece is it exposes them to things that they might previously have wanted to ignore.

157
00:24:42.120 –> 00:24:42.330
Simon Johnson: and

158
00:24:42.690 –> 00:25:01.290
Simon Johnson: That is the hard piece: the integration of understanding machine state into risk management decisions. I think you kind of hinted at that: as I go to give up something to you, right,

159
00:25:02.580 –> 00:25:06.750
Simon Johnson: am I good, am I okay with that? Because, you know, no

160
00:25:08.070 –> 00:25:15.180
Simon Johnson: system is fully up to date. I think this is what attestation exposes: no system is fully up to date. So where is your cutoff?

161
00:25:16.680 –> 00:25:21.270
Simon Johnson: Right, and that will change over time; it’s not going to be...

162
00:25:22.680 –> 00:25:32.580
Simon Johnson: And this is sort of the timeliness piece, right: what you believe is okay today is not going to be the same in a year’s time, because we will have seen, you know.

163
00:25:33.720 –> 00:25:50.460
Simon Johnson: Look at what we’ve seen, like new-style attacks coming against platforms in the last four years, for instance, right, and the emergence of side channels, whether those be transient execution or...

164
00:25:52.110 –> 00:25:52.980
Simon Johnson: You look at.

165
00:25:57.000 –> 00:26:03.300
Simon Johnson: telemetry-type stuff that you can have access to, right? There’s all sorts of different ways now.

166
00:26:03.840 –> 00:26:20.640
Simon Johnson: How paranoid you are, or how much compliance you need, right, will determine where that cutoff line is and what the issues are. So the hard piece, I would argue, is not the mechanics of attestation and what the attestation represents, but...

167
00:26:21.330 –> 00:26:26.220
Simon Johnson: Assessing the risk. I’m trying, I’m trying to provoke Mark here into a reaction, I think.

168
00:26:26.760 –> 00:26:29.910
Nathaniel McCallum: I’ll actually respond that I think the mechanics are also hard.

169
00:26:31.140 –> 00:26:37.650
Nathaniel McCallum: Again, as someone who, in a former job, worked on cryptography at Red Hat.

170
00:26:38.760 –> 00:26:51.510
Nathaniel McCallum: I was there for 10 years, and I can vouch that cryptography is hard, and attestation requires cryptography, and there’s a million and one footguns that will get you if you don’t do it correctly.

171
00:26:52.620 –> 00:27:02.610
Nathaniel McCallum: And so it’s hard and even experts get it wrong, but even beyond the mechanics there are still parts of it that are hard that are more than just business based.

172
00:27:03.900 –> 00:27:08.520
Nathaniel McCallum: You know risk validation so, for example.

173
00:27:09.060 –> 00:27:20.370
Nathaniel McCallum: One question is: what does it mean to have workload equivalence, right? Because if you’re running an application on multiple different TEE implementations, which you need to do for redundancy...

174
00:27:20.730 –> 00:27:35.730
Nathaniel McCallum: How do you actually make the decision about which workloads are equivalent to one another when all you have is a cryptographic hash? Now you have to assign some sort of human meaning on top of this, and if you haven’t basically baked this into the way that you do your TEEs, which...

175
00:27:36.900 –> 00:27:40.800
Nathaniel McCallum: footnote, Enarx does, but if you don’t do that...

176
00:27:41.070 –> 00:27:48.660
Nathaniel McCallum: Then you’re going to have to build complex systems on top of that to track all of the measurements and determine which ones of those are equivalent to one another, so that you know, when I have...

177
00:27:49.200 –> 00:27:54.510
Nathaniel McCallum: a bit of code that is running on a TEE on Intel or on AMD or on Arm,

178
00:27:55.020 –> 00:28:01.380
Nathaniel McCallum: all of which have different cryptographic measurements, are those functionally equivalent for the purposes of risk or are they not?

179
00:28:01.620 –> 00:28:05.700
Nathaniel McCallum: So if you don’t have something built into the core platform to handle this, you have to build...

180
00:28:06.060 –> 00:28:17.400
Nathaniel McCallum: a pretty extensive meaning tracking on top of cryptographic hashes, and that is an issue that is entirely fraught with problems even from a technical standpoint. So...

181
00:28:17.910 –> 00:28:27.390
Nathaniel McCallum: to do the basic mechanics of validating all the signatures to determine whether a cryptographic hash should be trusted or not, it’s on the easier side of hard.

182
00:28:27.660 –> 00:28:29.190
Simon Johnson: There’s still lots of crypto, crypto...

183
00:28:29.490 –> 00:28:40.890
Nathaniel McCallum: cryptography involved in that, but that still just gives you a cryptographic hash, and now you have to go turn that cryptographic hash into some meaning, and that is something that is really fundamentally hard.
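(A toy sketch of the “hash to meaning” problem Nathaniel is describing: the verifier only ever sees per-platform measurement hashes, so some out-of-band, human-maintained mapping has to declare which hashes name the same logical workload. Every hash and workload name below is invented.)

```python
# Toy illustration of workload equivalence across TEE implementations.
# The same source code produces different measurements on Intel, AMD,
# and Arm hardware, so a human-maintained mapping must tie them together.
# All entries are hypothetical.

EQUIVALENCE = {
    # measurement hash     -> logical workload identity (invented)
    "hash-intel-9f3a": "payments-service:v1.2",
    "hash-amd-77c1":   "payments-service:v1.2",
    "hash-arm-b042":   "payments-service:v1.2",
    "hash-intel-0e55": "payments-service:v1.1",
}

def equivalent(measurement_a: str, measurement_b: str) -> bool:
    """Two measurements are 'the same workload' only if the mapping says so;
    an unknown hash is never considered equivalent to anything."""
    a = EQUIVALENCE.get(measurement_a)
    b = EQUIVALENCE.get(measurement_b)
    return a is not None and a == b

print(equivalent("hash-intel-9f3a", "hash-amd-77c1"))    # True: same release
print(equivalent("hash-intel-9f3a", "hash-intel-0e55"))  # False: different version
```

The cryptography tells you the hashes are authentic; this table is the extra, fallible layer of human meaning the panel is calling hard.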

184
00:28:41.190 –> 00:28:45.630
Simon Johnson: Yeah, I would say that the mechanics piece is deterministic, right? You can do it.

185
00:28:45.930 –> 00:28:46.320
Nathaniel McCallum: Yes.

186
00:28:47.070 –> 00:28:53.520
Simon Johnson: The risk piece, or the coolest piece, right, is less deterministic in...

187
00:28:55.020 –> 00:29:02.070
Simon Johnson: From my perspective. I mean, okay, I’ve been doing this for 10 years, so I look at the mechanics like, oh yeah, I know how to build mechanics.

188
00:29:02.940 –> 00:29:11.190
Simon Johnson: It’s the business-based stuff and how this integrates with the business. I really would like to hear if Mark’s got some perspective in this space.

189
00:29:11.730 –> 00:29:15.240
Mark F. Novak: I do it’s not one you’re going to wake up so.

190
00:29:19.950 –> 00:29:31.770
Mark F. Novak: We operate our private data centers, so again, speaking on behalf of my employer. We also have footprints in AWS, Azure, and GCP.

191
00:29:33.210 –> 00:29:54.780
Mark F. Novak: And we can cross-product that with the number of trusted execution environment technologies we house, both client-side and server-side, right? So you have, you know, the really phenomenal Azure confidential VM with NVIDIA H100 confidential GPUs, you have upcoming...

192
00:29:55.890 –> 00:30:09.150
Mark F. Novak: hardware storage and network accelerators. When I connect to a system, I’m really connecting to a load balancer, some sort of front end, and behind that, who knows what it does, right? So...

193
00:30:10.800 –> 00:30:25.530
Mark F. Novak: I think I’d encourage the industry to think about attestation as a Schelling point: how many management technologies do our people need to learn to properly attest environments across all of these providers?

194
00:30:26.700 –> 00:30:40.500
Mark F. Novak: Right, so Veraison has done the very right thing of relying on OPA, because the fewer policy management languages we’ll have, the better. But when I say Schelling point, success in it...

195
00:30:41.700 –> 00:30:52.710
Mark F. Novak: when it comes to foundational technologies such as attestation, such as TCP/IP, comes down to having as few things to choose from as possible.

196
00:30:53.460 –> 00:31:03.030
Mark F. Novak: Right, we do not want, we do not want to manage attestation services differently for every technology and for every provider. It is...

197
00:31:03.390 –> 00:31:11.940
Mark F. Novak: unmanageable, and when I say it’s unmanageable, it becomes undeployable, or becomes excessively complex, or it becomes very error-prone.

198
00:31:12.360 –> 00:31:18.900
Mark F. Novak: So we need to start thinking about, you know, us, the poor customers, trying to make sense of it all.

199
00:31:19.290 –> 00:31:26.010
Mark F. Novak: And absolutely, at our level, at the level of highly technical people, that will devolve into how do you make meaning out of a hash.

200
00:31:26.520 –> 00:31:37.110
Mark F. Novak: Absolutely, that is correct, but when you get to a point of, we have a development team that does not have a PhD and 15 years of experience in all kinds of confidential computing, they need to be able to say...

201
00:31:37.890 –> 00:31:45.630
Mark F. Novak: You know, let me explain what it is that I’m trying to run, and let some system convert this into a policy, and let me simply say...

202
00:31:45.840 –> 00:31:54.660
Mark F. Novak: You know, make sure that the thing I’m standing up is as the designer has intended, something like that. So that’s one dimension of complexity that we need to reduce.

203
00:31:54.990 –> 00:32:02.940
Mark F. Novak: And the other thing that we need to address and I don’t know if it can be reduced is confidential computing fundamentally shatters the.

204
00:32:04.020 –> 00:32:11.730
Mark F. Novak: What is it called the separation of duties that happens between cloud and customer.

205
00:32:12.150 –> 00:32:21.630
Mark F. Novak: Where today I say, cloud, I trust you to encrypt my data, I trust you to operate your things correctly; but now I’m basically saying, cloud, I don’t trust you.

206
00:32:22.440 –> 00:32:33.150
Mark F. Novak: So I need you, Microsoft, to prove to me that you’re running and configuring AMD’s and Intel’s environments correctly and staying out of my business.

207
00:32:33.840 –> 00:32:46.740
Mark F. Novak: Right, so “shared responsibility model” is the word I was looking for. We need to, not necessarily rethink it, but certainly be thoughtful about what it means in the confidential computing world.

208
00:32:48.000 –> 00:32:56.160
Nathaniel McCallum: Mark, if I didn’t know better, I would say you were gunning for a job in marketing at Profian. These are exactly the problems we are trying to solve.

209
00:32:58.860 –> 00:33:06.600
Nathaniel McCallum: All of the parties here have been working in this area for a long time, and bringing up the core primitives on hardware has been incredibly hard.

210
00:33:07.320 –> 00:33:16.860
Nathaniel McCallum: But we need to turn our attention now to these higher-level problems, questions like: am I going to have to teach every application how to do attestation, right?

211
00:33:17.400 –> 00:33:25.860
Nathaniel McCallum: If that’s our model, we will fail in precisely the same way that hydrogen cars failed against electric cars.

212
00:33:26.160 –> 00:33:31.350
Nathaniel McCallum: And the reason for that is the infrastructure was already in place for electric cars; you just plug the car into your wall.

213
00:33:31.740 –> 00:33:38.790
Nathaniel McCallum: And this is precisely why we immediately convert an attestation into a TLS certificate, so that you can...

214
00:33:39.240 –> 00:33:51.480
Nathaniel McCallum: just make a TLS authorization decision the way that you already do in all of your applications, and that is a direct cryptographic proxy for whether you’re doing attestation. It also, by the way, hides all the details.

215
00:33:51.900 –> 00:34:06.240
Nathaniel McCallum: Well, it doesn’t hide them, but it means you don’t have to validate all the details of, you know, 100 different hardware configurations across different clouds, because they all look exactly the same. So we are trying to solve these problems; I think they are solvable problems.
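(One way to picture the normalization Nathaniel describes: vendor-specific evidence formats go in, a single uniform claim set comes out, so relying parties make one TLS-style decision regardless of hardware. The evidence field names below are hypothetical placeholders, not the real SGX or SEV-SNP report layouts.)

```python
# Sketch: normalize heterogeneous TEE evidence into one common claim set,
# so every workload "looks the same" to the relying party regardless of
# which hardware it runs on. Field names per vendor are invented.

def normalize(evidence: dict) -> dict:
    """Map vendor-specific evidence fields into common claim names."""
    kind = evidence["type"]
    if kind == "sgx":
        return {"platform": "sgx", "measurement": evidence["mrenclave"]}
    if kind == "sev-snp":
        return {"platform": "sev-snp", "measurement": evidence["launch_digest"]}
    raise ValueError(f"unsupported evidence type: {kind}")

sgx = normalize({"type": "sgx", "mrenclave": "abc"})
snp = normalize({"type": "sev-snp", "launch_digest": "def"})

# Both now expose identical claim names, regardless of hardware,
# so a single authorization policy can be applied to either:
print(sorted(sgx) == sorted(snp))  # True
```

In the model Nathaniel describes, these normalized claims would then be bound into an ordinary TLS certificate, and applications never see the vendor-specific evidence at all.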

216
00:34:07.560 –> 00:34:11.190
Nathaniel McCallum: But I would encourage anyone who’s interested in solving hard problems to come talk to us.

217
00:34:12.570 –> 00:34:30.990
Dave Thaler: So I will jump in here as maybe the standards representative, because I think Mark and Nathaniel touched on an interesting tussle that’s out there. If you look at the challenges and the trends, standards is kind of the approach that we’re trying to take, where Mark talked about...

218
00:34:32.370 –> 00:34:41.760
Dave Thaler: you know, do we have heterogeneous or homogeneous environments, right, especially when you have multiple vendors and things that are involved, right? And so from the...

219
00:34:43.200 –> 00:34:51.480
Dave Thaler: end administrator’s perspective, that, you know, maybe Mark is representing by proxy, you’d like as much homogeneity as possible.

220
00:34:52.200 –> 00:35:02.340
Dave Thaler: On the other hand, as Simon talked about, right, you’re never up to date; we always have new things coming in, side channels and so on, and so different models and different vendors...

221
00:35:02.730 –> 00:35:08.760
Dave Thaler: want to close those new gaps and things, and so you have some homogeneity over time as different.

222
00:35:09.090 –> 00:35:14.370
Dave Thaler: say, attacks are mitigated in new types of hardware, new types of software, or new types of systems that mitigate those.

223
00:35:14.730 –> 00:35:23.250
Dave Thaler: And so you have vendor differentiation on one side, okay, you have a desire for homogeneity on the other side, and the other heterogeneity trend is...

224
00:35:24.090 –> 00:35:28.140
Dave Thaler: If you go back a number of years, then you look at, say, how...

225
00:35:28.740 –> 00:35:37.650
Dave Thaler: TPMs came in, right, and so there’s heterogeneity in terms of manufacturing jurisdiction, right? How many TPM vendors are out there, right?

226
00:35:37.890 –> 00:35:48.930
Dave Thaler: Well, in China, for example, the only things that are authorized are things from Chinese TPM vendors, and similarly in other countries, you’ve got to have a TPM vendor in their country, right?

227
00:35:49.440 –> 00:35:57.030
Dave Thaler: And so you have a lot of heterogeneity that says, okay, if I’m in China I’m going to use a Chinese TEE; if I’m in, say, America, I’m going to use...

228
00:35:57.360 –> 00:36:05.790
Dave Thaler: an American TEE; or if I’m in, you know, Russia, I might use a Russian TEE. And so if you’re somebody that’s running, say, a global...

229
00:36:06.180 –> 00:36:14.250
Dave Thaler: banking system, then you might again have a mix of these in different data centers and different technologies and different jurisdictions. And so how do we deal...

230
00:36:14.430 –> 00:36:22.320
Dave Thaler: with that level of heterogeneity, both in terms of multiple vendors that want to differentiate, and you have different jurisdictions and so on?

231
00:36:22.620 –> 00:36:31.440
Dave Thaler: And, really, at least from my perspective, this is the role of standards, which is not going to solve all of them right because it does not remove the ability to have vendor differentiation right.

232
00:36:31.710 –> 00:36:38.790
Dave Thaler: But it does help to provide some homogeneity in the case of things like multiple jurisdictions, multiple vendors, that...

233
00:36:39.270 –> 00:36:49.020
Dave Thaler: You know, in the past, before we started having standards, we had plenty of attestation that was out there, but it tended to be vendor-proprietary systems, and we’re starting to see more and more convergence there, which helps...

234
00:36:49.530 –> 00:37:02.340
Dave Thaler: operators and users like Mark is representing to say, okay, let’s say I have, you know, an Intel and an Arm and a RISC-V and an AMD; is there any homogeneity across those or not, right? And so...

235
00:37:02.580 –> 00:37:08.460
Dave Thaler: open source and standards are really where those advances are happening, that say, let’s take the things that are common, right?

236
00:37:08.700 –> 00:37:15.360
Dave Thaler: Yes, there’s going to be differentiation, there’s going to be new things, we want vendors to innovate and so on, but for the things that are common, can we provide some...

237
00:37:15.750 –> 00:37:23.640
Dave Thaler: homogeneous layer that says I can manage them as one set? And that’s where things like standards bodies, the IETF, TCG and so on, come in.

238
00:37:24.120 –> 00:37:31.440
Dave Thaler: As well as open source projects like Veraison, you know, Open Enclave and Enarx and so on, are trying to provide a homogeneity layer

239
00:37:32.400 –> 00:37:42.870
Dave Thaler: that does not necessarily remove the ability to have vendor differentiation. But that’s kind of the puzzle, right: you have trends towards homogeneity and trends towards heterogeneity, and we’ll kind of always be somewhere in between, with a mix of both.

240
00:37:43.230 –> 00:37:53.190
Mark F. Novak: You know, I just had a flashback listening to you, Dave. This is to date myself: back in the early 90s, I was an intern at IBM, and...

241
00:37:54.420 –> 00:38:01.590
Mark F. Novak: I was given a computer and in order to have that computer connect to other computers on the corporate network.

242
00:38:02.190 –> 00:38:08.370
Mark F. Novak: A specially trained person had to come to my office and basically you know in.

243
00:38:09.240 –> 00:38:17.910
Mark F. Novak: participate in the set of incantations. And that was the day when you had twisted pair and you had Token Ring and you had Banyan VINES and you had...

244
00:38:18.180 –> 00:38:31.980
Mark F. Novak: this, you know, smorgasbord of solutions, just to provide this very, very basic functionality of connectivity. And consider the difference between that and now, when it’s virtually...

245
00:38:33.090 –> 00:38:41.850
Mark F. Novak: you know, painless. It really comes down to pushing a couple of buttons, and maybe my grandma, 97 years old, still can’t do it, but certainly my very...

246
00:38:42.210 –> 00:38:50.310
Mark F. Novak: non-technical dad can. So this is why I keep stressing the point, maybe it’s very aspirational, yet: have a Schelling point. We will...

247
00:38:50.880 –> 00:38:59.160
Mark F. Novak: absolutely have differences between vendors, you know, but you don’t need to know right now what my Wi-Fi router is that sits between you and I.

248
00:38:59.400 –> 00:39:05.820
Mark F. Novak: Right, because it’s a kind of figured-out problem: how to manage this thing, how to keep it up to date, and how to make sure it complies.

249
00:39:06.090 –> 00:39:14.340
Mark F. Novak: And I think this is what will end up happening. I’m a little bit more optimistic long-term that we will be able to converge on fairly painless...

250
00:39:14.880 –> 00:39:23.250
Mark F. Novak: you know, management of ensuring that everything stays up to date and according to policies; it’ll probably just take a very significant amount of effort.

251
00:39:25.260 –> 00:39:37.980
Lily Sturmann: Right, thank you very much, everyone. I have not wanted to interrupt because we’re doing such a good job covering these important topics and the complexities, the multiple types of complexities, involved in attestation today, which I think is great.

252
00:39:39.360 –> 00:39:52.680
Lily Sturmann: I did want to make sure that we also talk about other trends or gaps that we see in attestation today. We’ve talked about the complexity, we’ve talked about...

253
00:39:53.610 –> 00:40:05.310
Lily Sturmann: some of the stumbling blocks that we’ve had. Any other comments on where attestation is going, since, of course, this is about the future of attestation and where we would like to see it go?

254
00:40:05.910 –> 00:40:08.040
Simon Johnson: I think there is this me.

255
00:40:09.090 –> 00:40:17.310
Simon Johnson: We certainly see from, you know, customers that Intel has spoken to, right, this desire to...

256
00:40:18.780 –> 00:40:27.240
Simon Johnson: be able to not have to write to one interface for a specific client vendor or particular data center vendor.

257
00:40:27.720 –> 00:40:46.500
Simon Johnson: And so it’s not just going to be about, well, standards at the level of, this is the bits that you get down the stream; but certainly, okay, as I go to find out these things, where I hand them over, I’ve got a service that I can go and give that to, right? So...

258
00:40:47.970 –> 00:40:53.400
Simon Johnson: You know, we certainly made an announcement around Amber, which is...

259
00:40:54.600 –> 00:40:55.470
Simon Johnson: built on.

260
00:40:57.240 –> 00:41:03.150
Simon Johnson: Public components that we have had not necessarily fully open source, but public components.

261
00:41:04.530 –> 00:41:16.500
Simon Johnson: and turned that into sort of a service offering now. I think Nathaniel’s talked about, you know, his capabilities, and I would imagine that we’ll see more of that. It’s been interesting.

262
00:41:16.950 –> 00:41:27.630
Simon Johnson: I look at the history of attestation, and when I sat and was working on SGX, like 10 years ago,

263
00:41:28.110 –> 00:41:38.970
Simon Johnson: maybe a bit longer, we were looking at the TPM, going, why isn’t the TPM being successful? Because no one is building the services that go along with it; that was our conclusion.

264
00:41:40.830 –> 00:41:54.000
Simon Johnson: And to a certain extent we did that for client SGX; we rowed back a little bit in the data center. We’re going back to that story again, and I think if people aren’t building those services...

265
00:41:55.260 –> 00:42:00.960
Simon Johnson: and making those available so like the big guys I don’t think the.

266
00:42:02.820 –> 00:42:07.770
Simon Johnson: attestation will be... You need those things to really, like, push that ecosystem through.

267
00:42:08.880 –> 00:42:14.250
Nathaniel McCallum: One of our perspectives on this, and I actually suspect that probably everyone will agree with this,

268
00:42:15.360 –> 00:42:21.150
Nathaniel McCallum: is that whoever is hosting the workload should not be the party that is also hosting the attestation service.

269
00:42:21.840 –> 00:42:31.320
Mark F. Novak: It can, so long as they remove themselves from the TCB. So Microsoft Azure Attestation Service, Dave, please correct me, in fact itself runs in an enclave.

270
00:42:32.610 –> 00:42:38.550
Mark F. Novak: And I worry, I worry, if we do what you say,

271
00:42:39.660 –> 00:42:49.770
Mark F. Novak: that if the attestation service is hosted off-prem, it might become a very significant bottleneck because of the amount of traffic that has to go to and from it.

272
00:42:50.490 –> 00:42:58.410
Mark F. Novak: It’s certainly a possibility, but I would be worried about scalability. I do take issue with AWS’s approach, where they...

273
00:42:59.220 –> 00:43:17.340
Mark F. Novak: literally own the entire stack: hardware, software, management, policy stuff, right? And they made a conscious decision to stay firmly planted in your TCB; and if you go to AWS, you know, I would not have called what they’re doing confidential computing.

274
00:43:19.500 –> 00:43:20.880
Lily Sturmann: Dave, do you have something to add?

275
00:43:22.230 –> 00:43:22.620
Dave Thaler: um.

276
00:43:23.730 –> 00:43:31.410
Dave Thaler: Yeah, so first, for the audience: if you go to the Confidential Computing Consortium, one of our white papers is about...

277
00:43:31.980 –> 00:43:41.940
Dave Thaler: The technical analysis and one of the points that it makes in there is that one of the goals is to remove the operator of, say, the service from.

278
00:43:42.270 –> 00:43:47.790
Dave Thaler: The set of entities that you have to implicitly trust right, and so it talks about that in more detail, I think that gets to.

279
00:43:47.970 –> 00:43:55.920
Dave Thaler: you know, Mark’s point about, if you’ve done a good job, it shouldn’t matter who’s hosting it; what matters is who has admin control over the contents of it, right? And so...

280
00:43:56.730 –> 00:44:05.490
Dave Thaler: let’s say Microsoft is hosting it, but, you know, JPMorgan Chase has admin control over the contents of it, right? So it’s okay, the fact that Microsoft is hosting it; I think that’s what Mark’s getting at.

281
00:44:06.540 –> 00:44:10.830
Dave Thaler: On your question, really, about, you know, the gaps and trends and stuff, I thought...

282
00:44:11.700 –> 00:44:18.600
Dave Thaler: I think in the chat Tom brought up a great question that I wanted to touch on, because I think that’s actually one of the larger issues,

283
00:44:19.110 –> 00:44:24.780
Dave Thaler: which had to do with you know levels of compliance and regulation and so on.

284
00:44:25.500 –> 00:44:34.800
Dave Thaler: Because, if you look at recent attacks, even in let’s say critical infrastructure, perhaps one of the most dire cases right where you have you know danger to.

285
00:44:35.220 –> 00:44:43.170
Dave Thaler: You know life or environmental damage and things, and so you look at recent attacks like over the last couple years things like you know.

286
00:44:43.650 –> 00:44:52.890
Dave Thaler: Triton, which took out petrochemical plants in Saudi Arabia, and power and so on, some of the more dangerous aspects, you might think are great use cases for confidential computing.

287
00:44:53.280 –> 00:45:02.010
Dave Thaler: Well, the systems in those types of attacks are already certified at the highest levels of the regulatory things that are out there right now. So IEC 62443, for example,

288
00:45:02.340 –> 00:45:10.620
Dave Thaler: is a popular one in many industries, and the systems that were taken out there are actually compliant at the highest level of that. And so why is that? Well, what they’re missing...

289
00:45:10.950 –> 00:45:19.050
Dave Thaler: is such things as confidential computing brings in, okay. And so one of the problems that we have right now, that Tom is really alluding to, I think, or at least from my perspective,

290
00:45:19.680 –> 00:45:23.220
Dave Thaler: is that one of the gaps right now is that for vendors

291
00:45:23.730 –> 00:45:33.120
Dave Thaler: supplying to those types of things, there’s no checkbox for: do I have attestation with a hardware root of trust, is there confidential computing protection, things like that.

292
00:45:33.300 –> 00:45:42.030
Dave Thaler: Is there a way to differentiate? And the answer is, right now, the regulations have a gap there, and that’s what I think Tom was pointing out, which, you know, is something that I find really concerning.

293
00:45:42.330 –> 00:45:49.560
Dave Thaler: And that’s something, as an industry, we need to work together to get into the regulations. And so that’s not answering

294
00:45:49.800 –> 00:45:53.670
Dave Thaler: Tom’s question; that’s explaining to the rest of the audience why Tom’s question is actually quite critical.

295
00:45:54.210 –> 00:46:04.050
Dave Thaler: The sooner we can get things into regulation, the better the opportunity for vendors to differentiate and people to be rewarded for actually solving these problems, and that’s what will actually make the world safer, right?

296
00:46:04.260 –> 00:46:11.430
Dave Thaler: So Tom’s question is so critical; it’s like, how soon can we get that? Because as soon as we can get that, the world starts to become a safer place. I would love to live in a world...

297
00:46:11.790 –> 00:46:17.430
Dave Thaler: where my kids are safer than we are right now, okay? That’s what kind of motivates me in confidential computing.

298
00:46:17.790 –> 00:46:26.940
Dave Thaler: And so this is something that we really have to work on together. I don’t know what the time frame is, right, because I don’t have a great answer to this question, but if I look at gaps, okay, I think that’s one of them.

299
00:46:27.180 –> 00:46:33.450
Dave Thaler: And that’s certainly something that, within the Confidential Computing Consortium, we formed as a collection of companies with similar interests, right.

300
00:46:33.690 –> 00:46:39.600
Dave Thaler: It’s something we have to work together on, and I would call on everybody else that’s on the attendee list here: this is something you may have influence on.

301
00:46:39.960 –> 00:46:46.620
Dave Thaler: If you work for a company or organization, or have contacts in a national delegation, that participates in these discussions,

302
00:46:46.920 –> 00:46:58.530
Dave Thaler: we really do need to get these things into regulation; it’s something we have to work together on to try to pull in this timeline. So I don’t have a great answer, but I agree that we have to, and this is one of the most impactful things we can do.

303
00:46:59.400 –> 00:47:15.090
Lily Sturmann: Thank you very much, Dave. Since we’re on it, does anyone else have comments on these gaps, or a possible timescale, to answer this question: what does the panel think will be the timescale for these regulations to demand confidential computing and attestation?

304
00:47:15.570 –> 00:47:24.510
Nathaniel McCallum: So I’m optimistic in the midterm, but I’m pessimistic in the short term. In the short term, the cost is simply too high.

305
00:47:25.170 –> 00:47:36.120
Nathaniel McCallum: We need efficient systems to be able to deploy apps at scale using confidential computing, and because the cost is too high, if regulators attempt to enforce this,

306
00:47:36.600 –> 00:47:42.600
Nathaniel McCallum: they’re just going to get too much pushback because it would be too onerous of a requirement on businesses.

307
00:47:43.380 –> 00:47:50.970
Nathaniel McCallum: We need to get to a phase where it is cheap enough that we can basically allow confidential computing to be a regulatory shortcut,

308
00:47:51.510 –> 00:47:59.370
Nathaniel McCallum: Meaning that you have basically two paths that you can choose one is the confidential computing path and the other is the more traditional path.

309
00:47:59.640 –> 00:48:06.120
Nathaniel McCallum: But the more traditional path would be more expensive, where you can just bypass a lot of this stuff as long as you’re using confidential computing.

310
00:48:06.420 –> 00:48:11.010
Nathaniel McCallum: And this provides an economic incentive without requiring everyone to adopt confidential computing.

311
00:48:11.640 –> 00:48:16.830
Nathaniel McCallum: that’s where I think we’ll start to see a real sort of watershed moment for confidential computing.

312
00:48:17.640 –> 00:48:27.000
Nathaniel McCallum: And then, of course, the last phase is precisely when the regulators require it at different regulation levels, and this is something I think we’re all hoping that we get to.

313
00:48:27.660 –> 00:48:39.030
Nathaniel McCallum: Because it does provide real benefits to the industry, and I hope we can get there in an incremental way. But we have to reduce the cost of adoption in order to make this even plausible.

314
00:48:40.710 –> 00:48:46.650
Lily Sturmann: Any other thoughts on that? It seems pretty critical to our conversation about the future of attestation.

315
00:48:46.890 –> 00:48:49.050
Mark F. Novak: Our regulations are written in a funny way.

316
00:48:50.100 –> 00:49:07.950
Mark F. Novak: You know, they’re not usually prescriptive; they’re usually descriptive, and they will typically say things like, you know, you need to ensure that data is safe from disclosure, something like this, right? They don’t say, you know...

317
00:49:09.180 –> 00:49:15.360
Mark F. Novak: At a certain level, then, it translates into what’s called control procedures, and those are a lot more prescriptive.

318
00:49:16.110 –> 00:49:26.700
Mark F. Novak: And I believe, again, to my previous point, the sooner we get to Schelling point status, the easier it’s going to be for regulators to formulate requirements and for us to prove compliance

319
00:49:27.210 –> 00:49:30.810
Mark F. Novak: with these requirements. And also, do not forget how many regulators there are.

320
00:49:31.230 –> 00:49:43.560
Mark F. Novak: So JPMorgan is subject to something like 78 different jurisdictions. Some are regional, like, you know, GDPR covers all of Europe; some are industry, so like...

321
00:49:44.070 –> 00:49:53.760
Mark F. Novak: PCI DSS; some are federal, so they will have something that’s US-wide; or some are state: New York, California, right? Singapore, right? So...

322
00:49:54.210 –> 00:50:05.670
Mark F. Novak: All of them have their own, you know, take on things, and again, all of these different takes had better convert to a relatively small and manageable set of primitives

323
00:50:05.940 –> 00:50:14.850
Mark F. Novak: that your regular Joe developer should not have, you know, much difficulty complying with. And I do believe we’ll get there; it’s just the...

324
00:50:15.450 –> 00:50:28.980
Mark F. Novak: The more the industry’s fragmented, the farther that goal is; and the more we focus on converging on things like policy languages and, like Dave said, sets of standards and protocols, the easier it’s going to be. So, you know, that’s going to be the...

325
00:50:29.730 –> 00:50:39.480
Mark F. Novak: You know, if you’re building your own attestation service and you’re creating your own management language, policy language around it, you are in the way of the Schelling point.

326
00:50:41.550 –> 00:50:51.270
Lily Sturmann: Yeah, so that's a great reason to come back to the Attestation Special Interest Group and the CCC as well, just to plug that. Dave, I see you have your hand up.

327
00:50:52.770 –> 00:50:57.300
Dave Thaler: Yeah, so it triggered other thoughts as I was listening to my colleagues here speak.

328
00:50:58.230 –> 00:51:05.460
Dave Thaler: One of the challenges of getting there. So this is the point about, I think, Nathaniel, you said you were optimistic in the.

329
00:51:06.270 –> 00:51:19.110
Dave Thaler: eventual outcome but pessimistic in the short term. And I think one of the challenges in the short term is, when you look inside the regulatory discussions and regulatory bodies, right, that the.

330
00:51:19.920 –> 00:51:30.090
Dave Thaler: Things that demand or motivate confidential computing are only one of the constituencies in the regulatory discussion, right. The other, counter.

331
00:51:30.420 –> 00:51:38.250
Dave Thaler: Often counter, I'll put it that way, often the counter constituency is lawful intercept, right. And so that's one of the reasons why it takes them a long time, because.

332
00:51:38.610 –> 00:51:42.060
Dave Thaler: You have lots of people from, say, the lawful intercept side.

333
00:51:42.360 –> 00:51:49.890
Dave Thaler: That want to make sure they have the chance of doing things, and in many circumstances, right, that is kind of a tussle that competes with the.

334
00:51:50.070 –> 00:51:56.910
Dave Thaler: Goals of confidential computing. So how do you find something, what's the right thing to regulate, when you have both of these constituencies that want to.

335
00:51:57.270 –> 00:52:02.550
Dave Thaler: Play in the overall cybersecurity space. So, like, a couple of years ago, just before COVID started.

336
00:52:02.910 –> 00:52:17.940
Dave Thaler: I was a panelist in a cybersecurity summit hosted by Texas A&M. Among the panelists, we had one person from the FBI, we had me there from confidential computing, and then we had a black hat person, an ex-hacker that had gone into the security.

337
00:52:19.260 –> 00:52:23.040
Dave Thaler: Consulting business, right. And so we had the three of us as the panelists in this one.

338
00:52:23.280 –> 00:52:31.080
Dave Thaler: And it was actually a great discussion about, you know, what's the right thing for cybersecurity, and of course you know which position I was advocating, since that was my role there. But my point is.

339
00:52:31.290 –> 00:52:38.610
Dave Thaler: The same sorts of discussions happen inside the regulatory bodies, and this is one of the reasons why, short term, it's very difficult, right, because.

340
00:52:39.060 –> 00:52:46.950
Dave Thaler: Of the cost aspect that Mark mentioned, plus the fact that getting agreement within regulatory bodies takes a really long time because of tussles like this. So.

341
00:52:47.700 –> 00:52:50.040
Mark F. Novak: You know, I like to make this point that.

342
00:52:50.820 –> 00:52:57.750
Mark F. Novak: And this is the thing. At Microsoft, I had a conversation some years ago where I basically said, I want the cloud to be a utility.

343
00:52:58.050 –> 00:53:09.150
Mark F. Novak: What I mean by that is, I buy water and electricity from a utility company, but I do not have a representative sitting in my living room observing what I do with the water and electricity.

344
00:53:09.690 –> 00:53:19.530
Mark F. Novak: Yeah, it's entirely possible I'm electrocuting someone or waterboarding someone, but if they wanted to know whether I'm doing that, they would probably need to serve a warrant. This whole lawful intercept.

345
00:53:20.010 –> 00:53:30.030
Mark F. Novak: Idea is very much in conflict with confidential computing and needs to go away, and I think we need to enable something like that to go away, because the cloud is not a.

346
00:53:30.750 –> 00:53:41.310
Mark F. Novak: License for a nation state or government to go and do effectively extrajudicial, warrantless surveillance. We should not make that easy; we should make that hard.

347
00:53:42.540 –> 00:53:45.360
Lily Sturmann: That is a great point. Simon, you have your hand up.

348
00:53:48.900 –> 00:53:49.770
Lily Sturmann: And we have a question after that.

349
00:53:50.220 –> 00:54:00.660
Simon Johnson: Oh, I had a really bad line interruption there, so I hope that's not my end. On the gap side and the regulation piece.

350
00:54:01.560 –> 00:54:12.780
Simon Johnson: I think one thing we tend to forget is that we're at the very, very early start of confidential compute. What do I mean by that? At the moment, we know how to protect.

351
00:54:14.280 –> 00:54:16.770
Simon Johnson: Workloads that are happening on the host processor.

352
00:54:18.690 –> 00:54:36.510
Simon Johnson: We're starting to see the emergence of, well, when those workloads move to other devices, there are other mechanisms being proposed, right. I mean, we know that the I/O guys are starting to look at providing protections for I/O devices; that's a few years down the, you know.

353
00:54:37.830 –> 00:54:41.610
Simon Johnson: Fully a few years down the pipe yet. So.

354
00:54:42.630 –> 00:54:48.270
Simon Johnson: I'm not sure that we have all the technical mechanics in place to say that, no matter.

355
00:54:48.840 –> 00:54:58.950
Simon Johnson: At what time and at what place that compute happens, it's confidential. And I think that's a role that we have; you know, it's not just a matter of the.

356
00:54:59.250 –> 00:55:12.300
Simon Johnson: Software ecosystem; there's actually a hardware ecosystem we've got to build out to enable that sort of confidentiality. So, in full, that's what confidential computing is, and I think.

357
00:55:13.680 –> 00:55:30.780
Simon Johnson: Only once we understand what that fuller picture looks like will we really start to see full regulations appear. Regulating just the host processor doesn't help if you're actually doing most of your computation on the GPU.

358
00:55:32.160 –> 00:55:32.520
Simon Johnson: Right.

359
00:55:32.700 –> 00:55:52.680
Simon Johnson: Or some other highly specialized compute. That seems to be the way things are going; we're going towards specialized compute, right, even within the data center. So until you solve that problem, I think it will be difficult for full regulation to emerge.

360
00:55:55.050 –> 00:56:02.520
Lily Sturmann: Great point, yeah. I want to make sure we get to them; I know we have a few audience questions left, and I believe several of them have been answered in the chat already.

361
00:56:03.480 –> 00:56:20.070
Lily Sturmann: There is one; the gist of it is that attestation is not yet properly specified: the attestation mechanism is underspecified and has some conflicting information. If anyone has a response to this one, we have just a few minutes left, I would say two or three minutes.

362
00:56:24.330 –> 00:56:30.930
Dave Thaler: I think this one is specifically directed to Simon, but I'm always happy to comment on the general IETF stuff, but.

363
00:56:32.220 –> 00:56:32.610
Simon Johnson: again.

364
00:56:32.850 –> 00:56:34.680
Simon Johnson: This is specific to TDX.

365
00:56:35.910 –> 00:56:38.700
Simon Johnson: It was mentioned as an example, but it seems like it can.

366
00:56:38.700 –> 00:56:41.280
Lily Sturmann: also be generalizable in terms of standardization.

367
00:56:43.590 –> 00:56:50.160
Nathaniel McCallum: Yeah, I'm not sure that we want standardization yet, and I know that sounds odd.

368
00:56:52.770 –> 00:56:55.920
Nathaniel McCallum: We need enough time to experiment.

369
00:56:57.480 –> 00:57:09.540
Nathaniel McCallum: So that common patterns will emerge. That's when standards are most successful: when it clearly becomes apparent to the industry that there are some best practices and.

370
00:57:10.500 –> 00:57:16.770
Nathaniel McCallum: That we can adopt standards to codify those best practices. I still think it's too early. I mean.

371
00:57:17.370 –> 00:57:26.160
Nathaniel McCallum: We've already been talking about doing confidential computing on other things that aren't CPUs. What does attestation look like in those cases? The answer is, we don't know.

372
00:57:26.610 –> 00:57:34.890
Nathaniel McCallum: And if we try to standardize something right now, we're going to shoot ourselves in the foot, and then we're going to have to create a second standard just to solve the problems with the first standard.

373
00:57:36.210 –> 00:57:45.600
Nathaniel McCallum: So I'm actually bearish on standards in the short term but bullish in the medium to long term. I think that, once the common.

374
00:57:47.100 –> 00:57:56.550
Nathaniel McCallum: Usage patterns are identified, and once people begin to adopt these things at scale using the existing experimentation that we have.

375
00:57:57.510 –> 00:58:02.340
Nathaniel McCallum: We can then start to standardize this process. I will say that.

376
00:58:02.730 –> 00:58:12.300
Nathaniel McCallum: The existing attestation mechanisms provided by all of the parties are woefully underdocumented. I can say this as someone who has implemented all of them.

377
00:58:13.290 –> 00:58:27.390
Nathaniel McCallum: There are a number of very big questions. I have resolved all of them, but it didn't come without much pain in implementation, so that's definitely something we could improve on in the short term.

378
00:58:28.050 –> 00:58:32.670
Lily Sturmann: For sure. Dave, I see you have a comment, and then we really have probably about one minute and.

379
00:58:32.670 –> 00:58:33.150
I'll be short.

380
00:58:34.350 –> 00:58:45.390
Dave Thaler: I'll be short. In the IETF, I think the position was very similar to what Nathaniel said: that it's often, and may be, too early for the protocols. So, like, when the working group that does attestation was chartered.

381
00:58:45.750 –> 00:58:56.760
Dave Thaler: They were trying to do some common message formats, but not the protocols to carry them; so, like, certificate kinds of things, you know, or an EAT token, if you're familiar with the RATS architecture. And it's because of that, there's so much.

382
00:58:57.540 –> 00:59:07.950
Dave Thaler: Innovation still happening there, so much heterogeneity, that they weren't ready to pick and bless one or two so far, right. And so I think the main point is really.

383
00:59:08.640 –> 00:59:17.640
Dave Thaler: That specification is what's needed, and I think the questioner would probably agree with that. Not necessarily standards, but making sure that what is out there, even if it is vendor proprietary.

384
00:59:17.820 –> 00:59:22.050
Dave Thaler: Is actually specified and understandable. And I think that's what we're calling for right now. So thanks.
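[Editorial sketch, not part of the panel discussion: Dave's point about the IETF RATS working group is that it standardizes common message formats for attestation evidence, such as the Entity Attestation Token (EAT), rather than the protocols that carry them. A rough Python illustration of that evidence/verifier pattern follows; the claim names, the JSON encoding, and the HMAC signature are all simplifying assumptions for this sketch, since a real EAT is a CBOR/COSE or JWT structure signed by a hardware-rooted key.]

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical shared key for this sketch only; real evidence is signed
# by a key rooted in the hardware and verified against vendor certificates.
ATTESTATION_KEY = b"demo-key-not-a-real-hardware-key"

def make_evidence(nonce: bytes, measurement: bytes) -> dict:
    """Attester side: build a minimal EAT-like claims set and sign it."""
    claims = {
        "eat_nonce": nonce.hex(),          # freshness value from the verifier
        "measurement": measurement.hex(),  # e.g. a hash of the loaded workload
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_evidence(evidence: dict, expected_nonce: bytes) -> bool:
    """Verifier side: check the signature and the freshness nonce."""
    payload = json.dumps(evidence["claims"], sort_keys=True).encode()
    expected = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, evidence["signature"])
            and evidence["claims"]["eat_nonce"] == expected_nonce.hex())

nonce = secrets.token_bytes(16)
evidence = make_evidence(nonce, hashlib.sha256(b"workload").digest())
assert verify_evidence(evidence, nonce)
```

The point of standardizing only the message format, as Dave describes, is that this claims structure could be carried over any transport a vendor chooses.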

385
00:59:23.040 –> 00:59:29.340
Lily Sturmann: Thanks very much. Yeah, this has been a great conversation; it's exactly what I was hoping would come out of this, as we covered.

386
00:59:29.730 –> 00:59:40.800
Lily Sturmann: The current state of attestation and the gaps, and also some of the things that we're doing well. So I wanted to thank all of our panelists for being here today.

387
00:59:41.370 –> 00:59:53.970
Lily Sturmann: This was great, and definitely also thanks to the Linux Foundation for making this possible. If you are interested in this topic, please check out the other webinars at confidentialcomputing.io slash webinars.

388
00:59:55.140 –> 00:59:57.210
Lily Sturmann: I know that we have also.

389
00:59:58.560 –> 01:00:01.830
Lily Sturmann: More interaction from the audience that.

390
01:00:03.150 –> 01:00:16.230
Lily Sturmann: We would like to engage with. So if you want to reach out to us, please look at the webinar description; you can join our mailing list or Slack channel. We look forward to hearing more from you. Thanks, everyone.

391
01:00:17.820 –> 01:00:18.300
Simon Johnson: Thank you.

392
01:00:18.570 –> 01:00:18.960
Thank you.