Good afternoon. Let's get started. Hello. We have a couple more people coming in. My name is Mike Bursell, and I'm here to talk about trust, confidential computing and supply chain stuff. If that's not what you're expecting, feel free to leave now. I won't be too hurt. Oh, someone just has. That's always worrying. So here's the agenda: a bit about me, a bit about trust, a bit about confidential computing, hopefully some interesting stuff about supply chain, and a bit about the Confidential Computing Consortium. Oh, and some memes. I promised some memes when I submitted the talk, so I put some memes in, and I apologise for the Eminem quotes at the beginning of the abstract. You have to get people's attention somehow. I'm not going to be doing any Eminem; you're safe there. Who am I? I'm that most dangerous of creatures, a techie in a management position. I have been an architect, engineer, product manager, startup CEO, open source project founder, and now I'm an executive director of a Linux Foundation project. I'm also the author of a book called Trust in Computer Systems and the Cloud, published by Wiley in 2021. The fact that I always have to remind myself what it's called suggests that it hasn't sold as many copies as it should have done, but hey, I like tea. That's a mug of tea there. So let's start by talking about trust, because it's foundational to what I want to talk about. Just to give you a little bit of a heads-up, I'm going to be saying that the secure supply chain isn't secure. It's just not. So that's my starting point. If you think it is perfectly secure, again, feel free to heckle and we can have a discussion later on. But here's my book's definition of what trust is: trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.
One of the problems we have when we talk about trust, whether it's zero trust, or I trust my bank, or I trust my brother, or I trust my computer system, is that we don't really say what we're trusting it to do. The definition is a good start, but we need three corollaries. The first is that trust is always contextual. So I trust you, or I trust this computer, to do what, in what context? Is it in the context of I/O? Is it in the context of authentication? Whatever it may be. The second is that one of those contexts is always time. Your trust may decay over time, or it may grow over time, depending on what your expectations of the other entity were, and when you're thinking about how to build a system which holds a trust relationship to another system, you need to think about that. The third is that trust relationships are not symmetrical. Just because I trust my brother, who happens to be a doctor, to perform CPR on me doesn't mean he'd necessarily trust me to do that. As it happens, I'm trained in CPR and have done it, so I can. My book says that I trust my brother and my sister with my life, but my sister was a diving instructor and my brother's a doctor, and those are very different sets of trust relationships. And we can think about trust in computer systems in the same sorts of ways. So "should I trust you?" is the big question. It's a meme, right? I promised you memes; Futurama, I believe. Okay, so here's why this is relevant to people attending this conference, and hopefully to you people in this room and people who are watching this. The cloud is just somebody else's computer. We know this, right? So supply chain security is arguably all about trust. Who do you trust, early in the supply chain, to have done what things? That means we need to decide, based on the definition I gave you before, who we trust to do what and when.
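Those three corollaries can be sketched as a tiny data structure. This is purely my own illustration, not anything from the book: the class and field names are invented, and real trust modelling is far richer.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class TrustRelationship:
    """One-way trust from truster to trustee: never assumed symmetric."""
    truster: str
    trustee: str
    context: str              # corollary 1: trust is always contextual
    established: datetime
    valid_for: timedelta      # corollary 2: one of those contexts is time

    def holds(self, context: str, at: datetime) -> bool:
        # Trust holds only in the stated context and within its time window.
        return (context == self.context
                and self.established <= at < self.established + self.valid_for)

# Corollary 3: I trust my brother to perform CPR on me; the reverse
# relationship would be a separate object, and need not exist at all.
t = TrustRelationship("me", "brother", "perform CPR",
                      datetime(2023, 1, 1), timedelta(days=365))
print(t.holds("perform CPR", datetime(2023, 6, 1)))      # True
print(t.holds("give tax advice", datetime(2023, 6, 1)))  # False: wrong context
print(t.holds("perform CPR", datetime(2025, 1, 1)))      # False: decayed
```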
And if we can't even elucidate, if we can't clarify, what those things are, we can't even begin to build an understanding of the security of the supply chain, because we don't know which bits are trusted to do what. Okay, so this is kind of me: if in doubt, I default to not trusting people, which makes life a bit difficult, but it's the safest thing. So let's talk a bit about confidential computing, and then we'll bring it all together into why this is relevant and interesting. As I said, I'm the executive director of the Confidential Computing Consortium, which I'll talk about in a little bit. It's a Linux Foundation project with currently seven, soon to be ten or twelve, open source projects associated with it. Confidential computing protects data in use by performing computation in a hardware-based, attested trusted execution environment. The ones you've probably heard of are SGX and TDX from Intel, AMD's SEV-SNP, and Arm's CCA Realms. RISC-V is doing some work as well, and there are some other bits and pieces from IBM, but those are the ones that are most widely deployed. They're all chip-based technologies, hence the hardware-based part. Attested is important; we might talk about that in a little bit. Basically, you need to make sure that if you say to me, please run this thing, and I say, yeah, I'm doing it securely, you're going to say: no, the whole point of asking you to do it securely is that I don't trust you, so why would I trust you to tell me it's right? The attestation is the cryptographic process that allows you to make decisions about whether to trust what I've just told you. So how does it work? Well, let's think of a, this is very simplified, but hopefully everyone recognises this picture, a basic sort of cloud CSP or remote host. The red dotted line is the physical extent of it.
We've got the host OS, hypervisor, container management, all of the flim-flammy stuff, the infrastructure that runs things, which I'm uninterested in. Fine if you are. But that's all the stuff that's making everything run, and that's in the little red box. And then we've got a couple of workloads. The one on the right is the one that we care about, because that's our workload. The other workload could be from somebody completely different. And I don't own any of these pieces, right? I want the workload to be associated with me. I want to be able to trust it to do what I want it to do. That might be managing patient data. It might be using cryptographic keys. It might be doing AI training. It could be pretty much anything you want. So we use trusted execution environments, which are basically CPU-created sets of memory pages, protected by encrypting the workload in such a way that even the host OS, the hypervisor, all that sort of gubbins (English word: rubbish stuff at the bottom), can't look into or mess with the workload. So its integrity and confidentiality are protected. And when you do this with attestation, that allows us, and I'm not going to go into the details, it's not particularly germane to this topic, to say: you know what, I can be sure that what is in there is what I think is in there, and that the owner, the administrator, or a compromiser of that machine can't look inside it. My workload is protected. Now, they could, of course, do resource starvation; not much we can do about that. So of the CIA triad, we can only really be sure of C and I, but we do get those. And that allows us to start doing some really quite interesting stuff. So how is this relevant to supply chain management? I'm going to try and leave lots of time at the end for questions.
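The attestation step I mentioned can be sketched very roughly. This is a deliberately naive illustration of the shape of the check, not any vendor's protocol: a real attestation report (an SGX/TDX quote, SEV-SNP report, or CCA token) is signed by keys rooted in the CPU vendor, and I'm standing in for all of that with an HMAC secret.

```python
import hashlib
import hmac

# Stand-in for the hardware root of trust; in reality this is a key
# burned into the CPU and certified by the vendor, never a shared secret.
VENDOR_SECRET = b"stand-in for the hardware root of trust"

def make_report(workload: bytes) -> dict:
    # The hardware measures (hashes) the workload and signs the measurement.
    measurement = hashlib.sha256(workload).hexdigest()
    sig = hmac.new(VENDOR_SECRET, measurement.encode(),
                   hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": sig}

def verify_report(report: dict, expected_workload: bytes) -> bool:
    expected = hashlib.sha256(expected_workload).hexdigest()
    good_sig = hmac.new(VENDOR_SECRET, report["measurement"].encode(),
                        hashlib.sha256).hexdigest()
    # Two checks: the evidence really came from the hardware, and what is
    # running in the TEE is exactly what we expected to be running.
    return (hmac.compare_digest(report["signature"], good_sig)
            and report["measurement"] == expected)

report = make_report(b"my workload image")
print(verify_report(report, b"my workload image"))  # True
print(verify_report(report, b"tampered image"))     # False
```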
But if you do have questions in the middle, I'm absolutely happy for people to put their hands up and ask, because I've really skipped over quite a lot of detail here, and if it's stuff that people want to know more about, I'm very happy to stop or to go back. So, does everyone recognise this much-overused picture? I suspect one of the people in this room possibly wrote it. Oh, we've got a question at the front. Excellent. What can I do for you, sir? Oh, well, I've lost it. There we are. That one, yeah. That one. That's not a definition of trust; that's a definition of confidential computing. Great question. So the question was: I say it protects data in use, but what about the actual application? What about the code? How do you know that that's performing the right operations? That's a great question. The official definition of confidential computing talks about code and about data, and I need to get this the right way around. Integrity and confidentiality are both protected for data, by the definition. The integrity of the code is also protected by the definition, but the confidentiality of the code is protected only in certain implementations. So the integrity of the code can be checked; you can be sure that what is running is what you think. This is obviously a fairly short definition. If you go to confidentialcomputing.io, you'll find a lot more detail and white papers as well. So, a very, very important point: almost always you care about the integrity of what's running, because data in and out is rubbish if the integrity of the running thing isn't protected. But sometimes the confidentiality of the code is also important.
If you're a bank running risk calculations or deciding what to build or buy, or if you're doing, I don't know, pharmaceutical work, the algorithm itself may be worth protecting. So, great question. Does that answer it? Okay, anything else, or I'll carry on? Thank you. Right. So, I hope everyone recognises this picture. Again, probably one of you created the original. This is the other problem, right? You break all of it if you break one of these, because it's a chain: it's the weakest link in the chain. If you can hit one of these, you can't trust what comes out at the end. Makes sense for everyone? So we need to fix this. (And it's amazing what you get when you search for "trust" in a meme site.) We've got some source code. Let's say that we're happy that the source code is what we think it is and has come from where we think it has. But we need to build something. We need to make sure that what is built is what we think is being built, with the tools we think it's being built with. We've got a bunch of dependencies; we want to make sure that the dependencies are the things that we think they are. When we do the packaging, we want to make sure that what has come out of the build and the packaging is what we think it is. And then when we use it, we want to be doing the same. So how can we start thinking about it? Oh, first, we could just sign the package, and all of our problems are over. Thank you for laughing; I appreciate that. That is the correct response. If we have time, I might go into a bit of a rant about signing later on. But, okay, who's protecting that key? Just the operation of signing is an extremely sensitive operation, right? And if we're doing it on a machine that is compromised, well, the rest of it is useless. That's the last step before you're using it. It's absolutely useless.
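The problem with "just sign the package" can be made concrete. This toy uses an HMAC as a stand-in for a real signature scheme (Ed25519, say); the names are mine, and the point survives the simplification: a signature only vouches for whatever bytes were put in front of the key.

```python
import hashlib
import hmac

# Stand-in signing key; in reality an asymmetric private key that
# the maintainer (we hope) protects well.
SIGNING_KEY = b"maintainer's private key - protect this!"

def sign(artifact: bytes) -> str:
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).hexdigest()

def verify(artifact: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(artifact), signature)

good = b"the package we meant to build"
bad = b"the package a compromised build host produced"

# If the build host is compromised BEFORE signing, the attacker's output
# gets a perfectly valid signature: verification then tells you nothing
# about what you actually wanted built.
print(verify(bad, sign(bad)))    # True: valid signature, wrong artifact
print(verify(bad, sign(good)))   # False: only helps if signing was honest
```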
I'm assuming that these big black arrows are usually TLS or something, so we have at least some hope that things aren't being messed around with on the wire. But what are we going to do? Well, even if that's right, we need to protect the stuff that's being built, right? Because otherwise you're nowhere. And arguably, this is kind of interesting: there's a lot of complexity just in the blue boxes. I think I'm going to have time to have a rant about that in a bit. But what can we do about this? Well, here's the first thing we could do. The process of building and signing, assuming it's done by the same entity (and there's another whole set of questions there), could be protected in its integrity and its confidentiality, which means that when we get the stuff out of there, we're pretty sure that it has actually happened as we think. If we check that the packaging software is the software we thought it was going to be, its integrity is correct, it's the right version, it's got the right hash and all those sorts of things, then we've got a decent chance that when we sign it with a key which has been protected from whoever runs that machine, what we get out is going to be of some use. So that means that if we do this set of operations in a confidential computing environment, we have at least a chance of getting something useful out. A chance. Maybe we should do the same for these as well, right? And the reason these are in different colours is that they're likely to be different sets of people, different entities, who are controlling them. Now, there's some interesting stuff here. Remember I talked a little bit about attestation?
The attestation is the cryptographic proof that when we built that environment, the green box or the purple box or the blue box, it was built in the way we expected and is protected from the person who owns that machine. Now, one thing we can do, given that we have a certificate or other cryptographic artifact from that attestation, is add it to the signature. We can add it to what's coming out, so that the user can say: you know what, not only do I believe that it was packaged correctly, but I can believe that it was packaged in such a way that it was protected as it happened. We can do the same with the other pieces as well. And because we can chain certificates, we can carry that evidence all the way along. Is this in any specs at the moment? Not that I'm aware of. Is it something that might be useful? Oh, I think so. So this is how confidential computing, at a very, very simple level, can help you with supply chain stuff. Then there's the next question: when you're actually running the thing you've created, you probably want to think about running it in a confidential computing environment as well. I'm not even going to start on the source, because source is kind of interesting too, but I think that's worth thinking about as well. So I've gone very quickly. Let me just stop for a minute and ask if there are any more questions. You've got a question? Okay, so the question is: how can you be sure of the correctness of what's running there as well? Can you trust the hardware? Well, you have to trust something, because something has to be doing the work. The smaller your TCB, and I absolutely include the hardware in your TCB, the better. Confidential computing is basically the CPU or the GPU and none of the rest of the motherboard. So it's not your RAM, it's not your buses. There's some firmware in there, and some bits and pieces like registers.
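The idea of attaching attestation evidence to what each stage emits could look something like the following. To be clear, as I said, no current spec defines this; the record format and every field name here are invented, and the HMAC and hash stand in for real signatures and real TEE quotes.

```python
import hashlib
import hmac
import json

# Stand-in for a signing key held inside the build stage's TEE,
# protected from whoever runs the machine.
STAGE_KEY = b"build-stage signing key (kept inside the TEE)"

def attest_stage(name: str) -> str:
    # Stand-in for a real attestation quote over this stage's environment.
    return hashlib.sha256(f"measurement-of-{name}".encode()).hexdigest()

def emit(artifact: bytes, stage: str) -> dict:
    """Produce a signed record binding the artifact to the environment
    it was produced in, not just to the signer's key."""
    body = {"artifact_sha256": hashlib.sha256(artifact).hexdigest(),
            "attestation": attest_stage(stage)}
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(STAGE_KEY, payload, hashlib.sha256).hexdigest()
    return body

def check(artifact: bytes, record: dict, expected_env: str) -> bool:
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        record["signature"],
        hmac.new(STAGE_KEY, payload, hashlib.sha256).hexdigest())
    # Consumer checks WHO signed, WHAT was signed, and WHERE it was built.
    return (sig_ok
            and record["artifact_sha256"] == hashlib.sha256(artifact).hexdigest()
            and record["attestation"] == attest_stage(expected_env))

pkg = b"built-and-packaged bits"
rec = emit(pkg, "attested-build-env")
print(check(pkg, rec, "attested-build-env"))  # True
print(check(pkg, rec, "unknown-env"))         # False: wrong environment
```

Each coloured box in the diagram would produce one such record, and they could be chained so the end user verifies the whole pipeline, not just the final signature.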
But that's all part of the attestation that's checked, so it all should, hopefully, be good. Can you guarantee correctness? That's very, very difficult at the hardware level; it's a problem for greater minds than mine, and it's outside the scope of confidential computing. Basically, you say that one of the units of trust has to be the hardware and the cryptographic material associated with it. Were there some more questions? I thought I saw another at the back, in green. Oh, yeah, no, you'd need to go all the way back. No, of course you need to do that. Everything needs to be signed, everything needs to be running in these sorts of ways. I can only put so many boxes on the screen, but you're quite right, of course you are. Yes, Carol? Yeah, so again, I don't think there's been work done on formalising that, but whenever you're thinking about these sorts of trust operations, you absolutely need to be thinking about chaining. There's at least one chapter in the book about transitive trust and distributed trust, and the extent to which those are additive, or sometimes the opposite. There's some very complicated stuff there. And one of the most interesting bits in the book, I think, is around how open source trust works within the community, distributed trust and all those sorts of things, particularly in the area of security. Was there another question? Because if not, I'm going to start off on a rant. Right, okay, it's rant time. Let's say that the thing we're building, or one of the dependencies, whichever bit really doesn't matter, is a cryptographic library that we're going to be using. It's a useful thing to have. And we'll take the building bit. We have a maintainer who has a cryptographic key which they protect well. We'll just take that as an assumption; you've got to start somewhere.
When, in an SBOM for example, that maintainer signs that built piece of code, what does that mean? Okay, tell me. Oh, we've got so many transitive trust relationships in there already. So, "the code should do what I think it should do." Well, let's unpack that a little. Is the maintainer a cryptographer? One. Are they a compiler expert? Two. Have they checked every single line of code, looked for emergent behaviour based on different combinations of these things? Have they checked all of the people who are putting code in, to make sure that they're the people they say they are? The problem is that when you sign something, it is entirely unclear at the moment what that means. And that is why supply chain security is broken at the lowest level. Because even "the package", you know, what does that actually mean? And nobody really knows, because it's not defined. I think your answer is: it's best effort, right? And some people would say I'm an idiot and should have kept these things closed, because some people would say that this is much safer and much easier in a proprietary world, because there's a single vendor (we seem to have lost the AV for a moment; hopefully it's coming back. Oh, there it is. It's fine, no problem.) who can put their reputation behind it, and they've got all the right people. I just don't believe it, right? In the open source world, at least, and don't get me wrong, "many eyes make all bugs shallow" is a lie unless you have the right eyes looking at it. But at least we can apply those eyes. At least there are enough eyes available to be applied, rather than maybe two or three security people of sufficient depth, experience and understanding within any particular company,
however big that company may be, who can actually look at this stuff and deal with it. So, I don't think this fixes the problem. For me, this is a necessary but not sufficient start, because if you don't do this, you can't address any of the other things; all the other stuff is worthless, because a signature and a certificate are worthless if you don't know they're real. So, yes, absolutely: you're certain that someone has signed it, but you're not making any assertions about their capabilities and their knowledge, and that also needs to be part of the context of signing. A certificate says one thing, which is that the owner of, or the person with control over, a particular signing certificate or private key has the ability to create a signature and attach it to something. That's all it says. There's no other context associated with it. And the problem is that there are far too few people, though hopefully a lot of them are in this room, who understand that that is what it means. People are beginning to realise this about the open source software supply chain, excellent, but by saying that in the proprietary case you're safe, you're just avoiding the problem. Right. So, I'm going to carry on a bit, and we may have some time for questions, but I don't want to run on too long. Let's see if I can make this work. Okay, so a little bit about the Confidential Computing Consortium. This is what we are. We're a Linux Foundation project; we've been around for four years next month. It's basically about trying to get more people thinking about, using, and adopting confidential computing. As part of the Linux Foundation, we're committed to open source software; open source is best. Again, I'm preaching to the choir here: for security, for trust, and for auditability.
And auditability is absolutely vital here, because not only do you need to audit all of the stuff in your supply chain, all the build pieces and all the packaging pieces, but you need to audit the stuff that made those coloured boxes. Because if you can't audit that, there's nothing you can do. So we're encouraging all the hardware manufacturers to make all of their firmware open. It'd be nice if they made their hardware open; maybe we'll get there. Arm is out there, RISC-V is out there as well, so there are a number of folks doing this stuff. Open source is just best. Right, these are the current CCC projects. The first one is the one I founded; it's a bit quiet at the moment, because the company I was running didn't work out, but there we go. There's a bunch of other stuff, and it's worth looking at them all. If you have any interest, please get in touch, or just get in touch with the projects. How can you get involved? Well, firstly, use and contribute to those projects, and attend meetings. You can come to any of the Technical Advisory Council or outreach/marketing meetings. You don't need to be a member. I should probably not say that, given I'm supposed to recruit people, but it's true, and we encourage people to come along and contribute. And we have these different boards here: a particular SIG on attestation, and a particular one on governance, risk management and compliance, that sort of stuff. And if you are here and interested, then your organisation should probably think about joining the CCC. Like other Linux Foundation projects, it's not too expensive, particularly if you're a small company, and we have a couple of different levels as well. Oh, and if you're interested in this sort of stuff, and you think it's cool or fascinating or scary or bad, I don't pretend to have all the answers.
This is about trying to get people talking about these things and understanding them, and giving us a framework to talk about trust. I literally wrote the book because I went to a conference talk about trust where the person had no clue what they were talking about, and we had no shared framework to discuss it. So I wrote this in the hope that we could at least have some sort of framework, and I could be told I was wrong about stuff. It really doesn't bother me particularly. Well, obviously it does. So that's me. We have five or ten minutes to talk about whatever we want. That can be to do with this, or it can be to do with anything else. I mean, there's the weather; it's lovely in Bilbao. But if you have any questions, please fire away, and I will do the best I can. So, the question was: in this picture, is all of this supposed to be done in a trusted execution environment? To which my answer is: I would like to see each of these different coloured boxes being a separate trusted execution environment, or set of trusted execution environments. So, again, I would generally expect the entity who controls, let's say, the packaging and the signing, to do all of that in a set of trusted execution environments in their own trust domain (and I'm using that term in the technical sense discussed in the book). So you might have a bunch of these, but they're collaborating together; you control them; they're mutually attested; you're happy that the integrity of all of them is what you expect. Yeah? So the question is: doesn't this sound like a lot of gymnastics when you could just be running your own hardware? To which the answer is: yes. I'm really glad that you trust your own hardware. And yes, there may be times where you want to keep a separate build machine, where you have special images that you know, and you audit every one, and all those sorts of things. That is certainly an option, right?
And one of the interesting things about confidential computing, depending on how you do it, and there are a number of different approaches, is that you can actually have a much smaller TCB, both software TCB and hardware TCB, than you would otherwise. So if you can be sure that you don't need to worry about the rest of the OS, for instance, you can just have a library-based equivalent of a VM, those sorts of things. That's one way to control it. But yes, there are times you want to do that. There are times you want to use an HSM, frankly, but HSMs are complex, and they're expensive, and they're difficult to manage. This happens in the cloud all the time now, and that's not going to change. I'd like it to change for some things, but we have to admit that this is where it's happening. And also: can you trust your own hardware, and the people who have access to the hardware, et cetera? It's all very well if you're doing a single build once a month, but if you're doing CI/CD, you need to be able to deploy stuff in a way that works. Is it gymnastics? Yes. Is it complex? It's becoming less so, and the more people who are involved, the less complex it will become. I'm going to summarise the gentleman's question. He said he would presumably prefer it if people didn't have to work out their own thing, and the industry provided tooling to make it easier. Yes. However, it's important that that tooling provides visibility into the processes and what's happening, so that those who are expert enough in a field can examine it, and that as we move towards making it more mainstream, good monitoring software and all those sorts of things allow people to be alerted about the things they need to worry about. So yes, it absolutely needs to get there. If you're interested in the complexities of monitoring and debugging trusted execution environments, there's a blog post on the Confidential Computing Consortium site which I wrote about that. It's horrific.
But it's kind of interesting, because the whole point of debugging is that you look at what's going on, and here you can't look at it; the same goes for logging. So, kind of interesting. Anyone else before this gentleman? He has another one. Okay, go for it. Yeah. Who is "you", and who is the user, in that question? So the question was about the question mark I put above that: is there a way for you to trust that the user is using the thing you think they should be using? I talk quite a lot about trust relationships, and understanding who "you" is is kind of important here. So: is there a way for, let's say, the initiator of that process to be sure (yay, thank you. Thank you. Do that again.) that the person with access to that process can only do what it should do? The answer is yes. There are some really interesting and complex ways of doing that. And there are also ways of ensuring that the person who initiated it is not the person who has access to it, to do things like shared multi-party compute, and to prove that even though I started it, I can't change it, and you can trust what's going on in it, by using nice crypto and stuff like that. There's a whole bunch of stuff around there. So the answer is yes, and absolutely no. Which is my favourite type of answer, really. Anyone else? I'm having fun. Yeah, please. Yes, there's a very famous story by, is it Thompson? Yeah, that one. It's in the book. It's a great question and a very important one, and yes and no: it's not enough to go back just to the compiler; you need to look at the chips too. But yes, thank you, whoever it was who referenced that. It's a well-known story, and yes, you can't trust the compiler there. There is an open source project called Bootstrappable Builds which is trying to solve that problem. I hope it uses this stuff, because if it doesn't, I don't trust it.
I'm not sure I trust it anyway, but I don't trust many things. But people are looking at this sort of thing, and they're quite right to; they should. We need to look at all of this stuff, because most people just take this stuff as given. They're the people who click through all the cookie banners, right? None of you do that, do you? No? Okay. But we need to help those people, because they're not in a position to do it, and we are. We have to build this stuff right, because critical infrastructure, banking, pharmaceuticals, medical stuff, your energy supply, all of this matters, and it all uses this stuff, and government thinks that the stuff they've put out there now sorts it, and it doesn't. We need to expose this stuff and make it happen. If you've got any other questions, find me; I'll be at the booth for a while. This has been great. Thanks for your attention.