Great. Thank you. So yeah, confidential containers. It doesn't really matter who we are. I wanted to point out one thing, though, before we get started, which is that Mikko came all the way from Finland to do this talk, and he actually walked all the way here too. So I know some of us have just been in this room for the last two days straight and are sinking into the chairs a little bit and sticking to the tables and all this stuff. But let's all take a minute just to wake up and get ready for 70 minutes of pure raw excitement as we discuss confidential containers. Okay. This talk, this workshop, this we-hope-it-works-shop, is mostly going to be centered around this project, Confidential Containers. Very generic name, sorry about that. Confidential Containers is a CNCF sandbox project. We became a sandbox project at the beginning of 2022; time really flies. Raise your hand if you've heard of this project already, Confidential Containers. That's actually more than I expected. Raise your hand if you've heard of confidential computing. Not too bad. Raise your hand if you've heard of cloud native. Okay, that's everyone. That's what we hope. Confidential Containers is about bringing confidential computing to cloud native. We've got a couple of different goals for this workshop. The first one is just to give people a little bit of a taste of what confidential computing even is. We're aware that at KubeCon, at CloudNativeSecurityCon, that's not necessarily going to be familiar for everyone. So we're going to start out with an introduction to what confidential computing is. It's a very deep subject, so it's just going to be a little taste. Then we're going to talk sort of generally about how confidential computing and cloud native stuff can fit together. And that should lead us into getting a taste of the Confidential Containers project, which does exactly that. We'll get our hands on with that. Those are the goals. The last goal: have fun.
That's actually a mistake. We're not going to be having fun today. But I do think it's important just to note that this is kind of new stuff for a lot of people, so we don't have very high expectations about people instantly understanding every single thing about this project. We don't know how many of our slides we're going to get through. Maybe all of them, maybe just this one. Feel free to stop us even before the end. If you have some question and want us to slow down, we can probably give you the answer about confidential computing stuff. Like I said, you guys should think of yourselves as pioneers, the first people from this community who are starting to learn about this kind of stuff. That's exciting, and that's really what we're hoping for. What are we actually going to do, though? Here's the agenda, basically. First, like I said, an introduction to confidential computing. Then we'll talk a little about confidential containers. Then we have some hands-on stuff. Now, in some ways, what we're presenting here is a very difficult topic to make into a hands-on workshop, and you'll see why when we try to do that. For instance, I don't see anybody here with an AMD Ryzen server in their backpack. Doesn't look like it. So there are some hardware issues. Oh, there's one. Okay. There are some hardware questions. We've organized some things that you guys can try out, and that should give you a pretty good sense of what we're talking about. So the first one is going to be a demo of encrypted container images. That'll be pretty cool. Then we're going to come back and do a little bit more theoretical stuff, talking about VM-based isolation and process-based isolation. Then we're going to give you a demonstration of the confidential containers operator, which is one of the biggest features of the first release of Confidential Containers, which just came out at the end of September.
And then we'll show you something called enclave-cc, which is pretty cool. Then we'll talk a little bit about the future of the project, some exciting vision stuff, and how you guys can get involved with Confidential Containers, which is really our main goal: convince everyone here to start writing some code for the open source project. I'm optimistic, but we'll see what happens. So first things first: what is confidential computing? Maybe you guys have heard this buzzword. Let me give you a basic breakdown. Again, if you have questions about this, feel free to stop us at any point. A very common way of thinking about confidential computing is that it's about protecting data that is in use. If you think about data, as most people tend to do day to day, we already know quite a bit about protecting data that's at rest. You can put the data under your pillow, print it out, put it in a bank vault, no problem. There are a lot of ways to protect data at rest. Also data in transit: a little bit more tricky, but kind of a solved problem. In other words, we have TLS; we have things we can do to communicate securely with other people, to send data securely to other people. But what about protecting data that we're actually executing commands on, or processing, or trying to run ML computations against? How are we going to do that? Well, one option is confidential computing. And basically, what confidential computing does is it creates an enclave, which is a secure world that has some kind of isolation boundary between the stuff in the enclave and the stuff outside of the enclave. Now, there are different technologies out there, but a lot of them use memory encryption, and a lot of them map this enclave-versus-not-enclave distinction onto a VM. So you might think about a VM with encrypted memory that the host cannot read.
There's something kind of under the surface here, which is that this gives us a profound new way to think about the relationship between a host and a guest. And it could be a guest in a virtual machine, or it could be a guest in another context as well. But it's a new way to think about trusting the person who owns the computer that you're running on. And there are already a lot of interesting considerations for Kubernetes and cloud native that might be coming to mind here. We're going to get to them. But at a very basic level, it allows us to have two different worlds, a trusted one and an untrusted one. Now, isolation is only kind of half of the story with confidential computing. So we have a castle, essentially. We've got these big walls, with memory encryption and other hardware technologies protecting access to data as it's being processed. But there's also another part of it, which is attestation. And that's figuring out who's actually in the castle, what's actually going on. It's not that useful if I have this enclave that's really secure and I have all my data inside of it, but I don't actually know what was loaded into the enclave. So confidential computing almost always involves some sort of attestation, where you figure out, hey, what's actually happening inside the enclave? What did I start out with? Sometimes this is at boot, sometimes it's at runtime. There are a lot of different trade-offs here. But that's kind of the basic idea of confidential computing. Hopefully that makes some sense. I was pressing the shift key, not the arrow key. It's tough. So when we start talking about confidential computing and cloud native, kind of the first question we need to answer is: what should go into this enclave? What should go into the secure world? And this is actually a question that you need to ask yourself whenever you're doing any confidential computing-related project, because some things have to go inside.
The trusted things need to go inside, and the things that are not trusted are outside. Obviously this has huge implications in terms of... oh, I'm on the wrong slide here. Okay, that's the next one. Sorry, sorry. Let me first give you a comparison between some traditional technologies: containerization versus virtualization versus confidential computing. How do these things compare? To be clear, if it's not clear already, we're way down the stack here. There have been a lot of presentations about security over these two days, and we are really far down. Get a shovel and just dig as far down the stack as you basically can. We're talking about the primitive for isolating your workload all the way at the bottom, so something like runc. And when we look at containerization over here, I'm not referring so much to using containers as I'm really referring to the mechanisms that something like runc uses to isolate one container from another. And there are different ways this can work. The reason we have this slide is to create a huge argument and a flame war afterwards, so we'll all dislike each other. It's actually just to give you a little bit of an overview, so this is very general, but usually the way that something like runc works is that there's a shared kernel on the host, and mechanisms like cgroups are used to isolate some of the resources. This shared kernel has a huge attack surface, basically, between the so-called guest, the container, and the kernel. There are a lot of syscalls that you can make from inside of a container that the kernel will service. This is a big attack surface, and if you can find a hole in the kernel, you can potentially get into other containers. So this is kind of the classic spiel about how containerization is maybe not the strongest isolation boundary. Again, we're not trying to start any wars here, but this is a pretty common position.
So one thing that people have proposed is virtualization. Virtualization is another way to isolate workloads, and when you do virtualization, every single VM is going to have its own kernel. This is one of the big distinctions between containers and VMs: in VMs, every VM has its own kernel, and you're also going to be making use of a lot of hardware-related features, like nested page tables, that provide hardware enforcement of the guest address space. So the boundaries are thicker. Now, this is actually an extremely complicated thing to think about in some regards. Comparing, for instance, the API of syscalls to the API of hypercalls: is one of those more secure? That's a very interesting and complex question, but generally speaking, VM isolation is thought to be a little bit stronger than container isolation. There are already projects out there that take containers and run them in VMs. One of them is Kata Containers. Another one is Firecracker. We'll talk more about Kata Containers very soon. Confidential computing goes a step beyond this. In the first two examples, the host is pretty much in control of everything. With containerization, for instance, the host sets up the cgroups. Even with virtualization, the host sets up the virtual machine. It controls the page tables of the guest, for instance; it controls emulating the devices of the guest, things like that. It can also just dump the guest memory and read it whenever it wants to. With confidential computing, let's say we've got encrypted memory. Well, the hypervisor on the host can still maybe dump the memory, but when it does that, it's just going to be seeing ciphertext. It's not going to have access to the keys, which are stored in hardware, that would allow it to read the memory.
The hypervisor might be part of facilitating attestation, but it's not going to be able to generate a fake attestation report because, again, it doesn't have access to certain keys stored in hardware that it would need to do that. Like I said, bottom of the stack: containerization is kind of what we're doing now, but there are much stronger isolation boundaries that you can get, and it's probably self-evident why you might want that. Now, hopefully this will give you a nice sense of deja vu. We've got this slide about where we should actually draw the boundary. Like I was saying, in any project about confidential computing, you've got to decide what goes in the enclave and what goes outside the enclave. And the last slide actually had a little bit of a hint about this, too. There are some interesting trade-offs you get with containers and VMs, having the kernel be shared on the host or having the kernel be inside of each guest. That has pretty profound implications for the attack surface and for sharing: for how efficient these things are going to be, for how much overhead each guest is going to have, but also for how much shared attack surface there'll be. This kind of builds on that to some extent. So this is sort of a summation of a couple of years of thinking that has gone on in the confidential container space. In some ways it's a very simple question: what goes inside the enclave and what goes outside the enclave? But this is a deceptively difficult thing to think about, with a lot of nuance. Let me at least give you the three different options that are kind of the most reasonable. One of them is to put every individual container inside of an enclave. And you might be familiar with projects that have done something like this in the past; with SGX there have been things like SCONE or X-Containers that are similar. These take one container and put it inside of an enclave.
This is kind of nice because it minimizes the amount of stuff that's in the enclave. In other words, it minimizes the TCB, the trusted computing base: less code inside the enclave. That seems like a good thing, right? There are some other tradeoffs we'll get to. Another approach is to put a pod of containers inside of the enclave. So here there's some sharing of resources between the things in the pod, but we still don't have a massive amount of code in there. Another example, and this is actually one of the first things that occurs to a lot of people, is: why don't we just put the entire worker node inside of an enclave? If this enclave is a VM, why don't we just stick our whole worker node in and spin up all the pods in there? Okay. That obviously has a really big TCB, which might include things like, oh, you need some sort of daemon to mount weird proprietary storage into your worker node. Now this is inside of the enclave. You don't really know where this code came from. It's hard to audit. The TCB gets really big. It also means that the kubelet is inside of the enclave. That's pretty interesting. So if the kubelet is inside of the enclave, now we really care a lot about the API between the kubelet and the outside world; the kubelet's API is now part of the attack surface of the enclave. That's what I'm trying to say here. So, profoundly, there's kind of a bit of a tradeoff. A smaller enclave means a smaller TCB, but when you make the enclave bigger it gets easier to share things. You can imagine, for instance, if I have every container in its own enclave, well, I'm going to have to download the container image for every single enclave. And if a bunch of those are exactly the same image, I still have to download it again and again and again. So not great for sharing. And sharing is a very general term, too. Like, what about sharing on the network?
If I have every container inside its own enclave, I have to do a bunch of work to make sure that I'm securely communicating with other containers. I can't just throw stuff out on the network and hope that it works out, because it's going to go out into the untrusted world. If you look at something like a pod, well, maybe we actually now have network namespaces that are inside the enclave. So the containers in a pod can communicate with each other inside the enclave. That's kind of nice. Also, maybe we could share some of the container images inside the enclave. So, a bunch of tradeoffs. I'm going to give you a big spoiler here, which is that Confidential Containers took the approach in the middle. So it is pod-centric virtualization. That's kind of the hallmark of the design. There are other projects that do the other things. There's a project called libkrun that can do individual containers inside of enclaves. There's also something called Constellation, which is node-centric, right? There are a lot of complicated tradeoffs; suffice to say, Confidential Containers bet on the middle one because we think it's a good compromise and we think it's the best. But this is a big argument that we're not going to have right at this moment. I want to give a little bit of an overview, just generally speaking. There are a lot of confidential computing technologies on the market right now. If you're looking at VM-based enclaves, the two biggest, most well-known ones are AMD SEV and Intel TDX. SEV has sort of three different generations — they're actually referred to as features, but they're generations: SEV, SEV-ES, and SEV-SNP. There are a bunch of other options as well. We're also going to be talking quite a bit about doing this not at the VM level but at the process level, right? And looking at some process-based technologies where we have just parts of a process isolated. This is SGX, which you guys might be more familiar with.
It's been around a lot longer and it's a lot more mature. So we'll be talking some about both of those. But before we get way deeper into theory land, we want to go over into demo land a little bit and give you something to try. What on earth could we be trying at this point? Well, if you remember back to the agenda, we're not diving straight into things. We just want to give you a little bit of a taste of encrypted container images. You can sort of imagine: if we're going to be doing all this work to have our images be executed inside of an enclave, we probably want to be protecting these images, either with signatures or with encryption, while they're in the registry and while we're pulling them down. So this is some existing technology that's already sort of standardized, that existed before confidential containers did, but that confidential containers leans on completely. And I think Mikko is going to show us how to do this. We have to switch laptops, so everybody look over there. I can keep talking about the slides for a little bit here. So we have three demos today. The first one, which is about these encrypted containers, is the one to follow along with if you're interested in doing hands-on work yourself. This is the part where you can participate. So we have this QR code, which takes you to one of the repositories I created for this demo, and the link is also shown on the slides in black. So apologies for the color, but it's there. Getting to that GitHub repository will show you detailed instructions, but I also have the instructions on the slides, and then I'm planning to use the terminal here to kind of walk you through them. You can download the slides, by the way, from the KubeCon website. Download the slides, click on the link, or use your phone. But encrypted containers, they have been around for a while.
I think the early proposals for how to do encrypted containers, how to add the necessary metadata into images and registries, have been around since 2018. And I think one of the pioneers in this space has been our container security lead. Next slide: how this image encryption flow works and what tools you need to use to get your container images encrypted. So, starting from the left, you'll find projects like skopeo and ocicrypt. These are the projects that originally implemented the basics of container layer encryption. And one of the most recent additions is this concept of key providers and external key provider plug-ins, which is also something that we are using in confidential containers. So the basic idea is that these key providers manage all the details of how you're managing your encryption keys. In this particular case, it's not the layer encryption key itself, but basically a key that you're using to encrypt your layer encryption key. And the key provider API supports two functions, the wrap and unwrap APIs, on the encryption and decryption sides respectively. And in confidential containers, we have this component called the attestation agent, which is basically our key provider implementation. Underneath the attestation agent, we have various key broker client plug-ins that then support the connection between the actual key management service and the attestation agent side of things. In this workshop today, we're going to focus on what's on the left-hand side. And then later on, I have a demo of how to use the decryption in an enclave environment, basically. In the confidential containers project, we have this sample key provider implementation that we are going to be setting up next. So those of you who are willing to try it out may have already found your way to the repository. And maybe I'll just use this one because it's easier.
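The wrap/unwrap idea above can be sketched with plain command-line tools. This is only a toy stand-in for the real attestation agent: openssl plays the key provider, and the KEK value is made up for illustration.

```shell
# Toy version of the two-level key scheme: each layer is encrypted with a
# per-layer key, and the key provider only wraps/unwraps that layer key
# with its own key-encryption key (KEK). The KEK here is a made-up value.
kek="0123456789abcdef0123456789abcdef"
layer_key=$(openssl rand -hex 32)   # stands in for the layer encryption key

# "wrap": what the provider's wrap API conceptually does to the layer key
wrapped=$(printf '%s' "$layer_key" | openssl enc -aes-256-cbc -pbkdf2 -a -A -pass "pass:$kek")

# "unwrap": what the provider's unwrap API conceptually does at decrypt time
unwrapped=$(printf '%s' "$wrapped" | openssl enc -d -aes-256-cbc -pbkdf2 -a -A -pass "pass:$kek")

[ "$layer_key" = "$unwrapped" ] && echo "layer key round-trip OK"
```

In the real flow it is the wrapped value, not the layer key itself, that ends up in the image metadata, and the unwrap side runs inside the enclave after attestation.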
So cloning the repository is the first thing. And I tried to make it as easy as possible to use and follow. There is no magic. What we are doing here in the demo is: when you run this, after cloning the repository, it also clones the attestation agent as a Git submodule onto your system, and the make setup command basically does two things. It builds the sample key provider that we have in our attestation agent repository, sets it up, and runs it in its own container. The container on your system will then be called key provider. This is all automated by the make setup command. And the other step the make setup executes basically sets up another container that has the network access to connect to the gRPC service provided by this sample key provider. And that other container is where this encryption basically happens. If anyone is desperately trying this out, I can come over to you, look at the error you're getting, and say, oh, that's weird, and then move on, if that would be useful. Maybe I'll just try this one here. Just before the presentation, I realized that this talk might end up being a hands-on workshop on how to get my screen sharing to work. Let's see how this thing goes. If it doesn't work, then... There you go. Yeah, I sent the other one. So there's also... I've done a prerecording of this whole flow. So the third option would be to replay this asciinema recording. It's basically doing the exact same steps that we are going to be executing here on the command line. Yeah, sorry about that. So it's this repository under my name, mythi on GitHub. This is Mikko. You can probably find him from there. And by the way, I forgot to mention the prerequisites, or dependencies. This, of course, as I mentioned, runs things in containers, so you need to have a container runtime. And of course, being able to clone the repositories is something that is needed.
To speed things up a little bit: I've noticed that getting this key provider container prepared takes a bit of time, so I've started... I think I have those running. I'm not able to see very well what I'm typing here. But does it say make encrypt now? Yeah, the brilliance of this tiling window manager Mikko has is that he cannot see his screen. That seems close. Yeah, so what I'm doing here is just pulling the Hello World container image, which is a single-layer container image, using skopeo, storing it locally, and then running another skopeo command to encrypt the container layer. And to tell skopeo which key provider to use, I have to specify this ocicrypt config file. And this is a pretty simple config file. It just tells skopeo that, hey, we have an attestation agent key provider serving gRPC at this localhost network address on this particular port. And it looks like it successfully encrypted the container layer. Only one layer got encrypted; that's because the Hello World image has only one layer. With image encryption, it is possible to choose whether you want to encrypt all of your container image layers or just a subset of those layers. This might be handy in cases where you are using some very big base image that doesn't contain any confidential information that you would have to encrypt. You would only encrypt those layers where you are storing your sensitive data. For instance, if you have AI machine learning models that you want to make available to your customers on a public container registry, you might want to encrypt only those container layers that contain this sensitive information. And then let's take a bit of a closer look at our encryption results. So I ran this make check, which basically just runs skopeo inspect to show all the container metadata information. The first step shows you that we have an encrypted layer available in that image.
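For reference, the config file being described looks roughly like this. The provider name attestation-agent and the port are assumptions matching the demo's local setup and may differ in your clone:

```shell
# Write a minimal ocicrypt keyprovider config pointing at a gRPC provider.
# The address/port are assumptions matching the demo's local key provider.
cat > ocicrypt.conf <<'EOF'
{
  "key-providers": {
    "attestation-agent": {
      "grpc": "127.0.0.1:50000"
    }
  }
}
EOF

# skopeo picks this file up through an environment variable and selects the
# provider by name, roughly like:
#   OCICRYPT_KEYPROVIDER_CONFIG=$PWD/ocicrypt.conf \
#     skopeo copy --encryption-key provider:attestation-agent \
#       oci:hello-world oci:hello-world-encrypted
# Sanity-check that the config is valid JSON:
python3 -m json.tool ocicrypt.conf > /dev/null && echo "config OK"
```

The make targets in the demo repository wrap these same steps, so you normally never type the skopeo invocation by hand.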
And this is the new media type that's been proposed to be part of the OCI image spec. It's not officially there yet, but this is how it's envisioned to look. So what we have is an encrypted layer. And then two new annotations have been added to the image. These two annotations basically specify the key provider information that's embedded in the layer annotations metadata. And then if we pick the first annotation — I'm not showing the annotation values; I thought it was more useful to just show you what the annotation labels are, and the keys themselves are not so relevant. The last step here is that I've taken the first annotation value and base64-decoded it to show you how it looks. And this value is basically the key-provider-specific metadata that the provider stores in the container layer, to be able to later decrypt the container layers. So this is our sample key provider's own sample metadata. If you ever look at another key provider implementation, it might implement this metadata a little bit differently. Going back to the slides. Encrypted containers are of course nice, but there are also some challenges with them, and these are some of the things that we have tried to solve in our confidential containers project. So, the two challenges. Tobin talked about memory encryption and the reason why confidential computing exists. Without confidential computing, you might be processing confidential information in the clear, in your CPU and in your system memory. That problem doesn't go away here, and the unwrapped key material would be one such example of something you are processing in the clear without confidential computing.
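To make that last inspection step concrete, here is a toy round-trip with a made-up payload standing in for a real `skopeo inspect` value. The annotation label is the one ocicrypt uses for a provider named attestation-agent, but the payload fields are illustrative only:

```shell
# The per-layer annotation label for a key provider named "attestation-agent":
label="org.opencontainers.image.enc.keys.provider.attestation-agent"

# Made-up stand-in for the provider-specific wrapped-key metadata; a real
# value would come from the layer annotations shown by `skopeo inspect`.
payload='{"kid":"sample-kid","wrapped_data":"<opaque>","wrap_type":"AES"}'
annotation_value=$(printf '%s' "$payload" | base64 | tr -d '\n')

# The demo's last step: base64-decode the annotation value to see what the
# key provider stored in the layer metadata.
decoded=$(printf '%s' "$annotation_value" | base64 -d)
echo "$decoded"
```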
The other example, the other challenge, which is a bit more severe, is that the container layers still have to be unpacked on the node file system in order for the container runtime to run the container. And when you are decrypting your container layers' content onto a host file system, that might be visible to people or entities like CSPs, and that's not a good thing. Wrong computer, thank you. So, how are we solving these two problems with confidential containers? First, of course, confidential computing and memory encryption already solve the first problem: handling all your keys in confidential enclaves takes care of it. The other one required a bit more thinking, because we were facing some limitations in how the Kubernetes and container runtime architecture works. So, on the left-hand side in this diagram, if we look at VM isolation technologies and projects like Kata Containers: even though you are protecting your host system from untrusted workloads in VMs, you have your images being pulled outside the VM environment. So the kubelet, when it starts a container on a node, talks to the container runtime, like containerd, to pull in the images. containerd is responsible for pulling these images on the node, as of today, without any changes. And then containerd starts an OCI-compatible runtime; with Kata Containers, it's the Kata runtime, which is then responsible for first creating the VM environment and then starting the containers in the VM environment. But the thing is, the images are being pulled and unpacked on the host, and in the case of encrypted containers, of course, also decrypted there. With confidential containers, we're changing the flow of how these images are being managed. And the basic idea is that all this image manipulation, if I can call it manipulation — pulling and unpacking, or pulling, decrypting, and unpacking — is moved into the TEE environment completely.
And this has required changes in containerd in particular, because we needed containerd to support this image pull offload; that's what we are calling it. In the future, it's going to be something along the lines of containerd supporting an opaque image transfer API, and then different runtimes can choose how they implement this particular image transfer service. But you can think of it like the Kata sandbox in the TEE telling containerd: hey, I'm your image transfer service, so from now on I'm handling all the image transfer functions for you. So when the kubelet wants to start a container, containerd knows: hey, okay, I'll have to start a pod. In the case of confidential containers, the pod is a secure, encrypted VM that gets started. And then the image transfer function implemented by the Kata sandbox is responsible for pulling the image. Not just the image pulling, but we also need to do the image decryption key retrieval from the key provider service. So in the trusted execution environment, we have a component called the attestation agent that, again, knows, based on the layer metadata information, where the key provider is located and how to get the keys to decrypt the layer encryption key in the sandbox. Between the trusted execution environment and this key broker service, we have attestation in place. So before the KBS, the key broker service, sends the necessary keys back to the trusted execution environment, attestation takes place to make sure that the VM environment is a genuine trusted execution environment, and that it has all the necessary parameters in place, like the hash of the kernel image or initrd image: all the right parameters that are the characteristics of a trusted VM. Next slide.
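As a rough illustration of the wiring on the containerd side: at the time of the first release, the operator registered the confidential runtime as a runtime class in containerd's configuration, along these lines. The section names and the cri_handler mechanism are specific to the patched containerd that Confidential Containers shipped at the time, so treat this as a sketch rather than current upstream syntax:

```toml
# /etc/containerd/config.toml (fragment, sketch): a "kata-cc" runtime class
# backed by the Kata runtime, with image pull handled inside the TEE.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-cc]
  runtime_type = "io.containerd.kata.v2"
  cri_handler = "cc"
```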
With process isolation, this problem is a little bit different, because we don't have the luxury of a full VM environment in which to process these images. But before going into how we have solved it, I also wanted to say a few words about how this process isolation works. Tobin already briefly talked about it. Originally, the idea of process isolation was that you have an application that you are building from scratch, and this application has two sides, two different worlds: the trusted world and the untrusted world. During the implementation, you decide what data your application needs to process in the trusted execution environment, the trusted world, and what functionality doesn't need to be in the trusted execution environment and can run outside of it. This model won't work for us; it has the problem that it only works in cases where you're building your application from scratch, and that is not something we can use, especially with confidential containers, because we want to be able to run unmodified applications. Pulling container images, pulling encrypted container images, and being able to just run them and process the sensitive information in confidential computing environments is basically what we want. Fortunately, there are projects available that allow you to run unmodified applications while still using the process isolation characteristics. Tobin mentioned SCONE, but we also have open source projects like Occlum or Gramine. These are library OS projects that basically allow you to run unmodified applications in trusted execution environments. Here is how we are using these library OS projects in our confidential containers work, in what we are calling enclave-cc: we are running the services that need to be run in confidential computing environments.
We run them using these library OS components. Inside a pod with process-based confidential containers we don't have a VM anymore; we just have the virtual concept of a pod. But inside the pod we are still implementing our image transfer service function — as a container. This is just a container that we run, and right now we use the Occlum project to do that. We have a component called the Enclave Agent, which again registers itself as an image transfer service with containerd, and which is responsible for handling all the image manipulation functions and the key retrievals together with the key broker service. Then the Enclave Agent is also capable of managing — this is not part of the presentation; the speakers have had enough of my speech, so we can just shout if necessary. The Enclave Agent runs in an Occlum-based library OS environment, implementing all the image pull and image unpack functions. And, like I said, I was getting to the last piece: it automatically wraps the original container application together with all the necessary library OS components. When it's time to run the actual application workload, we first start the library OS environment, and then we start the actual workload image inside it. One extra thing we have to do with process-based confidential containers is a two-stage process. We're pulling an encrypted container image, but since we're not in a VM environment, we have no secure place to unpack it. So we decrypt the container image, but then we have to encrypt it again on the host file system, using mechanisms provided by these library OS implementations. They have a feature called the protected file system: when doing writes inside the library OS environment, it automatically encrypts all the writes to the host file system. And that's how we store the unpacked container images.
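For a feel of what the Enclave Agent automates, here is a hedged sketch of the manual Occlum workflow for wrapping an unmodified binary — the command names follow Occlum's CLI, but the binary name and paths are illustrative:

```shell
# Illustrative sketch: wrapping an app in an Occlum library OS instance.
# The Enclave Agent does the equivalent automatically for a pulled image.
occlum new occlum_instance
cd occlum_instance

# Copy the unmodified application into the instance's image directory;
# everything under image/ is sealed into Occlum's encrypted protected FS
# at build time, so the host only ever sees ciphertext.
cp ../hello_world image/bin/

occlum build                   # seal the file system and build the enclave
occlum run /bin/hello_world    # start the app inside the enclave
```

The protected-FS property described above is exactly why the re-encrypted unpacked image is safe to leave on the host.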
But then again, on the host side, all of the container content is encrypted in a way that is also confidentiality-protecting. So if somebody had access to the node and was able to find your container images, they wouldn't be able to tell what's inside them — all of the file names and all of the file metadata are fully encrypted too. And then, once the application container starts, the library OS implementation is again able to decrypt this intermediate encrypted format and start the application. A pretty lengthy explanation, but so is the implementation, because we don't have the luxury of storing all of this container image content in a VM environment; we have to rely on doing something on the host, on the bare metal side. Now, you might think these are such different things that they can't be in the same project, but that's not true. We actually have quite a few components that both the VM-based and the process-based mechanisms share. I talked about the attestation agent. We have Rust-based implementations of ocicrypt and image-rs — image-rs is basically the Rust crate we use to do all this image pulling from registries. So the Enclave CC and the Kata Containers-based functionality actually share quite a few of these building blocks. And what is, I guess, the most important thing — or two important things — is that from the end-user perspective, this is completely transparent. The end user, when deploying the workload, only needs to choose the runtime environment in which he or she wants to run this container. We do this using the Kubernetes runtime class mechanism from Kata Containers, and as Tobin is going to show you next, that's how these runtime classes are configured. So the user can choose between the Enclave CC runtime class or the Kata CC runtime class. That's about it. Okay, so it's time for another thing that hopefully you guys will be able to try.
This one's a little more complicated than the last one, but the first step, at least, is to go to the QR code, or download the slides and look at the link. Or — this one's on my GitHub, so if you go to github.com/fitzthum (F-I-T-Z-T-H-U-M, that's me) and look at my repositories, you'll find this one; that's the demo. So I'm going to go over to this repository right now. Fair warning: this will require you to have some kind of Kubernetes cluster that you can use, and you've got to use containerd with your cluster or you've really got no hope of making this work. But this repository isn't going away, so if you aren't able to sort it out right now, even with some help, this is something to look into afterwards — especially if you actually happen to have AMD or Intel hardware that supports confidential computing, because this will walk you through how to actually try out the project. This little demo we're going to do is mainly an adaptation of the project's official quick start guide, and I would really recommend that you check out the confidential containers quick start guide at some point, because it lays out all of this stuff. As Miko was saying, we support two different things in confidential containers: one is VM-based enclaves, the other is process-based enclaves. I'm going to show you now how to set up confidential containers with VM-based enclaves, which is basically a take-off on Kata Containers. So the first step is not this — don't try to ssh into the node — and also ignore that "you should update" thing. And let me just say in advance that I have a butterfly keyboard from Apple, so I will not be able to type anything accurately, but, jeez, as you can see. Anyway, first of all, I just have a single-node cluster here.
It's important that this thing is labeled as a worker node, or this will not work — but that's noted here in this little guide. The first thing we're going to do is install an operator. We realized pretty early on that our project was really complicated, that it was hard for developers to work on and even harder for end users to consume. So we created a Kubernetes operator that will automatically install everything you need, and this is also tested in CI, so hopefully it'll work. The first step is to just apply this thing — you can see it does something — and then we'll check that it worked. We have a namespace for all of the pods that run as part of the operator, confidential-containers-system. That's our namespace, and you pretty much always want to use it when you're checking what's running. You can see we have this cc-operator-controller-manager, and it's ready, two out of two — nice. Despite the brief alien invasion earlier, the demo gods are doing okay so far. This operator comes with a CRD, a Custom Resource Definition, so we now need to create a custom resource, the CC runtime, and you do that by applying this second YAML file. This will take a little longer to finish — you can see this cc-operator-daemon-install thing is still thinking, so we'll run this a few times. Yeah, great. This is doing a lot of things behind the scenes: it's installing Kata — the components on the host, the Kata shim — it's installing the hypervisor we're going to use, and it's also installing the firmware we're going to use inside of the VM. It's installing a lot of different components, and you can tell those things are installed because, once the operator has finished, we can do kubectl get runtimeclass — and as Miko was saying, we use runtime classes to switch between the different ways of using confidential containers.
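The steps above boil down to a couple of kubectl commands. Here is a hedged sketch based on the shape of the project's quick start guide — the release ref and kustomize paths are assumptions and may differ for your version:

```shell
# Install the confidential containers operator.
# The ?ref= release tag here is illustrative; pick the release you want.
kubectl apply -k "github.com/confidential-containers/operator/config/release?ref=v0.2.0"

# Wait for the controller manager in the operator's namespace.
kubectl get pods -n confidential-containers-system

# Create the CC runtime custom resource, which installs the Kata shim,
# the hypervisor, and the guest firmware on labeled worker nodes.
kubectl apply -k "github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.2.0"

# When installation finishes, the runtime classes appear.
kubectl get runtimeclass
```

If the second apply seems to hang, that's the daemon-install pod doing the heavy lifting described above; re-running `kubectl get pods -n confidential-containers-system` a few times shows its progress.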
So you see we've got a bunch of options here. We've got this kata runtime class. When you use the CC operator, all of these are going to be confidential containers, but kata and kata-qemu don't require you to have any specific hardware, so that's what we're going to use today. Then we've got kata-clh, which also needs no specific hardware but uses Cloud Hypervisor instead of QEMU. Then we've got kata-clh-tdx, which is Cloud Hypervisor with TDX — probably you guys are not going to be able to do that right now. We also have kata-qemu-sev, which is obviously the SEV version, and kata-qemu-tdx. So, a few different options. Using one of these is really simple, actually. If you clone this repository, you'll see that it comes with this nginx-cc.yaml, and really the only difference between this YAML file and any other is that I'm using this kata runtime class. That's it. So it's pretty transparent to switch from a standard workload to a confidential containers workload. Now, this container image that I'm using here, this Bitnami nginx thing, is not encrypted, right? We didn't think we quite had the time or the bandwidth to show you how to pull an encrypted image with Kata CC, although I think Miko is going to show you how to do it with Enclave CC for processes shortly. So we're just pulling an unencrypted image. Again, the quick start guide has all the instructions for encrypting your own images, pulling them, and doing this on TDX and SEV. So if you have the interest, and especially if you have the hardware, please check it out — we want as many people as possible to try that. Anyway, we're going to run this thing and hopefully it'll do something. There we go, it's running. Now, this looks an awful lot like what always happens when you run a pod.
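The switch really is one field. Here's a minimal sketch of what such a manifest might look like — the pod name and image tag are illustrative, not the exact contents of the demo's YAML:

```shell
# Illustrative: the only confidential-containers-specific line is
# runtimeClassName; everything else is an ordinary pod spec.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-cc
spec:
  runtimeClassName: kata        # run this pod as a confidential container
  containers:
  - name: nginx
    image: bitnami/nginx:latest
EOF
```

Swapping `kata` for `kata-qemu-sev` or `kata-clh-tdx` is all it takes to target real confidential hardware, assuming the node supports it.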
So I'm going to do one little thing to show you that something kind of interesting is happening here, and that's this command down at the bottom, where we actually use crictl to talk to containerd and ask which images were pulled on the host. First I'll just list all of the images that have been pulled — it doesn't quite fit on the screen — so let me grep for the one we were just using, and you can see nothing shows up. This container image was not pulled on the worker node. It was pulled by the Kata agent inside of the VM. And when you use an encrypted image and you're pulling it inside of a confidential guest, that starts to give you some pretty powerful guarantees about who can access that image and how its data will actually be processed. I think we're going to move on to other demos, but if people are working on this, flag me down; I'll come around and we can put our heads together. Or again, try it out after the fact and look at the quick start guide. Like I said, especially if you have access to the hardware, try it out, hunt me down on the CNCF Slack or by email or something, and let me know how it went for you, because the first release is pretty new. We really want to gather feedback about using this stuff, both from a developer's perspective and from any random person we can get to try it. So that's that little demo. I'll leave the QR code up from here. We have another demo, which does something pretty similar but using Enclave CC. Enclave CC is the process-based version of this — basically the same thing, but with SGX — and you can scan the QR code. You take that. As Tobin said, this is all pretty new. What I have next is even newer, because I got this thing into a demoable shape while waiting for my connecting flight. Pretty fresh, I would say. One thing again — screen troubles again.
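The host-side check can be sketched like this — the socket path is the usual containerd default and the image name matches the demo, but treat both as assumptions for your own setup:

```shell
# List images known to containerd on the worker node.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images

# Grep for the workload image; with image pull offload this should print
# nothing, because the image was pulled by the Kata agent inside the
# guest VM rather than on the host.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images \
  | grep nginx
```

An empty grep here is the whole point of the demo: the host never held the image, so a compromised node has nothing to read.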
So what I wanted to say is, you've seen this QR code already; it takes you to another asciinema recording, so you can just replay it to come back to this offline. It's basically the scripts that I'm running now, demonstrating this whole functionality, and I'm feeling pretty lucky that I don't have to type in all the kubectl commands myself — they're scripted. I used these scripts to prepare the asciinema recording, so now I just get to replay the exact same steps here, and given the screen-sharing problems and not being able to see things correctly, it helps a lot that I can use this little helper script. So what I'm demonstrating here is, like Tobin mentioned, Enclave CC — the process-based setup. This runs on SGX, but my demo has a nice little tweak: it uses a simulated build, so it runs the Occlum library OS components as if they were in a real SGX environment, with simulation enabled for demo purposes, which makes it possible to run entirely from my laptop. So let's see if I remember correctly. It's just checking that the cluster is in good shape. It shows you a couple of things: we have the custom containerd installed under a different host path here, and then Enclave CC has this special containerd shim that implements all the custom functionality, like the image transfer service, the pull offload. These are all implemented by this shim-rune binary, which is also here. For Enclave CC we have the enclave-cc runtime class configured in the cluster — and then there's something wrong again. What it was supposed to do was show the runtime class name, but it failed. Okay, but moving on to the next step — I hope my cluster is still in good shape. The next step is to show what happens if I try to run the encrypted container image without the Enclave CC runtime environment, but for some reason the cluster is not in good shape right now.
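For reference, selecting the process-based runtime looks just like the VM-based case, only with a different runtime class. A hedged sketch — the class name follows the talk, but the pod name and the encrypted image reference are illustrative:

```shell
# Illustrative: the same workload pattern, but scheduled onto the
# SGX-backed, process-based Enclave CC runtime instead of a Kata VM.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-enclave-cc
spec:
  runtimeClassName: enclave-cc   # process-based confidential containers
  containers:
  - name: hello
    image: registry.example.com/my/hello-world:encrypted
EOF
```

Without the `runtimeClassName` line, the pod would land on the ordinary runtime and the encrypted image's decryption would fail — which is exactly the failure the demo was meant to show first.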
The link will be there, so we'll skip this demo. The cluster wasn't in good shape, but the basic idea was just to show that it runs the encrypted container image using Enclave CC. The next step would have been: okay, the image decryption fails; and the one after that: I add the runtime class enclave-cc, and the same container image runs correctly, printing "hello world" in the logs. Apologies for the demo hassle — I don't know what happened to my cluster, I tested it just an hour ago. It's kind of a good thing, though, because we are out of time. We have a couple of things to wrap up. I'll run through these quickly, and then there'll be time for some questions — and we'll be around a little bit, I've got to go to the airport, but we'll be around if you want to talk about this stuff. First of all, we've shown extremely basic things today: one pod running one unencrypted container image. The goals of confidential containers are way bigger than that, and the community is actually already much farther along — we just wanted to make this something people could understand. Here are some huge things we didn't cover. We didn't talk at all about signed container images, right? We talked about encrypted container images, but signed container images can be used in conjunction with, or as an alternative to, encrypted container images — really powerful, really important. We didn't really talk about how attestation works, where we measure the whole stack of the guest and make sure we know exactly what's running inside of the enclave. Doing that on different hardware is pretty interesting and pretty complicated, so we skipped it. We also didn't talk about a huge, huge bombshell for certain Kubernetes people, which is that we don't trust the Kubernetes control plane. You may have noticed that the kubelet is outside of the enclave.
We don't trust the kubelet, we don't trust the control plane. That's pretty darn weird for some people. We believe in this thing called deprivileging orchestration: we want the control plane to be able to orchestrate the workloads without ever seeing confidential information. That's pretty interesting, and it's something we would love to talk to you about afterwards, by the way — if it rings a bell, if it seems suspicious, or cool, or interesting, or provocative, anything like that. Here are a couple of use cases we're pretty excited about. One of them is secure software supply chains — obviously a big deal right now. Think about a software supply chain where everything is built inside of enclaves, and where the build manifest that's produced contains hardware evidence proving that it was built inside an enclave, and proving exactly what that enclave looked like when the build was done. And then think about this: you could also use that as, basically, an admission controller for confidential containers. You could say: I'm going to have my workloads run in confidential containers, but I will only run containers that have themselves been built inside of confidential containers. Or what about the firmware of confidential containers — what if we could build that inside of confidential containers too, and have a manifest with a hardware root of trust connected to it? Pretty cool idea. So we think it has profound implications for how you can securely build software. Another idea: what about machine learning, where I have really sensitive data and someone else has really powerful hardware? Right now, confidential computing doesn't really have very good device support, but in the next two or three years — for confidential IO, really one to two years — devices like GPUs with confidential computing support, that's going to happen.
So, you know, can I lend my data to someone else so they can process it without being able to read it? Or what about somebody else giving me their data so I can process it for them without being able to read it? There are also people who have been talking about confidential containers and things like the blockchain — okay, I'm not going to get into that. But you can see there are a lot of profound use cases here, and in general the idea of separating trust between the people hosting the machines and the people running the workload is a big idea that we think can transform a lot of things in cloud native and elsewhere. So, we really want people to get involved with this project. Like I said, we're a CNCF project, and we're kind of young. We've got a bunch of big plans: more hardware to support, a bunch of ideas about how best to do attestation, and, like I said, we're looking at the Kubernetes control plane and the best way to integrate with it — and the best way to work with all the projects that have been talked about here. There are a ton of security projects that have been presented; just go down the list. Do they fit with confidential containers? What does it take to bring them together? We want to know that about your project — how it can fit with confidential computing — and we want to be the people who figure that out. We've got a Slack channel in the CNCF workspace. If you scan this QR code, it'll take you to our GitHub, which has a link to our community meetings; we hold them weekly. This is a growing community. It's cool technology and a cool thing to look at, even as a hobby — just an interesting thing that is growing, and I think we have a lot of great, nice, fun people working on it.
So, that's the pitch for you guys to come and help us out and really figure out what it means to do confidential computing in cloud native, because we think it's going to be a big, big deal over the next few years. With that, I think we maybe have some time for a couple of questions, if anybody has them. Hopefully. Oh, way over there. I can't hear you at all. Yeah, the mics are... I'll just walk over to you, how about that? Repeat the question — yeah, so the question is: when should we consider using confidential containers, and when not? Are there workloads people don't really care about, or workloads where it's particularly important? And — I shouldn't have walked by the speaker — basically, I think what you can consider here is that there is some cost to confidential computing. The hardware technology is designed to minimize overheads, and it does: usually confidential computing might have something like a low single-digit percentage performance hit on CPU-type benchmarks, like reading and writing memory. It's still very fast, because that's what it's built to do, but there's some performance overhead. The main thing, actually, is that there's a cost to doing it, because you need special servers that have this technology. So the main question is going to be: do I want to pay a little bit extra to have my things run in a more secure context? And that's mainly going to be regulated industries; that's where we really see confidential computing changing things. It's every kind of workload where right now people are saying, ooh, I don't know, can I move this to the cloud? I'm not sure I want my data out of my hands, right? That's where it has huge, huge potential.
It's true, there are plenty of people who are already happy with the security they've got, and long may that continue, but stronger isolation boundaries hypothetically benefit everyone — and there's a cost to them as well. Cool. I think we might be out of time, but if anyone else has anything... Sounds like... Yeah? Which part? In workshop one — hold on, let me pull up where... I'm still waiting. It might just take some time if you're waiting for the key provider service. Yeah, it took a very long time. I'm not a Mac user myself; I've been testing it. He told me it doesn't work. No, on Intel it works great. I'll take a look — I know M1 and containers have some little snags going on there, but there is a way this should work, so I'll talk to you in a second. Cool. Anything else? Thanks for giving it a try. Yeah, thanks, it's much appreciated. And yeah, thanks for coming, everybody — for putting up with this weird new thing and our weird demos that kind of worked — and hopefully you'll check out the project. Oh, sorry. So I see that the GPU isn't supported yet. Can you talk about the status of that, or the thoughts about getting GPU support? Yeah, so generally speaking, for GPU support in confidential computing you're going to need to look to the hardware manufacturers — AMD, Intel, and also NVIDIA, the people who make the devices — and talk to them about these things. There are some things in the works right now, some standards being created, for example in the PCI-SIG. Some of this is public, some of it kind of isn't, so I'm not going to give you any timelines for when this will be ready. But I would say that in the next one or two years we will start seeing devices where I can have a confidential VM, attach a GPU to it, have an attestation of the GPU be part of the attestation of the virtual machine, and then transparently share memory between them.
I don't have to re-encrypt everything when it goes out to the GPU. I have a way to securely share my data to the GPU with my confidential VM. That's kind of the vision, and that will happen in the next few years. Obviously it's going to bring with it some complexity in how we validate these measurements and we're going to need to figure out a way to implement it with confidential containers, but support for that in VMs is coming. Cool. This is the last chance for anyone else to have any questions. I think we're out of time. Cool. Thanks, everyone. Thanks.