Hello everyone, let's start. First of all, we are quite happy to be here at KubeCon. This is our first KubeCon. I'm Fabiano Fidêncio, I work for Intel, I'm part of the Kata Containers Architecture Committee, and I'm one of the maintainers of some parts of the Confidential Containers project.

I'm Jens Freimann. I'm an engineering manager at Red Hat, and I contribute to the CoCo project mostly on the operator side of things.

Today we're going to talk a little bit about how you can consume Confidential Containers in a really easy way. We are going to cover: what is Confidential Containers? A very quick introduction to that. We're going to talk a little bit about Kata Containers, and this makes me think: who here is familiar with Kata Containers? Raise your hand. Oh, that's nice, cool. We are going to explain how we went from Kata Containers to Confidential Containers, cover a little bit of the different flavors of Confidential Containers that we have, show some use cases, and finish with a short demo.

So, Confidential Containers. This is an umbrella project. The front door of the project is the operator; we are going to be showing the operator today. But the repo also includes a bunch of other projects: we have a Key Broker Service, we have the Attestation Agent and the Attestation Service, we have lightweight virtual firmware to start the Confidential Containers VMs, we have Rust libraries for pulling, encrypting, and decrypting images, and we also have a place for the different flavors of Confidential Containers. And of course Kata Containers is not officially part of the project, as in hosted in the same repo, but we have a really strong relationship. The Confidential Containers project has been part of the CNCF for one year now, a little bit more than that.
We are a sandbox project, and as you can see we have a bunch of companies contributing to this right now, going from CSPs to silicon vendors, software vendors, and some research institutes, and we are expecting more.

So what is the value proposition of the project? We really want to protect data in use. We know how to protect data in transit, and we know how to protect data at rest in storage, but our focus here is how to protect data in use, at the pod level. This is important: there are projects that are doing this at the node level; we decided to take a different approach and do this at the pod level, and we are leveraging trusted execution environments (TEEs). We really want to simplify how people can use those TEEs, mainly focusing on the cloud-native space. We want to enforce security requirements and allow transparent deployment of unmodified containers, so this should be just lift and shift. We are trying to make this transparent while supporting multiple TEEs, which means you choose the flavor from the vendor that you want to play with and we will try to support that. And of course we want to separate the CSPs from the guest applications as much as we can.

So, the focus of the project. I'm going to skip this; I'm going to talk a little bit about Kata Containers first, and I will get back to this quite soon. I saw that a bunch of people here are familiar with Kata Containers, that's nice. You can see this picture here on the right side (my right, your left): we have traditional containers. Those are isolated via namespaces; we can have seccomp, we can have mandatory access control, and we have capabilities, with each container running in its own namespaces. Which is good, but if there's an attack, you end up on the host Linux kernel, because that is shared. So when Kata Containers was started, several years ago,
we had the idea to come up with a micro-VM and have exactly the same things that you have with traditional containers, but with an additional isolation layer from the micro-VM. So it helps to protect one workload from another, it enhances that protection, and it also protects the host against malicious or untrusted workloads.

So how does Kata Containers actually work? A really quick example: you start your nginx pod, the request goes to the kubelet, the kubelet talks to the CRI engine, and the CRI engine will then be responsible for starting the shim. The shim will create a virtual machine. This virtual machine will boot up; we have a special guest image that is used for Kata Containers, with an agent embedded in it, and this agent is responsible for managing the lifecycle of the containers. We pull the image on the host side and we share it with the guest using virtio-fs, and that's pretty much what we have here. So that's Kata Containers.

Let's go and talk a little bit about how to get from Kata Containers to Confidential Containers. The first thing here is that we want to take advantage of the encrypted memory. By the way, this slide has an issue; if you find the issue, talk to me later on, I'm going to give you a gift. But yeah, with Confidential Containers the first thing we had to do was take advantage of the TEEs: you start the virtual machine, and now this virtual machine is running with encrypted memory.
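The routing from a pod to the Kata shim described above happens through a Kubernetes RuntimeClass. As a minimal sketch (the handler name `kata` is the common convention, but the actual name depends on how the runtime was installed and how the CRI engine is configured):

```yaml
# A RuntimeClass tells the CRI engine (containerd or CRI-O) which
# runtime handler to use for a pod. Here "kata" would map to the Kata
# shim, which creates the micro-VM with the agent inside it.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata   # must match a handler configured in the CRI engine
```

A pod then opts into this runtime simply by setting `runtimeClassName: kata` in its spec.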
No one has access to that memory. But that's only partially good, because it does not ensure that the workload you're running will be secure, or that you can actually trust it, when you're working in an environment where you don't trust the hardware owner. The next step we took was changing where the image is pulled: it's now pulled inside the guest. That is kind of a workaround that we started with; it's used for the different flavors, but it does not scale for CSPs. So Alibaba and Microsoft have been working on a proposal to make this better, in a way that it can actually scale for the CSPs. But yeah, the container image is protected, so the CSP does not have access to it. And then the last part is actually having a way to attest that what you are running is actually what you are expecting. So we have an Attestation Agent that will talk to a Key Broker Service, and only then start running the workload. We have a nice talk later today, after lunch, about attestation, so please attend that; we're going to give links.

With this, just getting back to that previous slide: with Kata Containers, we really want to protect one workload from another, and we want to protect the host from untrusted workloads. With Confidential Containers we are adding one more barrier there: we now do not trust the infrastructure. This image summarizes a little bit what I just explained, and this is what I would like you to take away about how we went from Kata Containers to Confidential Containers.

Yes. Yeah, thank you, Fabiano. So Confidential Containers comes in different flavors, and we can distinguish them by the degree of isolation that they provide, or by what they isolate. The first one is process-based isolation; that's provided by the enclave-cc project.
That's using the SGX technology from Intel. Then there's the second one, VM-based isolation, and that's where Kata Containers comes into play, basically using all the different TEE technologies from the different hardware vendors: there's Secure Execution from IBM, SEV from AMD, there's TDX from Intel, and there's more to come. All of these live under the hood of the Confidential Containers project, and as we said, the goal is to unite them in one project and make all of them usable with the same software, so users can choose what kind of isolation they require.

There's another flavor, and that one is not related to a hardware technology; it's related to how we deploy the containers and pods. There's a sub-project in Confidential Containers that we call peer pods. It's basically making use of the Kata remote hypervisor. So what does remote mean in this context? The Kata virtual machine that is started in this case is not launched inside my worker node, and so not inside my cluster; instead, it's launched outside of the cluster. And how do we do that? The cloud-api-adaptor tool basically talks to the cloud service provider's APIs to create a virtual machine. And what's the connection to Confidential Containers here?
Well, this virtual machine can be a confidential virtual machine using the TEE technology, and it's attested and measured, so you have a trusted base to run container workloads on this machine. The cloud-api-adaptor has support for quite a few cloud providers; we have a few listed here, but it's extendable, more can be added, and it's not that hard. Peer pods are really a complex topic; you can do a lot of things with them, and they deserve their own talk. In fact, there was a really good talk at the OC3 conference in February. We have a link to the recording in the slides, and I can highly encourage you to go and watch that to learn more about this topic.

So now we've talked a lot about technology; let's talk about use cases. Where do you use confidential computing in general, and with that, confidential containers? The first adopters of this technology are regulated industries. They have to keep up with rising demands for regulation; everything becomes more strict, and they have to comply with those new rules and still run their workloads, still run their businesses. And that's where confidential computing and Confidential Containers can help. So it's usually financial services, government, and also healthcare, and a few others. But here we picked one example, a simple one, from healthcare. So imagine a hospital uses an application.
It's running in the cloud. How can the hospital be sure that the application running in the cloud is in a secure environment? Because it is entrusting patient data to this application: test results, personal information, things like this. So if the application is running in a secure environment protected by a TEE, and we have an attestation service that can basically confirm the identity of this workload and of the stack it's running on, then we know exactly that what is running there is what we expect to be running there. And we can do this by basically using attestation services, remote attestation. For how this works, you should go to the talk in the afternoon; Jeremy will explain the details about how attestation works and the different models of remote attestation there are. It will be, I think, at 2 p.m. or 2:30 p.m.

So this model can be extended to third parties. We can include, for example, a diagnostics provider that processes medical image data and then gives back a result, and we can extend the trust by running this application in the same kind of environment, where it can use the same infrastructure to prove its identity and prove that it's running in a secure environment, with encrypted memory and the exact stack that we expect to be running there. So that was one example.

There's another example that we want to briefly mention. Imagine you're running a machine learning workload using Apache Spark, which is a well-known open source project that, when you run it on Kubernetes, deploys a driver pod and executor pods, and those process the data. What we can do to enable this for Confidential Containers is basically one simple change: when you run your Spark job, you can also specify a pod template, and you just need to add one new line for the runtime class name. That means the pods will run not on the normal runc-based container runtime, but using the confidential container runtime. So, and that's how we
change things over from the normal container technology to Confidential Containers, based on Kata Containers, with this runtime class name. In this case it means the user data is always encrypted in storage, with a key provided by the user. When these pods come up, driver and executor pods, they start the attestation and fetch a key if the attestation was successful. So if the software has not been changed, started in a different environment, or been modified in some way, the attestation will succeed and the key will be released to decrypt the data. The pods can do that internally; the key will be provided to them, for example in a volume, and then the workload can decrypt the data before it starts its actual work. This is something that we'll actually demo at the Red Hat booth tomorrow at 2 p.m. You can come and watch that.

So, now we've talked a lot; let's show a quick demo. What this demo will show is how you can deploy the Confidential Containers operator, which is hosted on OperatorHub.io. We will show how you can modify what it deploys and how it deploys things on your cluster, and then we'll just run a simple workload using what we just installed. So let's get started. The prerequisites are: you need a running Kubernetes cluster, and your nodes have to be ready for confidential computing, so you have to have the right hardware and it has to be configured, including the BIOS setup. Let's take a look at the cluster. It's a single-node cluster in this case, running just the default things. And now we're going to deploy our operator, version 0.5, which was just released last week. We just apply this manifest; it will create the namespace and the subscription and so on, installed via OLM. And now we're just waiting until the actual operator deployment is coming up. So far we have only deployed the operator.
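The manifest applied in this step is, in essence, an OLM Subscription. A hedged sketch of what such a manifest might look like is below; the package name `cc-operator`, the channel, and the namespaces are assumptions for illustration, so use the exact install manifest published on OperatorHub.io instead:

```yaml
# Hypothetical sketch of an OLM install of the CoCo operator.
# Field names follow the standard operators.coreos.com Subscription API;
# package/channel/catalog values are illustrative only.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cc-operator           # package name on OperatorHub (assumed)
  namespace: operators
spec:
  channel: alpha              # release channel (assumed)
  name: cc-operator
  source: operatorhubio-catalog
  sourceNamespace: olm
```

OLM then resolves the subscription, pulls the operator bundle, and brings up the operator deployment that the demo waits for here.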
We have not triggered the actual installation of the components, the artifacts. We'll do that in the next step, and for that we create a custom resource. It's called CcRuntime. So let's check if it's actually there after the installation, and yes, it is. Next step: we have to create an instance of this custom resource, and here we'll just take a look at an example. It shows that you can specify a payload image that has the contents, all the artifacts: binaries, config files, whatever you want to deploy. There are also hooks for running a container image pre-install, to prepare your cluster, and post-install, to clean things up, in case you have something custom. So this is an example; the default example is very broad and includes all the different kinds of artifacts that we have for the different technologies. This is where you can go in and customize it for your needs. In the next step, once we've shown all of this, we'll actually apply this manifest, and that will start the actual installation.

So in the next step we're going to do just that: apply. What happens now is that the controller that is watching this custom resource starts the installation process, basically running DaemonSets, the ones that we specified in the custom resource. You see the pre-install DaemonSet has already finished, and the actual install DaemonSet, which is deploying the binaries from the payload image, is running. It will also create the runtime classes. So here we have one for vanilla Kata Containers, and then others for the combinations of hypervisor technology and TEE technology. Now let's take a look at all the artifacts that were deployed to the nodes. We have all kinds of binaries: QEMU, the Kata Containers components, and a lot of configuration files. We also have the kernel images and the initial RAM disks that we need for the virtual machines, and we have a bunch of firmware and BIOS files that we also need. So this is a lot.
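To make the custom resource described above concrete, here is an illustrative sketch of a CcRuntime instance. The field names (`runtimeName`, `payloadImage`, the pre-/post-install hook images) follow what the talk describes but are assumptions; consult the sample CR shipped with the operator for the authoritative schema:

```yaml
# Illustrative CcRuntime sketch -- field names and image references
# are placeholders, not the operator's exact schema.
apiVersion: confidentialcontainers.org/v1beta1
kind: CcRuntime
metadata:
  name: ccruntime-sample
spec:
  runtimeName: kata
  config:
    # Image carrying the artifacts to deploy to the nodes:
    # binaries, kernels, initrds, firmware, config files.
    payloadImage: quay.io/example/coco-payload:v0.5.0   # placeholder
    # Optional hooks, run as containers on each node:
    preInstall:
      image: quay.io/example/pre-install:latest          # placeholder
    postUninstall:
      image: quay.io/example/post-uninstall:latest       # placeholder
```

Applying this instance is what triggers the controller to roll out the pre-install and install DaemonSets shown in the demo.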
But keep in mind, this is for all the different kinds of technologies. That's where you come in and customize it to your needs: what technology are you using, what do you want to deploy, what do you want to run? So in the next step we'll create a simple nginx pod. In this case we are using the runtime class name for Kata Containers. This is the only change that you make to your deployments: you change the runtime class name in your pod templates or deployments. Now we're waiting until the pod comes up, and then, in the final step, we're just going to make sure it's actually running and we can access it. So, now it's up, and now we're going to check if we can access it. And yes, there's the page. So that means the pod is running and everything was working as we expected. And that's the demo.

Okay, so in the demo we've shown that it's easy to deploy the operator; it's basically one command. Then the other step that you have to do is create a custom resource, which you can modify to your needs, but you can also use the default one if you just want to play with it for now. And then we showed how, by using the correct runtime class name, you can run a simple deployment. So, we just released version 0.5, as Fabiano mentioned.
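The one-line change the demo relies on looks like this; `kata` is the vanilla Kata class, while the TEE-specific class names (the combinations of hypervisor and TEE created by the install DaemonSet) vary per cluster, so check `kubectl get runtimeclass` for what is actually available:

```yaml
# A regular pod, with a single added line: runtimeClassName.
# This is the same change used for the Spark driver/executor
# pod template mentioned earlier.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-coco
spec:
  runtimeClassName: kata   # the only difference from a plain pod
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```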
It's a young project, but this is our biggest release so far. It has a bunch of new and large features; we mentioned peer pods, one of the very interesting ones. Basically, we're here to encourage you to actually go ahead and give this a try, play with it. If you are interested, also join our Slack channel in the CNCF workspace, and we have a weekly community meeting where all of you are welcome to join and discuss your use cases, discuss your needs. If you want to contribute, that's also a good first step. So please come and join the community; tell us what you expect, what you need, what your use cases are.

As we already mentioned, there's a really interesting talk today in the afternoon, given by Jeremy Piotrowski from Microsoft. He will go more into the low-level details of how this works, starting from the hardware technology up to how it gets deployed, how attestation works, and the different types of attestation, all these things. So I highly, highly encourage you to go there and watch that talk as well. There have been other great talks at past KubeCon conferences; they all cover the purpose of Confidential Containers and confidential computing, but they look at it from different angles, so it's also worth watching those talks. And I think with this we're at the end of our talk, and hopefully we have some time for questions.

There's one question here: "What are the hardware requirements to use this?" Confidential computing hardware requirements, was that the question? So the question was: what are the hardware requirements? Basically, you need a server with a CPU that has this technology enabled, so an Intel processor with TDX or an AMD processor with SEV; there are different flavors of AMD SEV. And there are other hardware platforms as well, like IBM Z, that you can use. Just to complement that: you can test it.
You can run and develop without having access to a TEE. This demo, for instance, was recorded in the development environment that I have. So if you just want to give it a try, you don't have to have a TEE. If you want to actually use it, you do have to have one of those machines, and availability is coming; it's just not everywhere right now. And maybe to add to that as well: we have a blog post from a colleague of ours describing exactly this, how you can try this out without having the actual hardware. We'll make sure to add the URL to the slides after the talk, so if you're interested, just look it up in the slides and follow the blog post; it's actually a tutorial on how to do it without the hardware.

No more questions? No more questions... oh, there's one. "I was just curious, what's the performance overhead of running an application in a confidential container as opposed to regular containers?" So, I'll let Fabiano add to this, but basically the overhead starts with the overhead you have for Kata Containers, and I'll let Fabiano speak to that. What comes on top of that for the confidential part is not that big. The overhead of using the encrypted memory is not that big; it's noticeable, but when you're talking about Kubernetes, it's going to take a few seconds to schedule your pod anyway. A big overhead that does come in is that those technologies do not support, for instance, hot plug: you cannot hot plug memory, you cannot hot plug CPUs. So once you start a pod, it's going to allocate all the memory that it needs up front, and that's a source of overhead. But time-wise, at runtime, it should not be a showstopper. Were we able to answer your question?
So, the benchmarks should come from the hardware vendors, from the silicon vendors. We don't have those as part of the talk, but contact us and we can point you to the right benchmarks, depending on the TEE that you are using, of course. So in general, the workloads that you would run on these are not things like function-as-a-service, which depend on minimal latency for workloads to come up and then disappear. It's probably going to be more like long-running workloads, where the start-up time doesn't matter as much.

"So, I'm not really knowledgeable in trusted execution environments, but I'm wondering: if you really don't trust the platform on which you are running, can the API that you are accessing actually show you that the security is not fake? That it's really enforced by the CPU, and that what you have access to has not been messed with, if you really don't trust it?" So, it depends on what kind of API; I'll talk about the peer pods, was that your question? "Yeah, I mean, you're executing your container and you trust that your container cannot be accessed by the platform, right?" Sorry? "Okay, yeah, but it's the platform that is providing you the machine to run on, right? So you are declaring, you say, okay, I want this to be secret. But is what you get really secret? You don't get a fake service claiming that, while actually people can look at what's in it?"
Oh, okay. So basically, there's a root of trust showing you, at the hardware level, that the environment you're running in is based on a hardware-level certificate. When you get the attestation report, the request goes into the CPU, which creates this attestation report, and that is based on a key provided by the hardware manufacturer. And from there, as always in security, it's a chain of trust up to the running system. Does that answer your question?

Just to complement: at this point you trust the hardware vendor, right? You are buying hardware from some vendor, and you trust that vendor. What you don't trust is everyone else that is using that hardware. So that's how it works: the vendor provides the key, you trust that, and then everything is built on top of it.

Does someone else have questions? Preferably in the front here, but if you're in the back, that's okay as well. Okay, so, oh, okay. "Thank you. My question would be: how do you manage the fact that pods move from one node to another, when you have encrypted enclave memory?" So the question is: how do we manage workloads moving from one node to another? Basically, as you move nodes, let's say you move the pod to another node, you recreate the virtual machine, and it has to go through the same process that it did on the first node.

"Yeah, so you mentioned that to benefit from these hardware TEEs you need support from the CPU, and you mentioned Intel TDX and AMD SEV and so on. Have you considered support for platforms based on Arm?" So actually, it's up to the hardware vendors to join the project and contribute, and in fact there was, I think, the first PR from Arm the other day, so that's coming. In general, as hardware providers learn about the project and want to join, the typical workflow is that they join one of the community meetings, state what their intent is, and send PRs, and they will usually be integrated into the project.
Yeah. Okay, more questions? Yes, one. "No, it wasn't so much a question, but relating to the benchmarking question earlier: it is something that's come up in the community, but so far nobody's volunteered to help write it. I think we'd like it from the perspective of the community rather than the hardware vendors, so it's not really a race to the bottom or top, whichever way you look at it. So if anyone's interested in coming up with a way of benchmarking what we're talking about here, generically, for the community, to at least as a starting point allow people to try it out in their environment, then all help gratefully received." And this is James, he is one of the folks who works directly on peer pods. Thank you.