Hey, welcome to our talk on confidential computing, subtitled "Attestable Security." This is a collaboration between Edgeless Systems and Liquid Reply. Edgeless Systems has a product called Constellation, which we'll be presenting today. Moritz will tell you a lot more about that, how confidential computing works, and how it applies to Kubernetes security. I'll hand over now and see you later for the demo. Thank you, Zo. I will now give a short introduction to confidential computing and how we can apply it to Kubernetes security. When we think about the threat model for confidential computing, we don't think about the usual path where an attacker gains access via vulnerabilities in your application or its dependencies. Instead, we talk about attacks that come through the infrastructure. In a cloud environment, that could be other tenants that gain access to your application, a cloud admin with direct access, a data center employee with access to the actual hardware, or a foreign government with access to the infrastructure directly or via the supply chain. These are very powerful attackers, a very powerful threat model. So the question is: is that theoretical, or is it a real threat? What we have seen lately are attacks that come through vulnerabilities in the cloud software itself, in the cloud service provider, that allow attackers to gain a vertical privilege escalation and then move horizontally to access other tenants' workloads and data. So the question is: how can we protect against this very powerful threat model? The solution I'd like to present in the next couple of minutes is based on a technology called confidential computing.
That's a relatively new, hardware-based technology that adds the capability of runtime encryption, protecting your workload and data while they're in use, and the capability of remote attestation, verifying the integrity and those capabilities from the outside, from remote, before deploying the sensitive information. The question is: what is the scope? What are the things being runtime-encrypted and attested? With the latest generation of these hardware features, these are VMs. This means you can create completely isolated, runtime-encrypted VMs that are protected against the rest of the infrastructure, the hypervisor, the host OS, and any other VM and tenant inside the cloud stack, and that you can verify. You can obtain a statement from the hardware saying: this is the exact memory of your VM, this is what is running there. Do you trust it? Is that what you expect to be running there? These features are available with the latest generation of AMD and Intel chips, soon also ARM, and RISC-V has something in the pipeline as well. So where is this available? How can I use those confidential VMs? The answer is: pretty much everywhere. Most top cloud providers now have some form of confidential computing services and infrastructure available, so you can go there today and use it. The question is: who is using that? As Mark Russinovich, the Azure CTO, put it, the first movers will be the highly regulated and the paranoid. That makes a lot of sense: think about the financial industry, the public sector, health care. But it doesn't need to stop there; you can think of use cases in almost any industry. Think about cloud migration, taking stuff from your own environment to the cloud. Think of simply your HR application: will you just move it to the cloud? What would compliance and legal say about it?
Think about deploying stuff in untrusted environments, like a manufacturing plant, for example. There, confidential computing is a game changer: now you can deploy stuff always encrypted and always verifiable. And this means compliance. Think about GDPR, HIPAA, and so forth; things become more interesting now that you can move all your workloads to the cloud, because they are always encrypted. You can, for example, protect your IP. If your IP is software and you do usual modern-day software development where you consume services like GitHub Enterprise, GitLab, and so forth, how can you make sure that nobody steals your crown jewels from this cloud environment, from this context? And if you're a SaaS provider or a SaaS customer, confidential computing could be very interesting, since now somebody can provide a service, and you can consume it, without that somebody having access to your data. And you're enabled to consume that service wherever it sits, because you can verify it. So now we know about the use cases. How can we, from an application development point of view, make use of it? How can cloud native security consume these new capabilities? You can roughly split applications of confidential computing technology into three levels. The easiest would be to just protect your keys, and that's where the first kind of applications we saw would simply replace dedicated hardware like HSMs with confidential computing software solutions. I would say this was more or less the first generation. Now, with confidential VMs, we see the trend to protect entire containers, entire applications, with these isolation properties. But with our modern cloud native way of building applications, where we have microservices architectures, where we have the Kubernetes stack, where we orchestrate, update, scale, and back up, we have all of these DevOps tasks, all of these in-betweens.
And the question is: how can we then verify it? How can all of these individual components verify each other, making remote attestation actually usable and keeping those things secure? If you have a single container, you're probably fine. But if you think of modern Kubernetes deployments, this is not enough. We need to go to the right-hand side, where we can create entire confidential deployments in which everything is end-to-end encrypted between those containers and inside those containers at runtime, and where we can orchestrate that in a secure manner within this powerful framework. To illustrate a bit more what I mean with that: we're talking about the right-hand side here, creating confidential deployments in the cloud. Take the fundamental step of moving our Kubernetes nodes into confidential VMs, so that every node is its own confidential VM, each of them its own confidential context. The question is: how do you verify those contexts? How do you verify these VMs? How do you chain them together into a Kubernetes cluster? How do you protect the API server? How do you protect the in-transit traffic between your containers and pods? How do you protect stuff that's written to storage? You would need to take care of all of these things in addition to everything you already do with Kubernetes; now you'd have to take care of the confidential computing stuff as well. This doesn't scale; this doesn't make sense. So this is not enough. What we need is an entire confidential deployment, a confidential cluster with one confidential context. I use this more or less like a buzzword, but you get the idea: one context that you can verify with one concise statement. And then it needs to be taken care of that all of these nodes are verified and chained together, that the API server is verifiable, and that stuff written to the network or to storage is also encrypted.
And then you want all of the usual properties, like scaling, updating, backups, and recovery; all of this needs to be taken care of, always with the powerful attacker model in mind: you can trust nothing that's not verified inside a confidential VM context. So, a lot of things to wrap your head around, and I can't go into detail on all of these aspects. The point is to understand that single confidential VMs are not enough; we need to extend this concept to one big confidential deployment. One project that implements this is Constellation. It's an open source Kubernetes distribution built around the concept of confidential computing. That means it isolates entire Kubernetes clusters from the infrastructure, but inside, it's just regular Kubernetes; the clusters are Kubernetes-certified, and you can use all of your regular tooling and deploy your applications. You have a CLI tool that allows you to create such confidential Kubernetes clusters on any of the cloud providers that already have the confidential VM capabilities available. It's open source; you can find it on GitHub, and it comes with documentation where you can read more about the concepts I just briefly touched on here. We will also see a demo by Salo in a bit, so you will see this whole thing in action; you don't need to understand all of the details of how this works. For now, it's just to understand the scope we're talking about: we want to protect our Kubernetes environment against the infrastructure. We reduce trust to everything that runs inside Kubernetes; everything below it is isolated out and doesn't have access. Somebody could exploit the cloud, but they won't get access to any of the stuff that you deploy inside such a confidential cluster.
And with that, I'd like to give back to Salo, to see how we can bring that together with the regular Kubernetes threat model of protecting the front door and the supply chain and so forth, so that we have a holistic, fully protected and isolated Kubernetes environment that you can trust even in a hostile and potentially malicious environment. All right, thank you very much. Thank you, Moritz. So now that we know what Constellation can secure against, let's look at the part that isn't secured. The classical threat model still applies to your application and your guest OS. If an attacker were to exploit your supply chain or a vulnerability in your application, they can enter your trusted execution environment, and they have lateral movement if they break out of the container. That's always something you need to understand: it's not a blanket safety measure. And I had a bit of a hard time with the demo, because, looking at the CNCF landscape, you could technically just throw darts at it, choose a few tools, and throw together a demo. But I wanted something that could really showcase this confidential computing attestation feature, and we'll look a bit later at what attestation is and how it works. It's a very simple stack in this demo. Before I start: this talk is inspired by a talk I saw last year at CloudNativeSecurityCon in Valencia, by an IBM researcher and a Kyverno maintainer who works at Nirmata. A really cool talk about securing supply chains, signing supply chains, and signing the steps in your supply chain and your pipeline. Really, really good talk; I highly recommend you check it out, and I'll be referencing it throughout. This is the stack: obviously Constellation, then Kyverno and Tekton, which I'll explain in a bit, and then Sigstore, which is used for signing containers but also signing artifacts as a whole. It's a really great open source project that's getting a lot of traction.
A quick look at Kyverno, if you don't know what it is: it's a Kubernetes policy engine. It's used to check, block, and mutate resources as they are created, but also throughout their life cycle. You can create matching rules and changes or blocking behavior based on them. It's really tightly coupled to Kubernetes as a whole, which makes it great, because it can manipulate all kinds of resources, not just primitives but also, in this case, Tekton resources, and whatever else you might use. And it plays well with other YAML tooling like Helm or Argo CD, so it fits in really well; the sky's the limit. It's a really great tool; I highly recommend you check out that talk and also the docs, which have great examples. Tekton is a cloud native CI/CD tool that runs in your cluster. You can think of it as GitHub Actions, but running inside your Kubernetes cluster. Simply put, it's a collection of resources: you've got steps, you've got tasks, and you've got a pipeline. Steps make up tasks, and you combine tasks to make a pipeline. You can run tasks individually, and a little note: each task run is essentially a pod (with each step a container inside it), so you can watch those pods and see how they behave. This is the Constellation quick start. I was very pleasantly surprised by how easy it was to create a cluster. You use the CLI; under the hood, the CLI uses Terraform, but that's abstracted away from you if you want. You can also use raw Terraform and integrate it into your own repos and pipelines. You create your service principal, providing your project ID, zone, and a service account name. This works on all three clouds: Azure and GCP are best supported; AWS doesn't have autoscaling yet. So you create the service principal, which Constellation uses to spin up your cluster.
You generate a config and define how many nodes you want; in this case, I'm just doing one of each. Then you have this constellation init command, which actually bootstraps your cluster. Very quick, depending on the size of your cluster, obviously. You just export the kubeconfig and you can start working with your cluster. Let's get to the demo; there's not too much point in staying on the slides for that long. Hopefully this works; my capture software is a bit dodgy sometimes. These are the commands we saw, and this is roughly what you get: an admin kubeconfig, a Constellation configuration, an ID for your cluster, some secrets, and obviously the Terraform state. This one creates your service principal; this one creates your cluster. It's not recommended that you amend or change these, but if you reach out to the Constellation folks at Edgeless Systems, the whole team is really on the ball and experts in their field, so definitely reach out to them; I'm sure they'd be happy to talk to you. Let's get into it. This whole talk is about attestation, about checking, right? It's useless if you can just spin up clusters but not prove that you're in a trusted execution environment, and Constellation handily provides this great verify command. You can run it without any flags if you want, and it'll use the files in your repo, in this case the config and the ID, and check your cluster: is it matching what we expect? Is it confidential? Is it in the enclave? You can also provide a cluster ID if you're running multiple clusters, but in this case we just have one, so we don't need to. A little note on the verification process: it uses these measurements. This is your Constellation config, and in it you've got measurements. Measurements are the collection of code and configuration that make up your cluster, in hashed format, and these are what is being checked against.
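The CLI flow described above can be sketched roughly as follows. This is a sketch only: subcommand names and flags have shifted between Constellation versions, and the project ID, zone, and account name are placeholders, so check the current docs before copying anything.

```shell
# Create the service account / "service principal" Constellation will use
# (values below are placeholders, not the demo's real project):
constellation iam create gcp \
  --projectID my-project --zone europe-west3-b \
  --serviceAccountID constellation-demo

# Generate the cluster config; node counts are then set inside it:
constellation config generate gcp   # writes constellation-conf.yaml

# Provision the confidential VMs, then bootstrap Kubernetes:
constellation create --control-plane-nodes 1 --worker-nodes 1
constellation init                  # prints the cluster ID, writes secrets

# Work with the cluster like any other:
export KUBECONFIG="$PWD/constellation-admin.conf"
kubectl get nodes

# Attest the cluster against the measurements in constellation-conf.yaml:
constellation verify
```

The verify step at the end is the part the rest of the demo builds on: it compares the cluster's attestation report against the expected measurements in the local config.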
That's what this verify command does, and that's why this file is required for the command; if you weren't running it in this directory, it would fail, or you'd have to pass the file in somehow. There's a verification service running in your cluster, and we can look at that here. It's not exposed at the moment, but through the CLI we can access it and ask it to verify our measurements. The cluster handily comes with the Cilium service mesh and Hubble, so it's a good base to start on. Moving right along, let's clear that out. The idea was that I didn't want to be verifying myself, as an outside verifier. That's often good, but sometimes you want to verify from inside the cluster, and that's what I wanted to achieve here: we want to do this in our pipeline, to make sure the pipeline itself is aware of its surroundings and can attest to the fact that it's in a good place. So again, we know about this service. I'm providing the config and the ID as secrets through the External Secrets Operator: they're saved in my secrets manager on Google, injected into the cluster that way, and then used by our pipeline. We've also got a Docker registry and registry credentials; these are environment variables for our Tekton pipelines, in case we push images from the pipeline as we build OCI containers and artifacts. Let's have a look at the pipeline. The pipeline is fairly simple; it's made up of two tasks. Task one is our Constellation CLI running our verify command against a local endpoint. We provide our cluster ID, and of course, because every step is a container, we can attach a volume with our Constellation config. It's really handy that every step is a container and you can run custom images and whatever you need. So this is just the Constellation CLI.
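The External Secrets setup mentioned here might look roughly like this; the store name, secret keys, and namespace are all made up for illustration, and the real demo's manifests may differ:

```yaml
# Sketch: sync the Constellation config and cluster ID from GCP Secret
# Manager into a Kubernetes Secret the pipeline can mount.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: constellation-verify-files      # hypothetical name
  namespace: tekton-demo                # hypothetical namespace
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: gcp-secret-manager            # a ClusterSecretStore backed by GCP
    kind: ClusterSecretStore
  target:
    name: constellation-verify-files    # the Secret the TaskRun mounts
  data:
    - secretKey: constellation-conf.yaml
      remoteRef:
        key: constellation-conf         # entry in Secret Manager
    - secretKey: constellation-id.json
      remoteRef:
        key: constellation-id
```

The point of the indirection is that neither the config nor the cluster ID has to live in the repo; the operator keeps the in-cluster Secret in sync with the cloud secret manager.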
I've got a Dockerfile here that builds it. Let's have a look at the pipeline. We've got our task here, and then a second task that just outputs some strings, so it isn't just a one-task pipeline. We've got our pipeline definition and then our pipeline run; these are two different things. The pipeline can live in your cluster indefinitely; the run is one instance, one flow-through, of that pipeline. You define the tasks, reference them in the pipeline, and then your run references the pipeline. Let's have a look; there are no pods running in our namespace yet, but we've got a handy Makefile, or we can just run the pipeline directly. That's the wrong folder... there, create. And it creates our pipeline run. As I said, each task runs as a pod, so we can actually see our pods initializing and running our steps. Let's take a look at this in the dashboard as well; hopefully it won't freeze up on me today. There we go, nice. This is our pipeline run: four seconds total runtime, started just now. We see this familiar okay, the thumbs-up from the Constellation CLI, and we have our second build step that outputs some strings, and that's it. A simple demo; attestation isn't a very flashy thing, but it is such a critical and powerful thing for auditors and for verifiability in general, and that's why we have this. Tekton won't run a subsequent task if the previous one failed. Of course there are if-statements and you can work around that, but in this case, our pipeline wouldn't have run on if this wasn't the proper exit code. And that's the pipeline run. We now have a pipeline that only runs when it is in a trusted execution environment, and I think that's a very powerful idea; that's what I wanted to show with this first part. Let's go back to VS Code, hopefully it hasn't frozen on me. Nice, okay. We can see our pods have completed, our pipeline has run. Nice. Moving on.
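The two-task pipeline just walked through has roughly this shape. The image name, secret name, and task names are assumptions for illustration, not the demo's exact code; the key idea is that the verify task gates everything downstream.

```yaml
# Task 1: run `constellation verify` inside the cluster, with the config
# and cluster ID mounted from a Secret volume.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: constellation-verify
spec:
  steps:
    - name: verify
      # hypothetical custom image containing just the Constellation CLI
      image: ghcr.io/example/constellation-cli:latest
      workingDir: /constellation
      script: |
        constellation verify
      volumeMounts:
        - name: constellation-files
          mountPath: /constellation
  volumes:
    - name: constellation-files
      secret:
        secretName: constellation-verify-files  # hypothetical Secret name
---
# The pipeline: task 2 only runs if the attestation in task 1 succeeded.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: attested-pipeline
spec:
  tasks:
    - name: verify-enclave
      taskRef:
        name: constellation-verify
    - name: build
      runAfter: ["verify-enclave"]   # Tekton skips this if verify failed
      taskRef:
        name: echo-build             # the second task that outputs strings
```

A PipelineRun referencing attested-pipeline then produces the pods seen in the dashboard, one per task.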
This is the second part: Tekton Chains. Tekton Chains is an add-on to Tekton itself; it doesn't come with the base pipeline functionality, so you have to install the operator yourself. Here we go, we've got this controller. What Tekton Chains does is watch your pipeline and task runs, collect metadata about them, and save that metadata, so you can later extract it, look at it, and make sure that this task run happened at this time and that these were the tasks or the pipeline that ran. A really handy thing. The demo I mentioned earlier, which inspired this one, uses pipeline bundles, which is a Tekton experimental feature; they seem to be pushing this Chains approach a bit harder, so I decided to go with that instead. We can look at our previous task run using the CLI; we don't have to use the dashboard. We can close this for now and look at it here. We can also use commands to extract our UID: we take the UID from that output, then describe the task run and dump out a base64-encoded signature of it. Copy this, yeah. So we get this signature file here; it's just a base64-encoded signature of our run. And what we want to do now is check it: was this run by the right person? Was it run in the right cluster? Were the correct signing secrets used? This is where cosign comes in handy. We have this SLSA provenance; provenance, and I have the definition here because it's a nice way of putting it, is a claim that some entity or builder produced one or more software artifacts by executing some recipe. That's exactly what we're doing, and this is what cosign does for us in this case. That's why it's part of this demo and part of the stack.
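The extract-and-check flow described here follows the shape of the Tekton Chains getting-started tutorial; the annotation names below come from that tutorial, and exact cosign flags differ between Chains and cosign versions, so treat this as a sketch:

```shell
# Grab the UID of the most recent TaskRun:
export TASKRUN_UID=$(tkn taskrun describe --last -o jsonpath='{.metadata.uid}')

# Chains stores the payload and its signature as base64-encoded
# annotations on the TaskRun object itself:
tkn taskrun describe --last \
  -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/payload-taskrun-$TASKRUN_UID}" \
  | base64 -d > payload
tkn taskrun describe --last \
  -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/signature-taskrun-$TASKRUN_UID}" \
  | base64 -d > signature

# Check the SLSA provenance against the cosign public key generated
# when Chains was set up:
cosign verify-blob --key cosign.pub --signature signature payload
```

If the payload was signed with the matching private key, cosign prints its verified okay, which is the thumbs-up shown in the demo.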
And I highly recommend you check out the docs, because software signing is going to be abundant; it's rapidly increasing. So we get the thumbs-up here, we get the verified okay, meaning the signature matched, and we can be happy. That's the second part. Now we have a pipeline that only runs when it's in a trusted execution environment, and we've also signed that pipeline and checked that it was valid end to end; so, double-checked. Just to reference that talk again: what they do in their talk is go into each step, each task, and then the pipeline as a whole, sign all of those, and verify them with cosign. Using Kyverno and the image extractor feature, which was released in 1.7, you can check public keys and hashes of not only container images but any sort of artifact. That's what they're doing: provable verifiability of each of the steps and of the pipeline as the sum of its parts. Really, really cool; I just wanted to show a fraction of that here, but again, check out the talk. So, Kyverno acrobatics, as I call it, because you can do anything with this thing. We have a cluster policy; there are policies that apply to namespaces only, and cluster policies that are cluster-wide. This one's called require-vulnscan, for vulnerability scan. What it does is use this image extractor feature to go into our task run. Let's see if we can look at a task run here. Yeah, we can see two tasks: our check and then our build. The policy goes in there, checks the image and the image name, goes into the reference, and checks that the image was actually signed by the public key that we defined. That way we again have attested verifiability.
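A minimal sketch of such a policy, assuming the Kyverno custom image extractor syntax and placeholder names; the demo's actual policy and key are not shown in the talk:

```yaml
# Kyverno cluster policy: verify the signature of every image referenced
# inside a Tekton TaskRun's inline task spec.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-taskrun-images        # hypothetical name
spec:
  validationFailureAction: Enforce  # block, don't just audit
  webhookTimeoutSeconds: 30
  rules:
    - name: verify-signature
      match:
        any:
          - resources:
              kinds: [TaskRun]
      # imageExtractors (Kyverno >= 1.7) tell Kyverno where image
      # references live inside a custom resource:
      imageExtractors:
        TaskRun:
          - path: /spec/taskSpec/steps/*/image
      verifyImages:
        - imageReferences: ["*"]
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...your cosign public key...
                      -----END PUBLIC KEY-----
```

Any TaskRun whose step images weren't signed with the matching cosign key is then rejected at admission time.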
Sorry, the capture software froze on me there, but we were talking about the attestations. The other thing you can do here with Kyverno, in the same definition, is attest to the vulnerability score or the impact score, using the Grype vulnerability scanning tool, and block on that. This would be an enforced rule and would block any container it applies to if there are vulnerabilities in there: the count of vulnerabilities with an impact score greater than eight has to be zero. Really cool, and we're still using Kyverno; we're obviously also using Grype here and a bit of public key infrastructure, but Kyverno remains the only policy tool in our stack. So really, really powerful stuff. I recommend you check out the talk as well as the docs; really good examples. And you can do a lot with this: you apply Pod Security Standards, you apply good best practices, and then you apply those policies to your cluster, and you can always have this peace of mind that this is being enforced everywhere. Always make sure, obviously, to check not only regular containers but also init containers and ephemeral containers; the examples cover this really well. So that was the second part, or, well, almost the last part. I'll head back to the slides and then we'll take a look at those. Did it freeze on me? Yes, it did. I'll be right back. There we go, we're back. So this is a diagram from the previous talk I mentioned. It shows how on every level you can perform these signing checks with cosign: you can check individual tasks, steps, and also containers. I'd also like to highlight the part about Kubernetes misconfigurations: maybe someone's running in the default namespace. You can block that with Kyverno. You can require a security context. You can apply Pod Security Standards as a whole, automatically, through Kyverno.
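The vulnerability gate described above could look roughly like this. The predicate type and the JMESPath over the Grype report are assumptions about how the scan was attested with cosign, so the exact keys will depend on your setup:

```yaml
# Kyverno rule: require a signed Grype scan attestation and block if any
# finding has an impact score above the threshold.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-vulnscan
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-vulnerabilities
      match:
        any:
          - resources:
              kinds: [Pod]
      verifyImages:
        - imageReferences: ["*"]
          attestations:
            # assumed predicate type for a cosign vulnerability attestation:
            - predicateType: https://cosign.sigstore.dev/attestation/vuln/v1
              conditions:
                - all:
                    # the count of findings scoring above 8 must be zero
                    # (JMESPath path into the Grype report is assumed):
                    - key: "{{ scanner.result.matches[?vulnerability.cvss[0].metrics.impactScore > `8`] | length(@) }}"
                      operator: Equals
                      value: 0
```

The same policy object can carry both the signature rule and this attestation rule, which is why Kyverno stays the only policy tool in the stack.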
In their talk, they also create persistent volumes upon creation of the namespace. So, a really powerful tool; that's just one tool, and you can do all these things. No need to blow up your cluster with tooling immediately; I do recommend you check out all the tools and use what's right for your use case, but you can keep it simple and still be very effective. Vulnerability scans all the way down. That's all I wanted to show with this. There we go. This is the getting-started tutorial that we more or less just ran through with the Tekton Chains example; the diagram here is a bit redundant, but: we have this cosign key pair that we generated in the beginning (I had pre-generated it). We used Tekton Chains to look at our pipeline run, shown here in green. Then we get the snapshot, we get all the metadata about it, and then we attest it, we check it, we verify the signature. You overlay your signatures and check that they correspond to what we need. That's it. Thank you so much for your attention and for joining this stream. Looking forward to all the other talks, and we're open for questions now. Check us out on LinkedIn. This is a little part I forgot, but I think it just summarizes what I was trying to say earlier: you can verify everything, and now with Constellation, we can also verify that our pipeline is actually in a trusted execution environment running on confidential VMs; we can attest to that as well. But yeah, check out Constellation, find us on LinkedIn. We look forward to talking to you, and have a nice day.