Okay, welcome everyone. Thank you for coming to my talk. This is a really exciting and also stressful time for me, because it's a huge auditorium, and shout out to the people in the cheap seats, thank you so much for attending. My name is Adolfo García Veytia. I am a staff software engineer with Chainguard; we are a new startup working in the supply chain security space. I am also one of the tech leads of Kubernetes SIG Release. SIG Release is the group inside of Kubernetes that takes care of releasing all of the software artifacts and compiling those binaries that you use and love, probably every day. I am also a huge Star Wars fan, and I like to ride my bike all over the world. So with that I would like to start.

So I would like to talk to you about human stories, the power they have to connect us, and about the things that we... just kidding. I figured that since great speakers use stories all the time, and religions convey their messages with stories, I might as well do that. So I am going to quote from the scripture. Here we go.

Story number one: bad motivator. Remember that time when you wanted to be an intergalactic hero and travel all the way out to the Outer Rim with your pet robot, and you finally convinced your uncle to go get that one great model that you wanted, only to arrive at your house and have it malfunction and explode in your face? And now, of course, the guy who sold it to you is laughing at you and making you cry, and you're all desperate. Well, yeah, that's a big problem. What was the problem right there? You just accepted a bad component into what you bought. If the Jawas had been responsible supply chain citizens, they probably would have produced an SBOM describing everything inside of that R5 unit, just to ensure that you at least were aware of what you were getting.

So, story number two: career goals.
So it finally happened, right? That job that you really wanted has become a reality, and you are inside the organization that you always wanted. And you're not going to just stay where you are; actually, you are not that guy, you are that one, but you're going to grow, you're going to go places. Since you were a kid you always followed those role models that you wanted to become, and your organization always opens up new opportunities for people to grow. So you're actually pretty fine.

So finally, that project that you really wanted to work on happens. It shows up at your door, or, well, maybe it just gets ingested by the tractor beam into the space station. Your team is already working, management, of course, is doing all of the important stuff, and you are in charge of the most critical piece of the project. And you, of course, understand what's in your hands. So once you understand your part in the organization, you get a strange visit. And of course, your team is fooled by those guys, who convince them to let those people in, and all hell breaks loose inside. And not only that, but they do so by bad acting, which, well, it's those two guys that always get the girls and you don't, which makes it even worse. So your boss is now really angry at you, and the whole project went down because of you.

So what happened there? You accepted into your project a falsified something, be it a dependency, a module, an artifact, a binary that was compromised. So what's the solution to that? Sign your things. Whenever you are accepting or producing things for others, you might as well sign them and verify them. If you ingest some dependency, you have to make sure that it's coming from where it's supposed to be coming from. There are now tools, like Sigstore, that will help you build that signing architecture right, and also provide you with the proper tools to make it easy.

Final story: bad carbonite results.
So, since forever you wanted to make a really nice piece for your living room: your worst enemy frozen in carbonite. You finally raise the money, rent a facility, invite all of your friends to watch, set the thing going... and then you get this guy. For those who are not Star Wars fans, this is not what's supposed to happen; that's George Lucas instead of Han Solo in the carbonite. And then you're all confused, you don't understand what's happening. How did that guy get in there? Did someone put it in there? Did I make a mistake? You simply don't know. And the worst part, of course, is that your boss is now on top of you, and he can get, well, like that sometimes. So why couldn't you tell what went on down there? Because there was no provenance information.

So you need a formal record that you can query to understand what's happening inside of your build process. For that, SLSA can help, because it helps you structure things and makes you understand where you're doing things right, where you're doing things wrong, and if you're missing one of the pieces. Also, the in-toto project helps you tie all of that information about your build process together. The idea is: if you generate a provenance attestation for each of the steps in your build process, you get a nice record of what went in, what went out, and what the parameters were that you used to run that step. And if you sign everything, at the end you can easily detect where any of those things went wrong.

Okay, so enough of this stupidity, sorry. On to the next one. First, I want to give a high-level overview of how the Kubernetes projects get released. This is not only for Kubernetes itself, but also for some of the tinier Kubernetes projects. So things like Cluster API, ko — all of those projects have, in theory, or at least available to them, what I'm going to show next to release their artifacts.
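The per-step idea above — attest each step, then walk the chain to find where things broke — can be sketched in a few lines. This is a toy stand-in for real in-toto attestations, with hypothetical step names and file contents; only the shape of the check matters here.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def attest_step(name, inputs, outputs, parameters):
    """Record one build step: what went in, what came out, and the
    parameters used to run it. A toy stand-in for an in-toto attestation."""
    return {
        "step": name,
        "materials": {n: digest(b) for n, b in inputs.items()},
        "products": {n: digest(b) for n, b in outputs.items()},
        "parameters": parameters,
    }

def find_break(attestations):
    """Walk consecutive steps and flag the first one whose inputs don't
    match the previous step's recorded outputs."""
    for prev, curr in zip(attestations, attestations[1:]):
        for name, d in curr["materials"].items():
            if prev["products"].get(name, d) != d:
                return curr["step"]
    return None

src, binary = b"package main", b"compiled-bits"
steps = [
    attest_step("compile", {"main.go": src}, {"kubectl": binary}, {"GOOS": "linux"}),
    attest_step("package", {"kubectl": binary}, {"kubernetes.tar.gz": b"tarball"}, {}),
]
assert find_break(steps) is None  # the chain is intact

# If someone swaps the binary between the two steps, the records show where:
steps[1]["materials"]["kubectl"] = digest(b"tampered")
assert find_break(steps) == "package"
```

In the real thing the attestations are signed, so an attacker can't quietly rewrite the records to hide the mismatch.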
So a project in Kubernetes usually gets its own project and staging bucket. I'm not going to go into details, but the idea is that they build their artifacts, stage them in their bucket and in their registries, and from there we promote them. This is an actual picture of the image promoter for Kubernetes; trust me, I'm the tech lead, I know. This promoter copies everything to the final release buckets, where the artifacts finally rest for everybody to pull and consume.

So that's the high-level overview, and now I'm going to go a little bit deeper into how Kubernetes itself gets released — the actual Kubernetes images and binaries. We start with a build. We build the source code at a certain build point, and we generate the binaries and the images, and then we push those to staging, like we showed before. But to make this a little bit more hardened and secure, we pushed more steps in between building and pushing.

The first one: we build an SBOM. Kubernetes got its first SBOM about a year and a half or two years ago, detailing everything that you need to know. This addresses the problem of shipping Kubernetes with a bad motivator inside: we want to know right away, as soon as possible. At some point, you can even imagine stopping the build if we detect one of those components to be bad or compromised.

The second one: we attest to the provenance of the artifacts that we are generating. First, we record the source code: we clone kubernetes/kubernetes, we establish the build point at which we are going to be building, and all of that goes into the attestation. And we record the parameters for that build, which apply to all of the subjects that we are attesting to. And finally, we sign the images during staging. Relating to the stories before, this is so that you can be sure you're getting a stormtrooper and not something else by surprise.
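The attestation described above — source repository, build point, parameters, and the digests of every subject — can be sketched as an in-toto Statement carrying a SLSA-style provenance predicate. The field names here follow the in-toto/SLSA conventions loosely, and the repository URI, commit, and artifact bytes are made up for illustration; check the actual specs before relying on this layout.

```python
import hashlib
import json

def provenance_statement(repo, build_point, parameters, artifacts):
    """Sketch of an in-toto Statement with a SLSA-style provenance predicate."""
    return {
        "_type": "https://in-toto.io/Statement/v0.1",
        "predicateType": "https://slsa.dev/provenance/v0.2",
        # The subjects: every artifact the build produced, identified by digest.
        "subject": [
            {"name": n, "digest": {"sha256": hashlib.sha256(b).hexdigest()}}
            for n, b in artifacts.items()
        ],
        "predicate": {
            # The source we cloned and the exact build point.
            "materials": [{"uri": repo, "digest": {"sha1": build_point}}],
            # The build parameters, which apply to all subjects above.
            "invocation": {"parameters": parameters},
        },
    }

stmt = provenance_statement(
    "git+https://github.com/kubernetes/kubernetes",
    "e4d4e1ab",  # hypothetical commit
    {"GOOS": "linux", "GOARCH": "amd64"},
    {"kube-apiserver": b"binary-bytes"},
)
print(json.dumps(stmt, indent=2))
```

A verifier can later recompute the digest of the artifact it holds and look it up among the subjects before trusting anything the predicate claims.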
Now, when we started all of this work, we were just testing the water to see how it went, and there are problems with this approach. I don't know if anybody can spot the problem that's going on inside of this staging process. No? The problem is this: we are generating the provenance attestations inside of the staging process. You should not do that. If you generate a provenance attestation from inside your build process, anyone who compromises that process can potentially falsify the attestation. So you may get an attestation that says, okay, I built this from kubernetes/kubernetes at whatever build point, while in reality it could have been pulled from another repository somewhere, a clone of a hacked Kubernetes tree. We did this in the very early days of SLSA 0.1, so this is one thing that we have to change. I just wanted to explain it and let everybody know why it's structured the way it is right now; it's going to be changing soon.

So after we have everything built, we push everything to the staging repositories, and we sign those artifacts. The next step after that is that we start the promotion process. Kubernetes itself doesn't yet use the file promoter, we just use the image promoter. So we have two processes: one for promoting images in staging and publishing them to the production registries, and another process to promote files. We haven't yet finished the file part, but it's getting there.

When we kick off the artifact promotion, we check those signatures. Why? Because if we're promoting images from a staging process, we want to make absolutely sure that we are not promoting images pushed by someone or something else from elsewhere. So we check those signatures to ensure that we are dealing with the real thing that we're expecting.
The idea is: don't let the artifact promoter trust staging. Just treat it as a process somewhere else. Zero trust, I think that would be the catchphrase here. After that, we copy all of the images to their production registries. Once they're there, we take the signatures from the original staging run, transfer those, and append them to the images in the production registry. And then, finally, we sign them again.

Why do we sign them again? The idea is that once we promote these images, they end up in the production registry with two signatures attached to them. The first one is the signature from the release engineering team, which is us, the ones who built the thing. And the second one is the Kubernetes-wide signature. So if you verify the images that we just pushed to production, you can be sure that, one, they were built by the release engineering team, and two, that the Kubernetes organization has officially released those artifacts. And finally, we take all of those images and mirror them to the regions where you can finally pull them from k8s.gcr.io.

The other part of the release process is the release stage itself. This one is not yet as advanced. What we do here is check the artifact provenance of the files that we are promoting. Why do we check them when we kick off the release? It's this: if you think about it, the images are already published at this point, and this process here is kicked off by an external process that we cannot trust. So we check the artifacts that the invocation is commanding us to release. If I'm going to release this list of artifacts — and we have the complete list, because it's in the SBOM — we check the provenance, and the provenance tells us: okay, that list of artifacts you're about to release actually comes from staging. So that's the first one.
The second is: we check the image signatures. The images at this point are already published; even if we haven't yet sent those emails you may be familiar with — "Kubernetes 1.20-whatever is released" — you can already pull the images at that point. So before we call the release complete, we check those signatures, because in the release we're going to be publishing a set of artifacts that also reference those images. We publish release notes, documentation, we publish the SBOMs, so we need to ensure that the digests of the images we're referencing are, in fact, what they're supposed to be.

Finally — well, not finally, but almost — we attest to the provenance. We generate one provenance attestation during staging, and that lets us know what went into staging, what process we ran to transform the inputs, and the list of outputs that we got. And we produce another provenance attestation right here, and that lets us know, if we have to go back in the future, what went on during a release in the past. Again, same caveat as with the attestation produced during staging: this should not be produced in there, because you cannot trust the parameters that went into the attestation. So if you're using this architecture as a reference for your project or your builds in your company, keep in mind that attestations should be generated outside of the build process, in whatever way you can manage. There are some patterns emerging now which are useful, and I'm happy to chat about them with anyone.

And we finally copy those files to the production bucket. At this stage, relevant to this talk, we also push to the production bucket the attestations and the SBOMs. So at that point we have a complete release, where we have all of the artifacts and all of the dependencies described fully in the SBOM, and we have the process covered.
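The release-time check described above — an untrusted invocation hands over a list of artifacts, and we only proceed if each one matches a subject recorded in the staging provenance — can be sketched like this. The dictionary layout is illustrative, not the real attestation schema.

```python
def check_release_list(release_artifacts, staging_provenance):
    """Return the artifacts whose digest is missing from, or disagrees with,
    the subjects recorded in the staging provenance attestation."""
    subjects = staging_provenance["subjects"]  # name -> digest; illustrative layout
    return [name for name, digest in release_artifacts.items()
            if subjects.get(name) != digest]

prov = {"subjects": {"kubernetes.tar.gz": "sha256:abc", "kubectl": "sha256:def"}}

# The list matches what staging actually produced: the release may proceed.
assert check_release_list({"kubectl": "sha256:def"}, prov) == []

# An artifact the untrusted invocation asks us to publish, but whose digest
# staging never produced, is flagged before anything goes out.
assert check_release_list({"kubectl": "sha256:EVIL"}, prov) == ["kubectl"]
```

An empty result means every artifact in the release list traces back to staging; anything else stops the release for investigation.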
If you think about the description of the artifacts in a vertical way, that would be the SBOM; and in an orthogonal way, you can think about the process documented from end to end with the attestations. Except there's one piece missing: we are generating an attestation for this process, and one for this process, and if you think about it, the image promoter is missing its attestations. That's a project that will be happening soon. I hope that by the next QCon we can show off a complete picture of SLSA attestations from end to end. We're almost there; we have all the tooling and libraries already developed, so they're going to be there.

So, well, yeah, it's a lot to consider. I'm going to leave this up a little bit so that you can think and digest it. But if we're going to have main takeaways from this process, I'd say there are three things to remember.

First: observe and document your builds with SLSA provenance attestations. You want to be sure that you can go back and check what happened. What was the actual source code that I pulled? What were the invocations of the commands that I ran? And the more granularly you can split those attestations, and the more detail you can get into them, the better. At some point — and this is something we've been seeing at Chainguard, working and talking with customers — you can certainly consider a future where clusters will start demanding and enforcing policies based on: are you giving me a provenance attestation for this workload? No? Then I'll reject it. Same thing with SBOMs, same thing with everything.

The next one: ensure that your artifacts are authentic and immutable by signing and verifying them. And it goes both ways: you as a player in the chain, and you as a consumer of others. You need to make sure that those artifacts cannot be changed from the moment someone released them to the moment that you're consuming them.
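The cluster policy hinted at above — reject any workload that doesn't ship a provenance attestation (and, by extension, an SBOM) — could look something like this toy admission check. The workload shape and attestation keys are invented for illustration; a real cluster would enforce this with an admission controller verifying signed attestations.

```python
def admit(workload):
    """Toy admission policy: require a provenance attestation and an SBOM
    before letting a workload into the cluster. Shapes are illustrative."""
    attestations = workload.get("attestations", {})
    return "provenance" in attestations and "sbom" in attestations

# A workload that ships both attestations is admitted:
assert admit({"image": "registry.example/app@sha256:111",
              "attestations": {"provenance": {"builder": "ci"}, "sbom": {}}})

# One that ships nothing is rejected outright:
assert not admit({"image": "registry.example/app@sha256:222"})
```

The point is that the burden shifts to the producer: no attestation, no deployment.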
And if you're a nice player in the whole supply chain, you should be ensuring the same thing for others. And finally: document all of your components by consuming and generating SBOMs for all of your releases. I mean, I hate to pile on the same example all the time, but examples sometimes are useful. Log4j — how many people are using Log4j, for example? Well, if we had SBOMs for the whole world, it would be fairly easy to detect which workloads have a vulnerable Log4j version in them.

And finally, I would like to leave you with a thank you, and let you know that most of the tooling that we are building in SIG Release to make all of this happen, we are trying to make as general purpose as possible. So we have a tool for SBOMs, and the SBOM tool can generate part of the provenance attestations from your binaries. As we move forward, most of those tools are going to be available to you. And the final one is... what was my final point? Oh, okay. I think that's it. I'm happy to open it up for questions if you have one, and otherwise, thank you.

First of all, thank you, it was an excellent presentation, I really enjoyed it a lot. And my question is: which format do you use to generate the SBOMs? I saw the SPDX logo, but I was under the impression that that was for licenses more than actual software versions. And how do you feed it to scanners or anything that can consume that format?

Yeah, okay. So the first challenge was issuing the SBOMs, and we use SPDX. Kubernetes is a project of the CNCF, which in turn is part of the Linux Foundation, and SPDX is the SBOM standard from the Linux Foundation, so we use that. There have been some requests from people for us to also support CycloneDX, and I mean, we're open to it, but as you can see, we still have...
I mean, SIG Release sounds like a big deal, but in reality it's a really thin team. We're always looking for contributors; if you want to join any of this, please — that was my final message — please come and help. We're really stretched thin. I would love to work on a CycloneDX version of the SBOMs, but we simply don't have the volunteers or the time. So if anyone wants to work on that, I'm happy to have them over; I would certainly love to have our tool produce both standards. But yeah, currently, that's where it stands.

And the second part is: we produced our first SBOMs and then tried to fit them into some of the security tools, and we've been making improvements to the output as time has gone on. For example, we saw that some of the security tools key on the purl of the packages, so we added that. And there's another PR coming to add the CPEs too. That's the current status.

Okay, two questions. What's your primary threat model, and have you caught any attacks in the wild? — Sorry, can you repeat that? — What is your primary threat model? And have you caught any real attacks on the build process? — Okay. The threat model: that work has been done in SIG Security, which I'm not a part of; they did the threat model. And on the second part, have we caught anything? Not yet, not on our side. — Thanks.

Also a big Star Wars fan here. You mentioned the attestation process, and you said there might be some thoughts or patterns that are emerging. Could you talk a little bit more about that? — Yeah. So there are mainly two patterns that I've seen for attesting to the build process. One is having a binary calling another binary, kicking off a process and observing it. So you have a binary that calls the process, then the process runs, has some outputs, and the same binary sees what it output and records that.
And so it knows what it pulled, the way it ran it, and what came out. So that's one. And the other is calling some sort of webhook: when you trigger your build process, you call a webhook to start recording, and when it finishes, you record the outputs, all from the outside. So the idea is that some external entity observes the build from outside. Those are the main two that I know of.

How do you protect the keys, the private keys that are being used for signing? I don't know if it's related to the threat modeling, probably, but I would like to know: how is the governance of that? — Yeah, excellent question. I don't know if you saw the Sigstore logo in the talk; we use Sigstore for signing. Sigstore is an awesome project — you can go and visit our booth down in the expo hall — and it allows you to sign things with ephemeral keys. We call this process keyless; it's not really keyless, we just use ephemeral certificates and keys. The way it works is: you generate a certificate that lasts about five minutes or so to sign your things, and then that certificate expires, and the keys are now useless because the certificate has expired. And you can always verify the images, because the certificate and its public key are stored in a public transparency log. So there's no key management. Also, the way you add identity to the process is that it works with OIDC. The whole release process for Kubernetes runs in Google Cloud Build, so we use credentials from service accounts to give it the identity of the signer.

Okay, so probably I didn't get the last part, because if you are creating ephemeral keys, you need to somehow sign those as well in order for them to be trusted, right? I mean, you are creating ephemeral keys in order to sign those artifacts, but those ephemeral keys that you have been using for those processes should be verified as well, that they're coming from the right source. — Yeah. So the question is how do I...
How do you verify that the ephemeral keys you're creating come from a trusted source? Yeah. So all of the signatures and operations that happen with the Sigstore tooling get stored in the transparency log. So when you receive an image, you get the signature, and the signature has the metadata necessary to go back to the transparency log and see who gave the identity to the signer, and the certificate to verify the artifacts. — Okay, thank you.

I'm really excited to see this work, because it aligns with a lot of the discussions that we've had within the CNCF security TAG. So my question for you is: can you describe what kind of constraints or considerations you've put in place for the actual build environment itself? What are those images, what do they contain, whether or not you have additional packages built into them, and what does the signing look like? — So the Kubernetes release consists of about 50 images, for all architectures and all of the Kubernetes components: one for the API server, one for the proxy, all of those. We also produce the binaries for kubectl, kubeadm — it's a list of 20 or so binaries for all architectures, if I remember correctly.

And the environment has been evolving since a long time ago. When we started building all of this, we adapted the whole process to the actual environment and also to the human processes around the release. The first big step was ensuring that the way the build process was structured inside of Google Cloud Build allowed us to do this. And the second was making sure that no other risks were there; we patched some that we found before, and then we started implementing all of this. And what was the second part of the question? — What are you using in your build hosts, those build environments themselves? What constraints do you have on them?
Such as, obviously, you don't want anybody SSH-ing into those boxes while you're doing the build. So what do you have installed? What do you purposefully not have installed? — Okay, yeah. So the way it works is this: all of Kubernetes has a set of images that are the build environment for Kubernetes and for the other projects. This image is called k8s-cloud-builder, if I remember correctly. It's an image that we in SIG Release produce, with all of the tools that we need during the build process. This image is also built and signed by us, and that image creates the environment to run the build: we define the steps in Cloud Build and it launches in there.

And then the other notable part is the image promoter. The way the image promoter works is that you stage the images or files, and then you open a PR adding those files to a manifest, and then the community can go and authorize the promotion of those images. So someone opens a PR, and it's the actual owners of the code who have the authority to approve those images. After it gets approved, a postsubmit job triggers the promoter and kicks off the job to promote. I don't know if that answers it.

Yeah, it's just the same thing where, if the CI that runs the runners is compromised, then the promotion tool is compromised too. Would it be better to have the build done by a separate team? Because then you'd have to get collusion, right, to get the compromise. — Yeah, absolutely. So the question is: if you're building the builder within the same process, can you trust it? Well, I would say never trust, that's the first thing. But yes, that is one point that may be concerning. We build it in a separate process, it runs in a different step, so it's kind of isolated. But it would be better to ensure that we are using a previous build, just to be sure.
Because if you are always pulling the latest version of the code, you don't know what got merged, right? But if you know that you have a known good build, you can simply use that one. That's a point for improvement. If you want to work on merging that, I'm happy to guide you. But yeah, ideally, the builder should already be built and trusted before you build with it. Oh, okay. I think that's all the time we have. Thank you so much.