All right, welcome to this wonderful sponsored BoF. Thank you, CNCF, as weird as that is. Let's talk about baking a cake with supply chain tooling. This is more or less gonna be a giant Portal 2 joke. Hopefully you've played Portal 2; if not, well, you'll roll with it. So a couple of months ago, my best friend Bob Killen and I, we are inseparable, wound up doing a tutorial on supply chain security at DevOps Days Minneapolis 2022. We've done tutorials before. We have a Kubernetes tutorial that is pretty widely distributed, and many, many moons ago we had a container tutorial that at this point is like six years out of date. Our friend Bridget, who helps run DevOps Days, kindly roped us in, voluntold us that we were gonna do a session, and we did this one on supply chain security tooling. What we found when we were building this tutorial and doing all of our research was that there's a lot of tooling getting built, but there's not really a complete picture, and it looks like a huge hodgepodge of a mess when you're trying to tie everything together. When you think about Kubernetes, container orchestration, cloud-native applications, there is no de facto standard, but there are certain de facto tools, and at this point there are a lot of really good exemplars for how to build your application and how to deploy your application. There is none of that right now in the Sigstore space, and in supply chain security tooling in general. We had to give a tutorial on that, so we had to do a lot of that legwork ourselves. And for the record, if the conversation doesn't go very far, I will just pull open GitHub and talk through it. But what I wanted to talk about is: is there some way for us to actually come together and make a recipe? Like a generic "I want or I need these five things" in order to say that I both deploy software securely and, more importantly, consume that software securely.
Right now with Sigstore, it's real easy to sign containers and, you know, whatever other artifacts. And it's also relatively easy, using Kyverno, and I guess the policy controller when it is working, because it wasn't working when I was trying to do the Sigstore demo, to verify that a container is signed, and with a little bit more work, that it's signed by someone I trust. But there are no real examples that we found of: yes, this is software that has been built in an environment that I will trust, because it is the environment that the developer actually trusts. There is provenance data that shows all of the config data for the build and who actually executed the build. Essentially, true end-to-end validation that what is built is what they expect, as well as what I expect. So that's the prompt for the conversation. If no one has any ideas, like I said, I'll just go off the rails, but if you have questions, we can talk as a group. I have this here microphone, we can pass it around. Oh, it's kind of sad that screen is real, real dark. Let's look back at that. Do the demo? Wow, thanks. Thank you, Rob, display settings. For the record, this is not something that I was ready for. Okay, in a nutshell: GitHub workflows, Docker signed build. When we were doing the tutorial, we wanted some example to say, hey, if you are just building a generic container, let's at least generate some of that. SLSA is a standard being adopted for generating provenance data, and for its format. Let's build using GitHub Actions, let's generate SLSA provenance data saying that it's getting built using a GitHub Action, and then let's inject all of that as attestation data within Rekor, which is the transparency log project within Sigstore. I'm kind of not explaining the components, because this again is off the rails. But in general, this is what everyone got at the end of the tutorial.
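As a rough sketch, that flow boils down to a handful of commands. This is illustrative only: the image name and predicate file are placeholders, and exact cosign flags vary between releases (older releases needed `COSIGN_EXPERIMENTAL=1` for keyless signing), so check the cosign docs for your version.

```shell
IMAGE=docker.io/jeefy/demo:latest   # placeholder image name

docker build -t "$IMAGE" . && docker push "$IMAGE"

# Keyless sign: opens a browser for the OIDC flow, gets a short-lived
# cert from Fulcio, and records the signature in Rekor.
cosign sign "$IMAGE"

# Attach the generated SLSA provenance as an attestation.
cosign attest --type slsaprovenance --predicate provenance.json "$IMAGE"

# Sanity check: the signature and the attestation both exist and verify.
cosign verify "$IMAGE"
cosign verify-attestation --type slsaprovenance "$IMAGE"
```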
And I was like, this is the TL;DR for this whole thing. If you just want to generate secure containers, here's how you do it, copy-paste this. And then also in this repo is a Kyverno policy, so you can just apply it, and make sure that you change these two lines to be who you are, so when you're consuming the data, you're validating that it's your stuff, and that's done. So yeah, this is honestly the demo. We build a container. Woo, hopefully everyone has seen that. We have a provenance generator: it detects the environment, gathers a bunch of information on the build environment, and then it actually generates the SLSA data. I'll show what that looks like in a minute. And then, just as a sanity check after we've built it, we sign the container, we add the attestation data to that signature in Rekor, and then we go and validate, A, that the container exists, it's signed, and it has the attestation data that we generated. And since I'm kind of already doing it: Sigstore intro tutorial. Okay, you can see that, right? Cool. This is a Kyverno policy, and all it's really doing is saying: if there is an image being consumed on a cluster, and the image is from jeefy, that's my nickname, any of the containers from Docker Hub that come from jeefy/*, so any of my images, make sure that they are signed, and that the signature is from one of my domain-name emails, jeefy.dev. And then make sure that the issuer is GitHub, because essentially that's just what I'm using for my OAuth. This is a Sigstore thing specifically, but we're using Kyverno to actually validate this data. And then for anything else that may have jeefy as a namespace, also make sure that, for those image references, if it's getting built with a workflow, that workflow is getting its token from GitHub's Actions token issuer, token.actions.githubusercontent.com. It kind of ties everything together.
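For reference, the policy being described looks roughly like this. This is a minimal sketch assuming a recent Kyverno with keyless `verifyImages` support; field names have shifted between Kyverno versions, so treat it as illustrative rather than copy-paste:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-jeefy-images
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: require-jeefy-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        # Any of my images from Docker Hub...
        - imageReferences:
            - "docker.io/jeefy/*"
          attestors:
            - entries:
                # ...must carry a keyless signature from my domain,
                # issued via GitHub's OAuth flow.
                - keyless:
                    subject: "*@jeefy.dev"
                    issuer: "https://github.com/login/oauth"
```

The two lines you'd change for your own stuff are the `subject` and `issuer` entries, exactly as described above.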
But I'm not kidding, getting just this was kind of painful, and I feel like it shouldn't be that painful. I also feel kind of weird saying, hey, if you want true end-to-end consuming and producing, go look at a bunch of Kyverno policies and figure this out yourself, or spend a couple hours digging into it like I did. I feel like the faster we can lower the bar to entry, both for producing, which is definitely where most of the effort is, but also for consuming, to give us a full picture, the faster a lot more people are gonna adopt this. And I think that whole picture, again, is lacking. Does anyone have any questions or thoughts? I do have a microphone I can throw at you. Rob, come up with a question. Oh no, you actually had one and I don't know you, let's go. What's on your wish list on the consuming side, for the policy enforcement that you'd like to see be easier? So with Kyverno, I got it working. It's cool, right? But I still needed to dive into quite a lot of Kyverno to really get into it. I feel like there should be a simpler CRD that lets me just say: I want this identity linked to this container image specifically, or to this, we'll call it a namespace, right? So anything under the jeefy Docker identity, the containers should be signed by me through GitHub. I feel like this should be five lines, not 30. Have you looked at the Sigstore policy controller? I did. And the problem was, I mentioned this before you came in, when I was building that tutorial that I gave at DevOps Days, the policy controller was broken at the time. So I was like, crap. Okay, okay.
Don't get me wrong, and I mentioned this in the tutorial: the policy controller drastically simplifies that. It's just that when I first built the tutorial, it was like a month before the talk, and then when I went to validate that everything was working, like two nights before, the policy controller wasn't working, and I was like, oh, okay, well, this is actually kind of an interesting story to tell as well. So yeah, I think the policy controller is heading in the right place. I like the Kyverno angle because it gives more options for people. You're not just using Sigstore stuff, and if you already have Kyverno in your cluster for other policies, then you don't have to install the policy controller and add another thing to your cluster; you already have the thing that you need. So. Makes sense. Yeah. Anyone else? Who wants dinner? I'm like, this was it, fair warning. I mean, I have another thing. So I saw in the folder you had a Gitsign demo. How'd that go? Awesome, cool. Are you Chainguard? I'm the person who made Gitsign. I will have you know, I almost want the camera and mic turned off, I'll have you know: I work at the CNCF and I have regular meetings with GitHub every month, and I'm not kidding, every month I say, are you trusting Sigstore's certificates so that Gitsign commits show as verified? And I won't say their answer on camera, but I'll say this: eventually. But I mean, I love it. I have it set up on my laptop and everywhere. Cool, yeah, me too. The greatest tweet that I saw that explains Sigstore in a nutshell is: you're taking an intractable key management problem and making it a tractable identity problem. And it's like, yes, this describes it in a nutshell perfectly, and I love it so much. So yeah, cool, awesome. Does anyone have any other thoughts, questions? Does everyone know what Gitsign is? Good, hmm? Okay, so luckily I have this here Sigstore intro tutorial that has a Gitsign folder. Yeah, explain it, that's right.
This is your talk now, not mine. If you want to pull up the demo, yeah, go for it. I mean, the thing around Gitsign is, are you familiar with Cosign? Okay, so Cosign is for container signing, Gitsign is for Git signing. The idea is we're using the same ephemeral cert flow that we use for container signing, and all of that, for Git commit signing. So all we're doing is, when you go through the git commit flow, we'll open up a browser tab, which is nice because now you can sign your Git commits with your GitHub identity, and you never have to pre-configure a key: use once, throw away, all of that. So it's pretty slick, and you can grab this script and it'll configure it for you, and just go to town. Yeah, and under the Chainguard org we actually have a GitHub Action that you can just import, and it'll do all these configuration steps for you, and you can just sign things with OIDC identities. We want to use that for attestation stuff that we can bake into the repo as well. The same way that Cosign is baking attestations into OCI repos, we also want to see what we can do to bake Git commit and source attestation data into the repo itself. That's under super active development; it half works, it isn't ready for people to use yet. But no, it's not that one. Or is it this one? I need to find the right intro, and it's something shared with Bob. August 9th, yep. Oh, I wanted to show the SLSA data, since I keep mentioning SLSA. SLSA is this thing, Supply-chain Levels for Software Artifacts; this was Bob's section of our tutorial. His whole thing, by the way, was that these are the components, or points, where people could actually inject code. This isn't necessarily every attack vector, but it's certainly a large picture, and generating SLSA data at least tries to mitigate as much as possible, as long as it's, again, verified SLSA data that is part of the attestation.
You can keep me honest if I'm saying words; at this point it's all word soup in my head. But here's kind of what it looks like, and it's honestly JSON. It's not YAML, because this isn't Kubernetes, it is just JSON. It lets you capture pretty much as little or as much as you want in terms of the build environment. So: is it Jenkins? What's the Jenkins node information? What were the environment variables when the thing ran? How did it get invoked? All of that, and then you get an artifact, and that artifact gets signed. So again, we're trying to capture more information earlier in the process. You sign your source: okay, this Git commit is signed, so theoretically the Git repo is secure. You take that code, you actually build it: okay, the build is now verified; we have all the information we can look at to make sure this is what we would expect, that it's not being executed on a server somewhere we don't expect. That would be a little sketch. And then the package itself, the artifacts that we get out of it, those get signed and then shot off. And then we can build policies in things like Kyverno or the policy controller where we can verify everything is good. Honestly, I feel like in six months, and I said this in my tutorial as well, the tutorial is gonna be out of date, because we're gonna have a lot more tooling that shows us the complete picture. I just kinda wanna kick it and make it happen faster. So some of the cool things you can do with this, and also with the Sigstore cert, is you can write policies like: only allow this image to deploy to my cluster if it was built from main, from my repo, from this Action that ran on main on my repo. And that's not something you could do with just a key, because what you're doing there is assuming, oh, this key's only being used in this workflow, but can you actually trust that?
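The JSON being shown is an in-toto statement carrying a SLSA provenance predicate. Trimmed down and illustrative (the image name, digests, URIs, and run IDs here are placeholder values), it looks something like:

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "subject": [
    {
      "name": "docker.io/jeefy/demo",
      "digest": { "sha256": "e3b0c442…" }
    }
  ],
  "predicate": {
    "builder": { "id": "https://github.com/jeefy/demo/.github/workflows/build.yml" },
    "buildType": "https://github.com/slsa-framework/slsa-github-generator@v1",
    "invocation": {
      "configSource": {
        "uri": "git+https://github.com/jeefy/demo@refs/heads/main",
        "digest": { "sha1": "deadbeef…" }
      },
      "environment": { "github_actor": "jeefy", "github_run_id": "123456" }
    },
    "metadata": { "buildInvocationId": "123456-1" }
  }
}
```

The `subject` binds the provenance to a specific artifact digest, while the `predicate` captures the who, where, and how of the build; policies can then match on any of these fields.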
So one of the things that I would like to do with a ko-built and published container is I would like the application to self-report parts of the SBOM. That's one of the things I've been chatting with Delpho about today, how to do that, chiefly because when things are going wrong during development, I want to be able to say, well, I came from this commit ID, as a bare minimum. Is it possible to have an application self-report? Yeah, so yes, and I think that will probably become more and more common. Some tools already do; in general with Go, if you know the commit, you have the go.mod file, so you can sort of infer it from there. And even more recently, I think in Go 1.18, they actually embed the go.mod into the binary itself, so there's a command (go version -m) you can run that will spit out the go.mod that went into that binary. I know other tools like ko will also embed SBOMs into the images they produce. I suspect that will become more and more common across build tools, but those are the examples I know off the top of my head. Hi, thank you. I have like too many disagreements, but let's start with the question. Okay. Is anything blocking GitHub from saying something different? That me, as a hacker, or a GitHub insider, or anyone, used your OIDC to sign it? Because you are saying, verify me, maybe, sorry. I'm not sure if you can put the policy back up, but you are saying: verify, trust this address. But to me, the only trusted thing there is GitHub saying, oh yeah, he signed it. Yeah, the subject or whatever is what you're trusting, but how is it verified? It seems to me that the only trust is in GitHub. So let me walk through this and make sure that we're both on the same page.
This is saying: if I do a docker pull, or I guess a kubectl run, and I run any image from docker.io/jeefy/whatever, that image needs to be signed, and it needs to be signed with an email address from my domain name. That, to me, is a pretty good sanity check that anything you would run from jeefy, like side projects of mine, I'm the one with the identity that generated that thing. On top of that, it will also double-check and make sure that it's an image that was generated by a GitHub Action; I don't lock it down to main. So, oh, you're right. So, zoom in. I'm trying to say that, to me, the main thing you are saying here is: trust GitHub. Trust GitHub as an OIDC provider, yes, but we can also use other OIDC providers. We can tie in Google. You can also run your own Sigstore certificate authority and point it at your own OIDC provider, and this could be completely internal. I get it. I didn't know about the Docker, the container signing, so I double-checked it. It seems to me that there are two ways. One is, of course, GPG. The way it works is that it works, and nobody would use it, probably, because you would need to verify the GPG key yourself, right? We know that that's a theoretical thing; in the real world, nobody's doing it. That's my opinion. The other way is OIDC, and I think, sorry, let me say, I think it's nice. It's solving the issues with GPG, so it's definitely better than using GPG. On the other hand, I don't think it solves the trust between the developer and the build system, in this case GitHub, because there's no way I can verify that you are the one who wrote the code. The only trust I have is that I know it came from the build system. Whereas with GPG, theoretically, if I verify your GPG key, I know that you wrote the code.
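The check the policy encodes can be boiled down to a few lines of logic. This is a minimal sketch, not what Kyverno actually executes: the jeefy/jeefy.dev names come from the talk, the sample email is made up, and the issuer URL is the one GitHub's OAuth flow normally produces in Sigstore certs.

```python
import fnmatch

TRUSTED_ISSUER = "https://github.com/login/oauth"  # GitHub's OAuth issuer

def identity_allowed(image: str, cert_email: str, cert_issuer: str) -> bool:
    """Return True if a keyless signature's identity satisfies the policy."""
    if not fnmatch.fnmatch(image, "docker.io/jeefy/*"):
        return True  # the policy only constrains jeefy/* images
    # Signed by a jeefy.dev email, and that email was vouched for by GitHub.
    return cert_email.endswith("@jeefy.dev") and cert_issuer == TRUSTED_ISSUER

# Right email, wrong issuer: rejected, because anyone could claim an email.
print(identity_allowed("docker.io/jeefy/demo:v1",
                       "hello@jeefy.dev",
                       "https://accounts.google.com"))  # False
```

The issuer check is the part the questioner is probing: the email alone is just a claim, and it's the OIDC issuer named in the certificate that vouched for it.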
No, no, no, that is a very good point, especially with this Kyverno policy, and you'll have to tell me whether the policy controller actually integrates with Gitsign yet or not; that's actually one of the things I'm happy to hear about. This is still an unsolved problem. Right now, most of what we're doing is going from CI onward, and we don't have something that gives us a complete picture that verifies backwards. Right now, I could push a commit up, and technically, if you manually go and look, you can verify that I am who I am and that it triggered a build; you can manually see everything get built and have faith and trust, trust in GitHub that their systems aren't compromised, and assuming that's good, that I'm the one that pushed the commit and this is the container you got based on my build. But this is still an incomplete picture. Yeah, so we have some source validation, verification, it gets signed right now; the harder part is that Git only gives you an email address, so how do we do the GitHub Actions URL for something that doesn't have an email? That's what we're working out. User verification is actually easy; it's the machine verification that's hard, which is usually the opposite. So for the email thing, are you more worried about GitHub lying about your email, or are you worried about how the cert can be trusted from Sigstore? Yes. I think what we hear at this conference is that we need to trust all of the parts, we need to secure it, and I'm just trying to point out that, to me, it's not the whole thing, and I don't really know how to solve it. And sorry, to your question, I don't know how to answer that; it's more like, how can I trust GitHub that it's you? So, the nice thing about the issuer there is that you don't have to trust GitHub.
So if you know it's an @gmail.com address, you can just have a policy that says I will only trust accounts.google.com to say, oh, that's the actual OIDC issuer, because that's the authoritative source. For GitHub, it's basically just saying, yeah, there was an account that had this email associated with it. Is it actually your email? GitHub doesn't actually know, right? So that's one thing. If you are concerned about that, for most people it's probably fine, right, because you do have to go through that verified flow. But if you want to be a little more paranoid, then you just set the issuer to be the actual identity provider for that host, and only you would know that. From there, I think there is still a bit of trust, in that you need to trust that Sigstore is issuing these certificates properly. I know there is some work going on; it's all open source, Sigstore is all open source, yeah. So there is that, but additionally, I am less familiar with the crypto stuff going on there, but I can put you in contact with someone who would know more. I know there's some research going on for basically being able to verify OIDC tokens after they've been used, without actually giving away the OIDC token, so that we would be able to include that in the certificate. So ideally then you'd be able to say, oh no, that actually was that person from that identity provider, even long after, but that's actively being worked on and looked at, and we're not quite there yet. So there is a little bit of trust that you need to put into Sigstore doing the right thing. But if you don't trust it, then you can run your own instance; all of the code is there to do all that. Yeah, he had his hand up quick. I don't know which point to address. All of them.
So one of the nice things about the combination of Sigstore and SLSA is that it makes an intractable problem tractable, and then you have to think about, well, do I actually trust that thing, rather than: I can't possibly trust any of it, so it's just a mess. So things are improving; given how recent Sigstore and SLSA are, I think the progress is amazing. Yeah. You can also tell me to put other things up if you want, if there's a different screen. Oh, I wanted to mention that the SLSA project has some reusable workflows, the SLSA GitHub generators, so you can generate your container image using a reusable workflow. And the reason it's a reusable workflow is that it stops someone who has push access to the repository from tampering with the code that's doing the attestation generation and signing. The whole point of a SLSA attestation is that it describes what happened during the build. So if anyone who can push to the repository can change what you're attesting to, then they can effectively cover their tracks if they get access to the repository. So yeah. All right, so that tells you that someone who's approved to push to this repository triggered a build action, it tells you the commit digest of the repository at the time the action was run, and it tells you that it definitely ran on a GitHub builder. So if you trust GitHub Actions to operate securely, which, frankly, I trust GitHub Actions to operate securely more than most other available options; they're certainly up front. Yeah, I mean, I still fall back on the point that it's making everything something you can suddenly reason about, so you can try to determine which things you don't trust, rather than the state before, which was just the "this is fine" meme, right? You couldn't reason about software supply chain security in the same way, I would argue, because it wasn't even clear who you were trying to choose not to trust, or whether to trust, I suppose. I don't think I've convinced him.
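A caller of that reusable workflow looks roughly like this. The workflow path, ref, and input names here are assumptions; check the slsa-framework/slsa-github-generator README for the current interface before using it:

```yaml
name: build-with-provenance
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      digest: ${{ steps.push.outputs.digest }}
    steps:
      - run: echo "build and push the image here, capturing its digest"

  provenance:
    needs: [build]
    permissions:
      actions: read    # read workflow run metadata for the attestation
      id-token: write  # mint the OIDC token for keyless signing
      packages: write  # push the attestation alongside the image
    # Because this is a reusable workflow pinned to a ref, repo
    # collaborators can't tamper with the attestation/signing step itself.
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v1.4.0
    with:
      image: docker.io/jeefy/demo
      digest: ${{ needs.build.outputs.digest }}
```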
That was fine. I'm just thinking, if we are putting the trust into GitHub anyway, and this is more of an off-the-top-of-my-head idea: in my naivety, if we trust GitHub, and GitHub somehow trusts me, which is what happens when, for example, I sign my commits with my GPG key or anything, I can imagine that it would be built on top of that. So I sign my commit, push to GitHub, and then GitHub somehow signs it, and it doesn't make any philosophical difference to me, because in the end I trust that the code in GitHub was pushed by a user who is approved to do that, and then GitHub somehow tells me, okay, I have built it. I would guess I understand why this is here, because GitHub probably doesn't do that; this sounds to me like we are doing a job for GitHub, because GitHub doesn't have any such verification, but it could. Well, with Gitsign, there might be a story where GitHub tries to address this and do it themselves. I tend to think they won't, just because there are a lot of components tied into Sigstore-specific workflows that Gitsign uses. But right now, if you generate your GPG key, sign your commit, and push it up, that's great, but if someone stole your laptop, they have your GPG key and they can push up to your repo, and that's not great. With Gitsign, if they tried to push up, they're going to get prompted with a login to GitHub, and at least that's a bit more of a barrier. That's kind of how I see it. Gitsign is a better way to link me as a person to the code that I am committing, because right now, how it actually works is this laptop is pushing the code, not me as a person. Yes, there are GPG keys that you can distribute, but that GPG key is linked to this laptop; it's not linked to me as a person at a point in time. That's the problem Gitsign is solving: my identity has pushed that code, not my laptop.
I would like to challenge that. You can encrypt your GPG key, so it doesn't stay available, at least not that easily, if your laptop gets stolen, and the question might be: what are the chances of someone stealing your laptop versus stealing your GitHub identity? Which is easier, maybe? It's a balance, of course. Yeah, I have opinions, but. Yeah, I actually talked about this a little bit in my talk too. Oh, sweet. So yeah, with GPG signing, if you have good maintenance of your keys, you have them encrypted, you rotate them, and you do everything you're supposed to do, great, fantastic. No one's saying drop your GPG keys and move to Gitsign. The reason we're putting effort into it is that a lot of people don't do that, and we wanna make that barrier to entry as low as possible. So it's like: if you wanna do all that, great, fantastic; if you don't, then use Gitsign, or use something else. It's just about making these things as easy as possible, so there's really no excuse not to do them. I would like to say one positive thing about Gitsign, because I don't want to be the hater, I'm not. I think it might solve one of the questions; maybe you can tell me that I'm wrong. One of the issues we saw is that if you have a system behind some jump host or whatever, it's very hard to sign commits there if you need to for some reason. Like, you can forward the GPG agent, but it's hard to do. This probably might solve it, because the browser flow would probably work anyway. We have five minutes left, and we went off the rails in such a wonderful way. Do we have any other comments or thoughts? Again, this was just meant to be a discussion; I didn't have a point. On the committing from the laptop: if you have 2FA enabled, like when you have the laptop and another device, that is better, isn't it? Yeah. Yeah, when you go through the flow. Yeah, I mean. That's a robot.
My opinion was: yes, that's true, if you encrypt your GPG key, that's great, but GPG doesn't have a second factor, so, I mean, they're gonna have to steal all your stuff. Yeah, I guess it could, technically. The only thing is, I would imagine that GitHub, or a Git provider, could somehow publicly certify where you push your work, and whether you sign commits with 2FA; I saw something like that, I think. I can imagine that that would be a way to build some trust and motivate people, and I'm thinking whether, in this scenario, it would be better to use a GPG key or OIDC. But anyway, I don't like the idea that we put the trust into the OIDC provider. I can imagine that there is an effort to have some OIDC providers bound to real IDs, with a lot of verification; national IDs, at least in Europe, are becoming a thing. So I can imagine that if it were bound to some trusted OIDC provider, it would somehow help. I'm not sure. So the reason that Sigstore is leveraging OIDC is that, effectively, most people put a lot more effort into securing their email account and their GitHub account, often not intentionally, because GitHub strongly encourages you to have a second factor of authentication, than they do pretty much anything else. So in the average case, if you make sure that the thing is signed by someone who can, at that time, authorize with this typically more trusted account, then you have leveled up the security posture. And you can see the evidence of this in all of these attacks on things like npm and PyPI, where a maintainer pushes to PyPI very infrequently, they have a different credential for it, they don't have any additional security features on it, and so someone cracks the password and starts pushing things, and they never notice, because they are not actively monitoring the account; they only look at it when they go to publish a release.
So that's effectively why Sigstore is built on top of the OIDC thing. Some people do it right with GPG keys: encrypted GPG keys, stored off device, rotated frequently. But most of us, and I'm speaking from personal experience, most of us who try still get it wrong. So the idea is basically to have an easier option, so that more people will try and use it, and to root the trust in something that is typically best secured anyway. That's the pitch for OIDC. I cannot believe it, but we're actually at time. All right, don't clap for me, clap for yourselves. I was just kind of here to spark stuff. Thank you.