All right, I think we are live. Hey everybody, welcome to episode eight of Chatloop Backoff. This is a program that dives deep into the realms of the cloud native ecosystem. I'm back yet again. If you've tuned in before, you've seen me; hopefully this time there are a few fewer technical difficulties than the last one, but I'm glad to be here again. If we've never met before, it's a pleasure to meet everybody and thanks for joining me today. My name is Jeremy Rickard. You might know me from SIG Release. I send emails once in a while when we do new Kubernetes releases. I'm also part of the CNCF and Kubernetes Code of Conduct Committee, so you may know me from there. If you've never tuned into this stream before, well, welcome again. This live stream is a sibling to the similarly named Crash Loop Backoff, an event that's been happening at the last couple of KubeCons, Chicago and Paris. In that one, two community members are pitted against each other in a competition, not unlike some cooking contests where a secret ingredient is chosen, to accomplish some sort of technical challenge of the host's choosing. It's meant to be laid back for the audience and pretty fun. There's some color commentary that goes on, but it's kind of interesting because going into it, the competitors don't know what topic they're gonna pick up, so they have no real preparation going on for that. This stream is a little more choose-your-own-adventure version of that. The idea behind it is that we're gonna pick a topic and learn it live. So the goal with this is that I have done no real in-depth research into this topic. Instead, I wanna kind of go through my normal learning process, dive into this topic with a fresh set of eyes and kind of go through it as I normally would off stream. In doing so, I hope that we can all walk away having learned something new. Today, we're gonna be looking at the CNCF incubating project, Kyverno. So you might ask, well, what's Kyverno?
Well, that's what we're here to learn today. During this live stream, if you already know about Kyverno, feel free to jump in, answer questions that people might have, help me if I get stuck. I'll definitely be taking a look at the chat and try to answer as many questions as I can. I'll try to keep it a little focused as we work through some of that stuff, though. If you have questions about what I'm doing, or the project itself, feel free to drop those in chat and we'll try to work through them as much as possible. That being said, this is an official live stream of the CNCF and is subject to the CNCF Code of Conduct. So please don't add anything to the chat or questions that would violate the Code of Conduct. Basically, please respect all of the participants and me in this live stream. This video is being recorded, so if you're here live, thank you for being here. But for the folks that couldn't be here live, we're gonna make this available on YouTube so people can still learn along with us, just asynchronously. If you've been here before, we've gone through some cloud native news. Today, I chopped it down a little bit and just wanna go through a couple of things so we can give ourselves enough time to dive into Kyverno. I think it's gonna be a stretch to fit it into the time. We may do a second episode of this coming up, but we'll see where we get. So first up, I'm gonna share my screen so we can just take a look at a couple of things. See if this works, share my screen. All right, so we bring up Safari. First thing I wanted to make everybody aware of, I've mentioned this the last couple of episodes: Kubernetes 1.30 is gonna be here so soon. It's gonna be here next week on April 17th. So we're less than seven days away from this release happening. It's super exciting. I think kudos to the whole release team. I've been watching it as chair for SIG Release and it's been a really, really cool process to see unfold.
Lots of great things happening, lots of cool content coming in. Definitely stay tuned for all the blogs and the feature blogs and the release notes and the actual release blog itself. It's gonna be really cool. Second thing I wanted to mention, and mostly this was kind of to remind myself because I asked this question yesterday and somebody said, oh, it's gonna be coming up in June: the CFP, or call for proposals, for KubeCon North America is open right now and it will close on June 9th. So it's a little bit away, but if you're like me, you may procrastinate a little bit on getting these things ready. So I just wanted to put a reminder out there. I think it'd be great to see new speakers and new folks coming to this and bringing your lessons learned and all of your cool learnings and experience around Kubernetes and other things in the cloud native ecosystem. Okay, so that was a real quick brief through that set of news and updates. I wanna dive into Kyverno now. So we're gonna do that. Let me jump over to another browser tab here that I pulled up. And I guess this will answer our first question: what is Kyverno? I see a question in the chat, maybe a question from Chris. Kyverno versus OPA. I think we'll maybe dive into a little bit of that, and, you know, maybe a little bit of context about what Kyverno is, but this is gonna answer it for us. So basically right up front, Kyverno is a Kubernetes native policy management tool. So what's a policy management tool? Policy management, or what's a policy, I guess? Some set of rules. I think there's gonna be some documentation that we can dive into for that. It's generally gonna allow you to apply probably security policies to your cluster, controlling and shaping things, and maybe some organizational policies that aren't necessarily security related, but are things you wanna apply at the cluster level.
Okay, so we're gonna do maybe a little walkthrough of the docs first just to kinda understand things. I do have a Kubernetes cluster ready to go for this. It is a brand new Kubernetes cluster. We're using an AKS cluster. I work for Microsoft, but I have a free cluster, so I'm using that one today. So just to start off with, let's see what we've got running, just to make sure that I haven't done any pre-work here. So we've got just an AKS cluster. There's no Kyverno stuff running in this at all, so we should be ready to go. If this fails for whatever reason, we'll spin up a kind cluster and try to do things locally, but I wanted to kinda go through a real exercise here and see what it's like to do it in a real cluster that allows us to expose things to the internet and do fun things like that. Okay, so let's click on learn more and start our journey. So we've got: Kyverno is a policy engine designed for Kubernetes, and that "designed for Kubernetes" piece is in bold. I think maybe Chris's question, Kyverno versus OPA: Open Policy Agent, or OPA, I think, is a more general purpose thing. It works well with Kubernetes and I've definitely used it in the past, both as OPA with the sidecar piece and OPA Gatekeeper. So I'll be able to draw some comparisons as we go through there a little bit, but I think what's standing out to me here is that Kyverno is a policy engine designed for Kubernetes. So this is probably built from the ground up to be a Kubernetes thing. Another big difference between OPA and this, I think, is that no new language is required to write policy. So in OPA and Gatekeeper, you're gonna be using Rego, or Rego, however it's pronounced, in order to do your policy authoring. So it looks like here we're gonna be able to just work with Kubernetes resources and not necessarily need to write policies in some other language like Rego that we'd need to learn.
Okay, so that allows us to do familiar things like kubectl get and Kustomize to manage our policies. So that's pretty cool. We can do some GitOps, we can use the tooling we're already familiar with, we don't need anything special on the developer side or from our CI/CD systems in order to take advantage of that. That's pretty cool. Next thing on here: Kyverno can validate, mutate, generate and clean up Kubernetes resources. So this validate and mutate thing makes me pretty sure this is gonna be implemented as an admission controller or admission webhook for Kubernetes. Maybe we'll find some architectural diagrams here. If not, I can sketch something up real quick to kind of walk through what that looks like. But we'll go through the docs. This is a cool one: verify image signatures and artifacts to help secure the software supply chain. So this would be something like you're using Chainguard, or you're using Sigstore, and you're using cosign to produce signatures for your images. And you wanna verify those things, or somebody that you're consuming images from is doing that. They're signing these images to provide some non-repudiation: I produced this image, not the bad actor over there. And I only wanna allow things in my cluster that have valid signatures. So you could do that with cosign, or you could do that with Notation, which is another CNCF project. So that's a pretty cool pattern that I've seen emerging. We're doing some of that in my day job today and it's pretty nice to see that that's a feature that's coming in Kyverno, that's pretty cool. Okay, so I wanna look at sample policies before we start trying to walk through the documentation a little more, just kinda curious what this looks like. So there's quite a few things here. Maybe we can pick a simple one. This one, I think it would be interesting: add labels.
You wanna add all kinds of labels to your deployments to be able to find them later on and kind of query for things, make other rules for those. This says add labels, and I see a bunch of "add" things here. Those kind of speak to me as mutating things, making changes to the request that's coming in. So let's start with this one and just maybe take a look at what it does. Okay, let me make the text a little bit bigger and make sure everybody can see it. So labels are used as an important source of metadata. Yep, that's pretty straightforward. If you have done anything in Kubernetes before, you're probably familiar with labels. They're used for services and a whole bunch of other things; key value pairs, metadata like that. So there's a sample policy here and I think we can see our first little dip into the world of Kyverno, and we have a custom resource here. So this custom resource is called a ClusterPolicy and the API version for this is kyverno.io/v1. So this looks really familiar compared to other Kubernetes resources. It's just a custom resource, but it's following that kind of same pattern, right? We've got a spec here. It's very familiar from other Kubernetes resources like a pod or deployment or your service. A bunch of different metadata pieces up here, annotations. This is pretty cool. I think you'd be able to do some visualization of this or use it to organize or better manage your policies and kind of understand what's there. Metadata, the name of the policy. So this is all pretty simple. I think once we get down here, we're starting to see what some of these policies look like. So the rule here is called add-labels and it looks like we're gonna match any Kubernetes resource that is of type pod, service, config map or secret. And up here: this policy performs a simple mutation which adds a label of foo=bar to pods, services, config maps and secrets. So we see that here.
We're calling out the specific kinds that we wanna use there: pod, service, config map or secret. If we wanted to add other things, I think we'd be able to call them out here. And then the action I think we're gonna take is mutate. We saw that up here as well, performs a simple mutation. It's gonna do a patchStrategicMerge. And it's going to basically patch the metadata section of whatever resource we've got here to add foo: bar to the labels. So we don't have any labels on this one, but we've got annotations. You could imagine that this might also have a labels object on it as well. And then you could maybe add cluster policies to this to add labels to it. So, okay, that gives me a little bit better idea of what our policies are gonna look like. We're gonna have a set of rules, and inside of those rules, we're gonna have some configuration that lets us match against what we wanna apply that to, and then some actions we're gonna take. Okay, I'm gonna jump back to this page and start with the documentation, see if it gives us a little bit of a quick start to kind of walk through and figure out how we're gonna get going with this. But I think having that little bit of context from the sample policies is gonna be useful for me. Just before we go back, I think this is a good URL to take a look at: kyverno.io/policies. I'll put this into the show notes afterwards, but there's a whole bunch of different things here. We've got some mutations, some validations, some image verify, some cleanup stuff. That seems like a pretty cool use case too. And then a whole bunch of different categories. It seems like there's a lot of things that you can do with this. This pod security policy migration stuff is kind of drawing my attention as well. Maybe we'll dive into that a little bit. Okay, so back to our docs. Okay, so we've got more info on what Kyverno is. It's Greek for govern. That's cool, keeping with the Greek words, like Kubernetes.
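Just to have it on the record in the show notes, the sample we walked through looked roughly like this. This is a sketch from memory of the policy on that page, so treat the exact values as illustrative; the shape follows the kyverno.io/v1 ClusterPolicy schema:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
spec:
  rules:
    - name: add-labels
      match:
        any:
          - resources:
              kinds:        # the kinds this rule applies to
                - Pod
                - Service
                - ConfigMap
                - Secret
      mutate:
        patchStrategicMerge:   # merged into the incoming resource
          metadata:
            labels:
              foo: bar
```

So the match section picks out the kinds, and the mutate section carries the patch that gets merged into whatever comes in.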
It's a policy engine designed specifically for Kubernetes. Some of its features include: policies as Kubernetes resources, no new language to learn. So we just saw that, no Rego policies to have to do. We can maybe look at a comparison to Rego in a second. It can validate, mutate, generate or clean up any resource; validate container images. We saw that on the previous page. Inspect image metadata, that's pretty cool. Match resources using label selectors and wildcards. Validate and mutate using overlays, like Kustomize. That's pretty cool. Synchronize configurations across namespaces. Block non-conformant resources using admission controls. So that's what I would typically think of as an admission controller, where we're doing validation. We wanna make sure that our resources are complying with our policies, complying with whatever organizational rules we've put in place. That's a really, really good use case for webhooks like that. Self-service reports. Self-service policy exceptions, that's pretty cool. Testing policies, that seems really, really useful. So there is a CLI, the Kyverno CLI. I think that other page mentioned you didn't need any other tools, but I would guess that this CLI is gonna give us a lot more functionality, like testing policies and validating resources before we apply them. So that's gonna be really nice. And then of course managing policies as code using familiar tools like git and Kustomize. Any other tool you're using to apply things to clusters is probably gonna be a great way to be able to use Kyverno. Okay, so how does it work? Great, there's a diagram down here. This is exactly what I was gonna look for. If we didn't have one, I was gonna go draw one real quick. But Kyverno runs as something called a dynamic admission controller. And then this links out to the Kubernetes docs.
If you wanna get a little bit more in depth into this, it talks a lot about what's happening with webhooks, what they do, what you can do with them, how to write your own super powerful function to extend Kubernetes, to do more in the realm of admission, validation, authentication, sort of things like that. So it runs as a dynamic admission controller in your cluster. So we're gonna deploy something into the cluster and it's gonna become part of the admission process. So when you type kubectl apply, you enter that admission process, and there's a pretty good diagram of what that looks like here. Kyverno policies match against the resource kind, name, label selectors and much more. So in that other example, we saw that it was gonna match any resource of a specific kind, but it looks like you can do a lot more. You can use label selectors, you can use names. I think hopefully we'll dive into a little bit more about those things. Mutating policies can be written as overlays, similar to Kustomize, or as JSON patches. So that's pretty flexible. You can do a lot with that. Validating policies also use an overlay style syntax with support for pattern matching and conditional if-else processing. So that's kind of cool. You can do some conditional logic inside of those policies. We'll take a look at that hopefully too. Really curious to see what that looks like. Policy enforcement is captured using Kubernetes events. Okay, that's cool. So you can get a nice list of audit events for what's gonna happen. For requests that are allowed, or resources that existed prior to the introduction of a policy, it creates policy reports in the cluster. So that's great. We can get an ongoing kind of view of what's there. Okay, so what is a dynamic admission controller? So let's look at this pattern here, right? So you make an API request, and anything you do with kubectl is gonna be an API request, right? You do a kubectl get pods. That's an API request.
Behind the scenes, if you do something like this, you know, if you do kubectl get pods, that's making an API call to the server, and you can do something like -v=999. And it's gonna show you all of the requests that are being made, right? So we're making a REST call basically here to /api/v1. The text is probably a little small, let me make it bigger. So I ran kubectl get pods and I turned the verbosity like way, way up so we can see what's happening here. But essentially, you know, we're making an HTTP call to /api/v1/namespaces/default/pods and then we get back a list of pods, right? So in this case, if we go back to our little diagram, we made that request, it went into the HTTP handler. It's authenticating to make sure that I'm able to do those things. You know, if anybody's making a call to the cluster without proper authorization and authentication, you know, this is gonna block them and kick them out. Then we get into mutating admission. And mutating admission is when we're gonna be able to make changes to that resource that's coming in. So, you know, think back to the sample policy we saw: we wanna add labels to that resource that's coming in. That label addition is gonna be a mutation. So it's gonna go through this mutating admission piece of the admission flow. So we see what happens here is that that request is bumped down to whatever admission controllers are running, and then this diagram here looks like a diagram of what Kyverno looks like architecturally. The pieces up here, from this box above, that's Kubernetes specific. So you can write your own mutating admission controller. You could write your own validating admission controller. You can write one that does both, and you can plug it in here and register it with the cluster and be good to go and do whatever you wanted inside of that. So this is what Kyverno is bringing to the party. So there's the actual admission controller that's gonna implement that interface for us.
And then a policy engine, I'm assuming; a controller for the webhook; something to renew certs and modify secrets; a report controller to generate those reports that we saw, background scans, admission reports, all that good stuff. And then a background controller that looks like it's doing things not necessarily as part of the admission flow, because we don't see any lines coming directly to it, but you can get a little bit more info here. So the webhook is a server that handles the incoming admission requests. Got that. It sends them to the engine for processing. It's dynamically configured by the webhook controller. So that's what's configuring this piece based off of that configuration. The webhook controller watches the installed policies and modifies the webhook configuration accordingly. So when it sees new policies, it's going to update things to do what it needs to do. The cert renewer is responsible for watching and renewing the certificates, stored as Kubernetes secrets, needed by the webhook. Okay, cool. So those are kind of operational pieces. The background controller handles all the generate and mutate-existing policies by reconciling update requests, and the report controllers handle creation and reconciliation of policy reports. Okay, so that's great. So, Kyverno supports high availability; I would expect that. There are probably some really good details on this high availability page. Yeah, we can see a bunch of stuff there. So maybe we'll dive into that if we have a little bit more time, but I am kind of excited to get going with this. So let's install it into my cluster and see what we can do. Okay, so this section is going to be a walkthrough for us. So to install Kyverno, we want to install it from the latest manifest. I assume you can probably pick different versions if you wanted to do that, but for all of our purposes here, let's do it from scratch.
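Before we install, just to make the "dynamic admission controller" idea concrete: registering a webhook with the API server is done with a resource along these lines. This is a generic sketch of what a validating webhook registration looks like, not Kyverno's actual generated configuration (Kyverno's webhook controller creates and manages its own, and the names here are made up for illustration):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validating-webhook    # illustrative name
webhooks:
  - name: validate.example.com        # illustrative webhook name
    clientConfig:
      service:
        name: example-webhook-svc     # service fronting the webhook server
        namespace: example-system
        path: /validate
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail               # reject requests if the webhook is unreachable
```

The API server sends matching requests to that service as AdmissionReview objects, and the webhook answers allow or deny; that's the interface Kyverno's admission controller is implementing for us.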
Okay, so let me clear the screen off so we don't have all that stuff in the background. Make that a little bit bigger, and what we're going to do is run this command. So we've got kubectl create and we're going to reference that YAML file that's on the internet. I actually want to grab that first and see what it looks like. So I've got a directory here that I'm going to store all the stuff in so we can make it available later, but let's wget that and see what we get. Okay, so we've got an index.yaml. I'm going to fire up Visual Studio Code just so we can take a look at that before we install it. It's always a good idea to look at those things before you fire them up and install them in your cluster. Kind of the same way that curling a bash script and running it probably isn't the safest thing to do, but this will give us a better idea of what's going to happen here. So the first thing we see is it's going to create a namespace for us. I totally expect that; in your cluster you're going to end up with namespaces to store the things you want, like the Kyverno-specific things are going to be here. If I install some other thing like Flux, I probably would get a flux namespace. So this is going to create a kyverno namespace for us. So that makes sense. We're going to store everything there, and we'll see that the namespace is specified for all these things. We're going to make a service account, a couple of service accounts for those different components that are going to run, so that they have their own identities. We've got a config map that's going to allow us to specify which resources this is going to operate against, and so that gets most everything. Okay, and then we've got a custom resource definition. This is going to be the policy that we saw earlier, probably more than that one. We're probably going to see a few of those things. So we've got an admission report, yeah, that makes sense.
We also have down here another custom resource definition, background scan reports. Let's see if we can collapse this. I'm not a very good editor. Okay, let's just, there we go. So the next one is the cluster admission report, okay. Then we've got our cluster background scan. So quite a few custom resources are going to come along with this. Makes sense, there's a lot of things that are going to be happening here. Cluster cleanup policy, yeah, again, I'd expect something like that. And we've got cluster policies, okay. So I think this is the one that we're going to be most immediately interested in. So this is going to give us the ability to write those policies in a Kubernetes-like resource. It's a custom resource, it's not a core Kubernetes component, but it's going to follow the same kind of model as other things that extend Kubernetes with custom resources. Got a few of these in here. And then we can scroll down a little bit more and see what else we've got going on. I'm trying to find, where's the deployment at? There we go, kind: Deployment. So above that, we can see there's a service and another service. So each one of the components is going to have that. And then we're going to have a deployment for each one of our components. So we've got the admission controller. Yep, the admission controller, there's the pod name. It's going to run Kyverno v1.11.1 from GitHub Container Registry, with a bunch of different options that we're going to apply there. Okay, so this is quite an extensive file. We're looking at, yeah, let's go to the very bottom. We've got 45,878 lines of YAML. So there's a lot of things in here. If you're going to adopt this, I would definitely recommend getting more familiar with all the components, and we'll take a look at them once we install this beast into the cluster. Okay, so we're going to run their install, right? So we're going to do kubectl create now. And we can see all of those things that we just walked through.
There are all the service accounts, there are all the custom resources, a lot of cluster roles and a lot of RBAC stuff being created here, because this is going to need to do a lot of things in the cluster. So it's going to need quite a few roles and role bindings. We see a service that gets created for the components we saw in the YAML as well. And a couple of cron job things are going to run behind the scenes as our cleanup pieces, the things that are looking at things outside of the admission context. Okay, so now that we've done that, we can verify we got that new namespace here. So we've got our kyverno namespace. So I am going to change my namespace to use that by default for now. Okay, and inside of this namespace now, I do kubectl get pods. We can see that we've got our admission controller running, the background controller running, the cleanup controller running and the reports controller running, that's great. Let's just look at everything that's in here using the kubectl get all command. So we see the pods, we see the services that were created. This didn't create an ingress; these are all cluster IP things. So this isn't directly exposed to the internet, which is probably a good idea. We've got our deployments: the admission controller, the background controller, cleanup controller and the reports controller, the replicas that are controlling those, and then our cron jobs with the associated schedule. So this is going to run every... I'm really bad at cron scheduling. I always have to look it up and see what that is. Cron job schedule, Kubernetes. Oops, making my window a little bit bigger. Okay, so let's get started with cron jobs here. What's a cron job? So the schedule, or the pattern here, is minute, hour, day of month, month, day of the week. Okay, so we've got star slash 10. So this is going to be in the minutes. Let's see if there's an example with 15 here.
Yep, this kind of job can also be scheduled to run at a specific interval using the slash operator. For example, the following cron job will execute every 15 minutes. This one's going to run every 10 minutes for us. So this job is going to be running behind the scenes every 10 minutes and doing the cleanup of things for us. Okay, so cool. We've got that installed now. Let's go back to our quick start docs and see what we're going to do next. Okay, so in the validation guide, we're going to see how to use Kyverno as a validating piece. We want to ensure that the label team is present on every pod that gets created in the cluster. I did things like this in my previous job using OPA. It's a pretty common pattern, especially when you're running a multi-tenant cluster and you want to be able to do different things in different namespaces and automatically enforce some of those things as they're coming in, right? This is a pretty common use case. So it's really nice to see that that's here. We want to require that label here. So this is a little different than what we saw in the first example we looked at with adding the labels. Here we're going to do a validation. This is not going to mutate things for us. This is just going to validate that the thing that's coming in has what we want. So again, this is going to match any resource of kind pod. So any pod that comes in is going to need to have the metadata label team with some value there. And if it doesn't, it looks like you're able to specify a message here, right? So let's do this. We're going to drop this over into our terminal window. So we're going to kubectl create with this stuff as an in-place file. We run that, and there we go. We have our require-labels policy. So now if we go look at cluster policies, we have our require-labels piece. It's kind of cool. There's a validate action here.
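For reference, the quickstart policy we just applied looks more or less like this. I'm reproducing it from memory of the docs we're following, so check the kyverno.io quickstart for the canonical version; the "?*" pattern means any non-empty value:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce   # block non-compliant requests (vs. Audit)
  rules:
    - name: check-team
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "label 'team' is required"
        pattern:
          metadata:
            labels:
              team: "?*"             # any non-empty value passes
```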
So I assume this would allow us to turn this off, turn it back on, maybe just warn on some of those things. We'll take a look at that next. Hopefully that comes up in the guide if we jump over to that. But if we get the YAML for this, it's going to look exactly like what we just did. ClusterPolicy, require-labels. So there's our thing. So we can come back and edit this too, I think. One of the cool things that I see here is that we're going to see how many rules are inside of this that have been created. And I would guess that we're also going to see maybe some information about when this applies. So let's go on to the next step. So we're going to just deploy nginx into the cluster. So we're just going to YOLO this nginx image into the cluster and see what happens with that policy. So let's do that. So again, we're just going to do a kubectl create deployment, we call it nginx and the image is nginx. So there are no labels applied to this, right? This is going to be a very basic thing that's coming in. Kubernetes is going to default all the rest of the things and there will be no labels. So we would expect that Kyverno is going to reject that for us. So let's take a look. Nope, it did not do that. So you should see this error. So I would guess that my Kyverno, oh, you know what? I just created it inside of this namespace. Let's go back and do it in the default namespace and see if that makes a difference. kubectl delete deployment nginx, oops, nginx. Okay, so I think we probably also want to do this in the default namespace just to make it clear. So we're going to kubectl delete clusterpolicy, can't type today, require-labels. Okay, cool. So now we should be back to our starting state. Yep, that looks right. There's no deployment for that thing. And I think we should have no cluster policies. Okay, so now I'm going to switch back to the default namespace and try that out, and see if that works.
If not, we'll debug Kyverno and see what's happening with the admission controller piece. Okay, so back to here again. We're going to create this policy one more time in the default namespace. Okay, so now get cluster policies. Yep, there's our cluster policy. It's ready to go. And now if we do this, hopefully we see that error. Yeah, okay, cool. So I think what I saw in this down here was that it is configured to exclude the kube-system and kyverno namespaces. So I made a mistake by switching to that namespace and then doing everything inside of it, because Kyverno is ignoring that namespace. But that's great, because that means you're able to configure Kyverno to ignore certain things. You probably want to make it not do things to kube-system because that's required for your cluster to run. You probably also don't want it to make changes to Kyverno itself, because it's going to be in your admission path, and that's probably a bad idea. Any other namespaces that you don't want to apply policies to, I think you'd be able to exclude them. We'll look at the config for that in just a second. Here we see exactly what the docs told us was going to happen, right? We were unable to create that deployment. So if we do a kubectl get deployments, that thing did not get created. So if we go back to our diagram real quick: our API request came in. Definitely I'm authorized to create that; we saw that in the first go-through where I tried to do it in the kyverno namespace. The policy we're doing right now is a validation, not a mutation. So it's going to skip this piece. It's going to go on to schema validation and make sure everything in the request is valid. It was a valid request. Kubernetes built the deployment for us with that command. And then it hit that validating admission piece, went through, and the engine said, hey, you are in conflict with that policy. So we were not able to create that nginx deployment. And it was because of require-labels. So that's cool.
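On that exclusion point: besides the install-time configuration (which, as I understand it, involves the kyverno ConfigMap's resourceFilters and the namespace selectors on the generated webhooks, something to verify in the docs), you can also exclude namespaces per policy with an exclude block. A hedged sketch of what that might look like on a rule like ours:

```yaml
spec:
  rules:
    - name: check-team
      match:
        any:
          - resources:
              kinds:
                - Pod
      exclude:                 # skip this rule for these namespaces
        any:
          - resources:
              namespaces:
                - kube-system
                - kyverno
      validate:
        message: "label 'team' is required"
        pattern:
          metadata:
            labels:
              team: "?*"
```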
It's showing us exactly which rule is responsible for that, and it's also showing where the validation failed. So the label team is required, and you would look inside spec.template.metadata.labels to find the team label. Okay. I would guess now they're going to have us create it again with that label. Yep. So it's really easy to do this, right? We're going to just specify --labels on the command. Okay, so here we go. We're going to do a kubectl run nginx, a slightly different command. We're doing run here instead of create deployment, but this is going to result in kind of the same thing. And specifically what's going to happen is the labels are going to contain team=backend. Okay. So now our pod was created, and if we do kubectl get pods, there's nginx, it's running. kubectl get pod nginx, and let's just spit it out as YAML. If we go up to the top here, we'll see, yep, labels contains team: backend. So any value of team would work there. We could run another one here. Let's delete the pod. Okay, we're going to run this again with just another value. We'll call it clbo, for crash loop backoff. And that went through just fine. I'm kind of curious if I could modify that policy and make it specifically require a value for that label. So let's delete that. Let's delete the cluster policy and recreate it from up here. All right, so let's grab this again, and I want to change this wildcard down here to clbo, oops. Okay, we're gonna hit enter now. Okay, so we've created that policy again, and up here in our rule we've got team, so I think what will happen here is that the team has to be clbo. So now let's try to recreate that pod again, and let's use that original one where we're gonna set this to backend. So my hope, if I'm understanding this correctly, is that this will fail that validation check. And sure enough, it did.
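The change I made was basically just swapping the wildcard pattern for a literal value. A sketch of the edited validate block:

```yaml
    validate:
      message: "label 'team' is required"
      pattern:
        metadata:
          labels:
            # Was "?*" (any non-empty value); now only team=clbo passes.
            team: "clbo"
```

So team=backend now fails validation, while team=clbo is admitted.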
I didn't change the error message here; it would need to be a different message to make it clear, but it did fail like we wanted. So I think now if we do clbo here, cool, it created it. So you're able to make basic changes like that to the policies, and that's really great. Okay, so let's go back to our walkthrough and see what we're gonna do now. So this was compliant, we did that, made a little bit of a change and made that happen too. So now the pod exists; wait a few seconds longer and see what other actions it took. So we can run kubectl get policyreport. Let's see what that looks like. Okay, so we've got a policy report: nginx passed. I think that was the most recent one. If we do that again, let's change this to nginx2. Wait just a second, maybe it won't. Okay, let's see. If you were to describe the policy report above, you'd see more information about the policy and the resource. So let's see if it came through now. Nope, it didn't; that's kind of interesting. If anybody has used Kyverno and has experience with those policy reports, feel free to chime in in the chat. I think it'd be interesting to know a little bit more about what gets created there, if you can explain anything that goes along with that. Okay, we're gonna describe the policy report. I think maybe I saw in the docs that it applies to successful admissions. So maybe the failure one doesn't show up because it wasn't successful, because it was blocked. I would maybe think that that would show up as error or fail, but maybe this is because it's happening behind the scenes and not actually on the rule itself. Before we go on, I wanna do one more thing here: kubectl get clusterpolicies. Okay, I want to get cluster policy require-labels. Let's get the YAML version. Okay, nothing shows up in the conditions or the status, I think, for what's happening here.
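For anyone curious, a PolicyReport is just another Kubernetes resource from the policy working group's API. This is a rough from-memory sketch of its shape; the naming scheme and exact fields vary by Kyverno version:

```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: polr-ns-default   # hypothetical name; naming varies by version
  namespace: default
summary:
  pass: 1
  fail: 0
  warn: 0
  error: 0
  skip: 0
results:
- policy: require-labels
  rule: check-team
  result: pass
  resources:
  - apiVersion: v1
    kind: Pod
    name: nginx
    namespace: default
```

That summary/results split would explain why a blocked admission might not appear: the resource never existed, so there's nothing for a report entry to point at.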
It'd be kind of cool to see everything that's happened for this rule, but that would get messy, with a lot of updates to the objects, so it makes sense. Okay, so we have successfully validated our first object, so we can delete that policy now. Let's do that. Looks like next we're gonna try out mutation; really excited to see that. Okay, so same kind of policy here. In this case, we are going to add a label. So similar to what we saw in that very first example, here we're going to add the label team with the value bravo. So this will add the label team to any pod and give it the value bravo, but only if a pod doesn't have this label assigned. Oh, so that's cool. So you could probably replace this piece with just team and it would always add that, but this +() notation is allowing us to define what should happen: it's an add-if-not-present kind of logic that's baked into that. So that's kind of cool. Let's do that. Okay, so there is our policy created, and now we will run Redis. Oh, that's exciting. So I'm gonna run Redis. Okay, our pod is created. Hello, Daniel in the chat. Okay, so we've got our pod redis now. So if we do kubectl describe pod redis, yep, it's created. That's cool. And then, oh, hey, labels: we see team=bravo. So that's the mutating policy at play for us. Yeah, we could do this command too, I guess, but that's what I wanted to see. Okay, so we've got team=bravo. So this next example looks like it's gonna show us how that conditional logic works. So we've got that +(team) here. So if we specify a team already, it won't modify that for us. So let's do that. Back to the other window. Okay, so we're gonna do, oops, sorry, clear. We're gonna do kubectl run newredis, image redis, and then apply that label team=alpha. Okay, so that's created. And if we were to do the same command here, there's our newredis with team=alpha. So it didn't apply team=bravo to it for us. That's super cool.
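The mutation policy we just applied looks roughly like this; a sketch of the quick-start add-labels example, so verify the details against the Kyverno docs:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
spec:
  rules:
  - name: add-team
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            # +() is the "add" anchor: set team only if it isn't already present,
            # which is why team=alpha on newredis was left untouched
            +(team): bravo
```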
That's really nice functionality to be able to encode. I don't think I would know what that meant without reading that piece of documentation up here, but I think that's a pretty small hill to have to climb in terms of understanding new syntax. Okay, so let's delete that one and move on to the next one. I think today we're probably only gonna get through this basic set of what functionality is there. I do wanna go look at the image stuff in just a second. I think we will do another episode of this where we'll dive into a little bit more depth; there's a lot of content to get through in this project that I really wanna see with everybody. Okay, so we've deleted that policy. So now we have no policies; we're back to our clean, unrestricted state. Okay, so now we're going to create a Kubernetes secret that will simulate a real image pull secret. So let's do that. Okay, so we've got our image pull secret created. Next we are going to create a generate policy. So this looks a lot like the other policy resources, matching on the resource name and kind. We're gonna sync image pull secrets. So this sounds pretty cool: it has the ability to synchronize the resources that it's generated. That's really cool. So it's gonna watch those things and figure out what to do. We are gonna use Kyverno to generate an image pull secret in a new namespace. That's really cool. So you wanna apply defaults to a new namespace, especially in cases where you're running a multi-tenant cluster as a service for folks; this would be super, super useful to automatically populate it with certain pieces of information that need to go along with that. Okay, so let's do this real quick. Okay, so we are going to create this new cluster policy called sync-secrets. It's gonna apply to namespaces, and it's going to generate a new secret named regcred in whatever the request object is.
So we're gonna make a new namespace and call it crashloopbackoff in a second; that's what would get populated here, and the action it's gonna take is cloning regcred from the default namespace. Okay, so let's do that. Okay, the secret's created, the policy's created. So again, kubectl get clusterpolicies. Cool, there's sync-secrets. So now let's do kubectl create namespace crashloopbackoff. Okay, so our namespace is created, good to go. We can do kubectl get namespaces and hey, we see our new crashloopbackoff namespace. So now, if we do kubectl get secrets in crashloopbackoff, there is the regcred one that got created by Kyverno. That's really cool. So if I do a kubectl get secret regcred in crashloopbackoff, and let's get it as YAML so we can read it, we see a bunch of labels that were created on this from Kyverno, right? So here's the secret piece, that's what we created before, but this is all really, really cool. So this is managed by Kyverno. We see some information like where it came from: it came from the default namespace, the source kind was a Secret, the version of this thing. So if we modified that Docker config secret, I think it would probably regenerate this. I don't think I've seen that in Gatekeeper or in OPA; I haven't looked into the documentation super deeply, but that's a really cool piece of functionality. At least in my opinion, if I was building a platform for my internal developers to use, that would be a really, really powerful way for me to control and help maintain the contents of what's gonna be inside of that. Okay, so we went through a little bit of the introduction piece. I would definitely recommend taking a look at this. If you're playing along at home, or in the office if you're working from an office, definitely go through this and give it a try. I think this is a really good taster of the functionality that's there.
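The generate policy from this part of the quick start looks roughly like this sketch; again, check the Kyverno docs for the exact current shape:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-secrets
spec:
  rules:
  - name: sync-image-pull-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: v1
      kind: Secret
      name: regcred
      # resolves to the name of the namespace being created
      namespace: "{{request.object.metadata.name}}"
      # keep the generated copy in sync if the source secret changes
      synchronize: true
      clone:
        namespace: default
        name: regcred
```

The synchronize: true flag is the piece that would propagate a change to the source secret into all the cloned copies.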
I am interested in a few other things, though. So let's go to the documentation. I'm gonna jump away from the introduction for a second, and I wanna see if we can find the image validation stuff. I don't know if that's gonna be under here, under security. If anybody has used Kyverno and knows the docs, I'm really, really interested. Okay, let's go back here: the verify image piece. So maybe it'd be under sample policies. You can see them there. Yeah, okay, so let's look at the sample policies for verify image. So there are a few different things that I see here, right off the top of my head, that look really cool. So we have one here for requiring image vulnerability scans. Let's open that up real quick and take a look at that. Yep, so I mentioned Cosign earlier: verify image with Cosign, verify that there is a CycloneDX SBOM for that, a software bill of materials. A lot of places are doing that now to comply with the executive order from the US government, but also just in general, to have a little bit more awareness about what is in the software they're consuming; they're starting to demand those things, and also generating them themselves. Also verify SLSA provenance. So this is a really cool standard for describing how a build happens, including metadata about the build, so you have more visibility, more transparency into that build process. So that's cool. So let's take a look at those. I wanna see this image vulnerability scan piece first. So we've got a policy, you see that here, right? And it looks like it's gonna match pods, and it's going to verify images in a repo. Okay, so I can specify multiple repos there. And we can look for different attestations for those vulnerabilities. This is gonna use Sigstore to attach the attestations to the image, so they're kind of like a connected object: not in the image itself, but another piece of metadata that's associated with it. So that's really cool.
This policy ensures that images signed with Cosign's keyless signing during a GitHub Actions workflow have attested vulnerability scans not older than a week. So this is checking to make sure, oh, so that's what this predicate up here is. It's checking to make sure that if the image has a keyless signature from my repo, and the issuer is GitHub, that's how signatures are created with Cosign and the whole Sigstore flow. It'll validate that against Rekor, which is their transparency log; it's just gonna say, at this time, this thing was signed by that workflow, to validate it. And then it's going to check the conditions that it has a scan finished date less than or equal to 168 hours ago. That's pretty powerful. I don't have time to dig into this one right now, but I am definitely gonna include that for round two of this, because that's a really cool capability. You wanna make sure that you're deploying things into your cluster that have gone through some vetting, have gone through some checks to make sure that they don't have known vulnerabilities inside of them. That's a really nice feature to have at your cluster level. If you're not running any of this today, that's a really cool thing to take a look at. I think this is another one we're starting to see a lot more. Kubernetes images, for example, are signed with Cosign. We do that as part of the release process today, and there are signatures for those images. So you could write a policy in your cluster that covers all of the Kubernetes pieces. You're probably not gonna do that, because we're excluding kube-system, where some of those are gonna run. But think about the same kind of policy for images coming from your internal dev teams, or from EKS; maybe they're signing them that way as well. And what this is gonna do is allow us to do a couple of things that I see right off the bat here too. Is this gonna mutate the digest?
So if you make a request to your cluster to deploy something with, say, nginx v1.30.0, if that's the tag that exists today, I think this is gonna mutate it and set it to the digest, because that's an immutable reference; the v1.30.0 tag can change over time. So when this is coming through, it's gonna attest that this thing was signed, verify images with this public key, which is gonna be your signature piece. I think you could also do this with a keyless route. So we're gonna dive into these things more in the next episode, but this is really cool. So from the Kyverno repo in GitHub Container Registry, anything that's test-verify-image, we wanna apply that verification to. So you could do this and not require it for all things. That's a pretty powerful ability. Yeah, I'm really excited to try this out next time. But let's take a look at what other things are in this policy category that we might wanna look at next time. Anybody in the chat, in the couple minutes we've got left, if there's anything in particular that you wanna look at, feel free to give me a shout. PSP migration is a cool one to look at. Let's take a look at that. So PSP migration is a thing that, if you have been running Kubernetes clusters for a while, you've probably dealt with. PSP stands for Pod Security Policy, and we see that here. Pod Security Policy was deprecated and has been replaced with the new Pod Security Admission standard. And this looks like it would handle some of that migration, some of that conversion for you. And this one looks like it runs in audit mode by default instead of enforce. So that's another thing we didn't really get to take a look at, but if we go back to our, let's create that policy again and see what it shows us. Okay, here's our validation policy back here. Okay, so a label is required. So if we do kubectl get clusterpolicies again, we see that that sync-secrets one is running in audit mode.
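A minimal verifyImages rule looks something like this sketch. The repository path and public key below are placeholders, not the real policy from the docs, and exact field names may vary by Kyverno version:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image
spec:
  validationFailureAction: Enforce
  rules:
  - name: verify-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    # scope verification to only the images you care about,
    # rather than requiring signatures for everything in the cluster
    - imageReferences:
      - "ghcr.io/myorg/myapp*"   # hypothetical repo path
      attestors:
      - entries:
        - keys:
            # placeholder; use the public key matching your Cosign signing key
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              MFkw...
              -----END PUBLIC KEY-----
```

Matched images also get their tags resolved to digests by default, which is the mutation behavior discussed above.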
So it's not gonna enforce anything on admission, but the require-labels one is set to enforce. So that's why we saw that one blocking things. We could probably change it to audit instead. I think that's really cool functionality: you could run these things for a while, gather audit information about who is and isn't in compliance with the things you rolled out, and then enforce it without having to break people right away. So that's a really cool capability here as well. I see a couple of familiar things, like Velero. So we've got some things for backing up volumes. That's kind of cool. Let's take a look at the AWS policies: require encryption with AWS load balancers. That's a cool one. Best practices, let's see. Oh, that's nice: check for deprecated APIs. Again, if you've been running Kubernetes clusters for any amount of time, you've probably dealt with deprecated APIs. It's not as bad as it was; upstream, we have done some things like making beta APIs not enabled by default, so there are fewer foot guns waiting to happen. But this is a really nice one to have, so that you can apply different rule sets to see what's going to happen, and run this probably in audit or block mode. I think you could phase it in: have it run in audit mode, get an idea of what things are going to be removed, and then have it run in block mode so that it can kick some of those things out. That's pretty cool. Down here, in the two minutes we've got left, there's a list of the different types of objects that you can create rules against, and whether there are existing policies for them. There's a really extensive set of things here.
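Switching a validation policy from blocking to reporting is a single field; a sketch of the relevant fragment:

```yaml
spec:
  # Audit: record violations in policy reports but admit the resource anyway.
  # Enforce: reject non-compliant requests at admission time.
  validationFailureAction: Audit
```

That's what makes the phased rollout possible: ship the policy in Audit, review the reports, then flip the field to Enforce.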
So if you're going to look at Kyverno, I would definitely recommend coming to look at these sample policies, or this policy library, I guess, because there are just so many things inside of here that will probably give you some capabilities to go with, right? Adding a time-to-live to jobs, or restricting jobs, that's a really, really useful thing to do. Applying limit ranges, that's another really good one. Enforcing quota and things like that. That's a really great use case for mutating, right? Say you're trying to better manage the resources in your cluster and folks are not applying limits and requests to their workloads. You could totally use an admission controller, a mutating policy like this, to go in, and if there's nothing there, apply a really small basic limit, and then let your users complain that it's not enough and make them go and deal with that on their own. Okay, so just to wrap things up: what we saw today was how to install Kyverno in a really basic way. I don't think that's the right way from a long-term perspective. Under installation, there's a bunch more documentation that you can see here on how to install it, different installation methods. There's a Helm chart if you wanna use Helm. If you are using Kustomize, you can grab the YAML like we did and apply whatever changes you want to it. That's really, really cool. We didn't take a look at testing things or actually writing our own policies, so maybe next time we'll do that. But we did see some basics on what the policies look like. We saw some validation and some mutating policies. So I think we got the basics here; we kind of know what Kyverno is doing, what it's all about.
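One last sketch before we wrap: the default-resources mutation idea mentioned above would look roughly like this, based on the add-default-resources pattern in the Kyverno docs. Treat the names and values here as assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-resources
spec:
  rules:
  - name: add-default-requests
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          # (name): "*" anchors the patch to every container in the pod
          - (name): "*"
            resources:
              requests:
                # +() only adds these if the container didn't set its own
                +(memory): "100Mi"
                +(cpu): "100m"
```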
So join me next time, when we're gonna dive a little more into how to write our own policies, and deep dive into some of that image verification and supply chain security stuff, because it's a little bit close to my heart right now. Thanks for joining me today. I'm happy that you were able to join us, and thanks for learning about Kyverno with me. See you next time.