Hi, my name is Chip Zoller. I'm a technical product manager at Nirmata, the creators of Kyverno, and I'm also a Kyverno maintainer. Today in this webinar, I wanna talk a little bit about automation as policy for platform teams with Kyverno. So a brief agenda here. First of all, let's cover an overview of Kyverno very quickly. We'll talk about what it is, for those that may not have heard of the project, and what does it do? And then use cases for policy management, of which Kyverno has many, and look at Kyverno across the life cycle. And then we'll actually dive into some of these specific use cases for platform automation teams by focusing on four such use cases. First, we'll look at copying and syncing of config maps. We'll look at refreshing environment variables in pods. We will clean up bare pods that might be left in a cluster, and we'll also show how to scale deployments to zero based upon an event. So for a brief overview of Kyverno, Kyverno is an admission controller that's purpose-built for Kubernetes. It is not a general-purpose admission controller. As a result of it being purpose-built for Kubernetes, all of the policies and resources are written as standard YAML, which means there's no programming language required, either knowledge of a programming language or use of a programming language, anywhere in the process of implementing or reasoning about a policy. Kyverno is also the most popular (by stars) policy engine for Kubernetes. And it boasts many capabilities, several of which aren't found in any other policy engine. For example, validation, which is where most policy engines begin and end, is your quintessential yes-or-no response. Here's a resource, here's a policy which matches it, should it be allowed into the cluster, yes or no? That's validation. Kyverno does that very simply and very easily. Mutation is the ability to change a resource. 
The API server sends a resource to an admission controller and the expectation is that it's either going to be modified and sent back or it's gonna be allowed to persist as is. Kyverno has very rich and mature mutation capability that's been there basically out of the gate. Generation is a capability that's unique to Kyverno, which is an ability for Kyverno to create all-new resources in the cluster based upon a policy that you define. Kyverno can also verify images, both image signatures and attestations on OCI images in a registry. This is great for things like software supply chain security, as you can have Kyverno validate these things before they're allowed to run in a cluster. And also, and this is a new feature as of Kyverno 1.9, which was recently released: cleanup policies. Kyverno now has the ability to go into the cluster and remove resources on a scheduled basis based upon a very granular definition that you install by creating a policy in the cluster. So some use cases for Kyverno, which are many, stretch from the command line, where Kyverno has a separate CLI utility, which can be useful in CI/CD pipelines, all the way through to in the cluster. Several of these are in many different categories. So for example, security: this is your pod security, making sure that pods do not run as root, making sure that they don't run privileged. Things like the Pod Security Standards, Kyverno can very well and very easily enforce. Granular RBAC: being able to do things like making sure that certain users cannot delete resources, maybe with a certain label, being able to use labels and define labels on different types of workloads to make sure that they're properly identified. In the operations category, which we'll look at several of these today, this is where we're kind of focused on, even things like secure self-service provisioning of clusters, Kyverno can really handle a lot of this when it comes to cluster registration. 
Label and annotation management, making sure that labels and annotations are there for a variety of purposes, making sure that names follow a given convention, maybe that's a regex or something that's more simple. Custom CA management, which we'll look at a little bit today, and even things like time-bound policies, which we won't be covering today, but we'll probably do in a future session: the ability for policies to be activated and deactivated based upon a schedule. And from a FinOps perspective, which is becoming more and more prevalent, making sure that the decisions that you make are driven by financial reasons: things like having quotas, making sure that requests and limits are there, making sure that labels are there for cost purposes, scaling, QoS management and more can all be done and driven through Kyverno policies, and all written as standard YAML without having to go to a custom programming language. But if you look at this across the cloud native lifecycle, there's really something through all stages. So in the commit process, you can use the Kyverno CLI to validate that the changes in the manifests that you're checking in, which may ultimately be deployed by a GitOps tool, are valid and correct long before they ever hit a cluster. You can sign your images in those workflows and then have Kyverno validate them again before they ever hit a cluster. And then in the deployment and running phase, obviously this is where an admission controller like Kyverno shines, being able to validate these things at time of admission. But also Kyverno has a background scan capability where it will periodically scan the resources in the cluster and generate a policy report if anything has deviated from compliance based upon the policies that are installed. So a lot of capabilities that it has throughout this cloud native lifecycle. 
And today we'll be looking at our platform use cases, which really fall sort of at the end of the spectrum, but this really is prevalent across all of those phases. So the first one, let's look at here, is copying and syncing of config maps. So here's the problem. Config maps, like pods, are namespaced resources. For a pod to use a config map, it must be co-located in the same namespace. The challenge typically is you have a lot of namespaces and you may need to use one config map across a bunch of different namespaces. Now you could define that config map either multiple times in your GitOps tool of choice, or you could use other forms of automation or imperative declaration to ensure that you remember all of your namespaces and lay that down. But what Kyverno can do for us here with its generation capability is allow you to define a config map in a central namespace, perhaps it's called platform, as we'll show in this demo, and be able to drive that config map to both existing and new namespaces. But in addition to that, making sure that those config maps can all be kept in sync. This is great for platform teams because now it allows you to manage resources in just a central namespace and define the behavior that you want from an automation perspective as policy, as code that could be stored and deployed alongside all of your other resources. And so what we'll show here is that same type of paradigm. We'll have a namespace called platform, in which we expect to define many resources that we need to consume across the cluster. And one of these will be a config map. And we want Kyverno, once we install a policy, to lay that config map down across existing namespaces in the cluster. And you can see in this diagram that that's represented by a very creative namespace name called existing, but also new namespaces that we expect to create after that point in time. We want those to get the config map as well. 
So in addition to a brownfield, this is also useful in a totally greenfield environment, and a combination of both, which is probably what you're going to fall into. So let's flip over and see this in action. So I've got a standard config map here and it's called org CA. And this config map, as the key denotes, is a CA certificate. Now this is a certificate that's from my lab environment where I have an enterprise root certificate authority. And you may be doing something very similar, where you might have a certificate that represents trust across your enterprise environment and you need that certificate to be consumed by a bunch of different pods in a bunch of different locations, maybe even across clusters. Although sure, you could define that, and perhaps you are doing that to a certain extent by building it into your container images, maybe you need to decouple that for one reason or another. And config maps are commonly used for storing certificates. So that's what we'll do here. And we're going to put this certificate in a namespace called platform. And now what we want is to just be able to manage this CA, this config map, in our central namespace and have everything else be deferred to Kyverno as a policy. So in our policy, and this is a standard Kyverno policy and we'll just walk through it very quickly: Kyverno has the ability to write a policy that applies across the entire cluster very simply. And in this case, this is a Kyverno cluster policy, which means it's doing just that. It's applying across the entire cluster. And we are going to ask Kyverno to generate a resource for us. And this is the generate type of rule and we're going to generate for existing namespaces. And we're matching on namespaces. Now, once we match on any namespace, we want Kyverno to generate this resource for us. It's gonna be a config map, also named org CA, and the namespace is going to be whatever namespace it matches on. 
And it's gonna clone from an existing resource that's out there. Another variant of this might be, rather than defining an existing resource that's out in your cluster like we're doing here, this could be defined inline in the policy with what's called a data declaration. Now, I'm not showing that here, but that is another variant where, if you didn't want to define this in a platform namespace, you could define everything in the policy and Kyverno would work the same way. But we're also telling it to synchronize, which means that should any changes happen to the source resource that's here, then Kyverno will respond and synchronize those changes down to every place that it has generated that config map. So what we wanna have happen is, when we create this policy, existing namespaces, because we're matching on namespaces, get this config map instantly, and should any new namespaces be created after this point in time, those new namespaces will also get that. So let's just try this out. First thing we need to do is create the config map. So that's what I'll do, and we've created the config map in the platform namespace. And now we will create this cluster policy, and we have created the cluster policy. So now, what I expect to see, and I'll show my namespaces, and I've got quite a few of them here, but let's go into the existing namespace and let's see if we got a config map. And you can see here, we did get a config map, and if we were to inspect it, we would find that it's identical to the one that's in the platform namespace. Now that's great, but the last part that's missing here is we need to be able to create new namespaces, because this is a production cluster and we're gonna continue to operate this and have Kyverno fire and manage that certificate, that config map, for us. So let's create a new namespace. 
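For reference, a generate policy along the lines of what's being described can be sketched roughly like this. It's a sketch rather than the exact demo manifest (resource names are illustrative), and note that the field used here to cover existing namespaces, `generateExistingOnPolicyUpdate`, was renamed in Kyverno releases after 1.9:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-org-ca
spec:
  # In Kyverno 1.9 this triggers generation for existing namespaces;
  # later releases rename it to generateExisting.
  generateExistingOnPolicyUpdate: true
  rules:
    - name: sync-org-ca
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: v1
        kind: ConfigMap
        # Generate a config map with the same name into whatever
        # namespace was matched
        name: org-ca
        namespace: "{{request.object.metadata.name}}"
        # Keep the downstream copies in sync with the source
        synchronize: true
        clone:
          namespace: platform
          name: org-ca
```

The inline variant mentioned above would replace the `clone` stanza with a `data` stanza containing the config map contents directly in the policy.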
All right, so we just created a namespace called new, and we will get config maps in this namespace, and we should see that Kyverno has detected that new namespace and has responded by cloning this config map into this new namespace. And indeed we see that here. We see the org CA has been generated into this new namespace. So let's go ahead and clean up here. All right, so that's the copying and syncing of config maps, and we'll flip back over. Now, this works with any resource. It doesn't necessarily have to be with a config map, but commonly what we see that platform teams can really take advantage of are things like config maps and secrets and other resources like that, which are namespaced resources that should be present in other namespaces across the cluster. And by the way, even though I didn't do it here, we could certainly narrow the scope down and say, perhaps only namespaces with a specific label should get this, or only namespaces that meet these other criteria. There's a lot of flexibility there, but you kind of get the point. So that's the first use case, copying and syncing of config maps. Let's move on here. Let's talk about the problem of refreshing environment variables in pods. So here's the problem in Kubernetes. You have a pod that consumes something like a config map or a secret in an environment variable. And later you need to update whatever that source is. Could be a config map, could be a secret, doesn't matter. In this diagram, obviously, we're showing a secret. Now, normally, if you do that and you're consuming a secret as an environment variable in a pod, after you update that secret, the pods have no knowledge of the update that you just made. There's no API that goes and refreshes that. If you did this in a volume, that would be another story, but very commonly these things need to be consumed in an environment variable. Yet changing that source does not affect the downstream pods. They don't know anything about it. 
This is where platform teams really can use Kyverno to make their lives easier by installing automation that's defined as policy, without having to write any code, and maybe even eliminate some other tools in the process, maybe some of those being bash scripts or maybe it's even handwork that's done. So in any case, what we wanna see here is we have a secret that's being consumed in a deployment and obviously that deployment is spawning pods. Now I'm not showing a replica set here, we understand that that is an intermediary controller, but the deployment is responsible for pods and those pods are getting a secret. Now Kyverno is going to watch that specific secret for any changes that may occur. If there is a change that's detected, perhaps by a user, perhaps by a process, it makes no difference, Kyverno is going to see that. And in this case, it's going to be able to respond by finding the deployments that consume that secret as an environment variable. And it's going to annotate the deployment within the pod template area. And the effect that this is going to have is it's going to create a new rollout, which means that new pods are going to get spawned. And as a result of those new pods getting spawned, they will be able to pick up the changes that were made in the environment variable from the secret. And as that happens, once the new pods are up and running, it's going to tear down the old pods. So the new pods with which we're left will have the new value from the modifications made to the secret. So let's flip over and show that. Let's go into our second one here. Now the first thing that we need to do is grant Kyverno some additional privileges. Kyverno is very security conscious and follows the principle of least privilege. One of the things that we need to be able to do here is to modify or update deployments, because we need to be able to annotate them. Kyverno makes this very easy because it uses role aggregation. 
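As a rough sketch, an additional aggregated ClusterRole granting that update permission might look like the following. The aggregation labels differ between Kyverno versions and install methods, so treat the label below as a placeholder and check which labels your installed Kyverno ClusterRoles actually aggregate on:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kyverno:update-deployments
  labels:
    # Placeholder: use the aggregation label selected by your installed
    # Kyverno ClusterRole (inspect it with kubectl to confirm).
    app: kyverno
rules:
  - apiGroups:
      - apps
    resources:
      - deployments
    verbs:
      - get
      - list
      - update
      - patch
```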
So rather than going and changing cluster roles, which may be a pain, because if you're deploying Kyverno as many folks are with a GitOps tool, that involves making changes in Git, which may not necessarily be desirable, at least changing the existing cluster roles. We can introduce a new cluster role here that has the necessary labels, which aggregate to the cluster role that's responsible and can get picked up by the Kyverno service account. So in this case, I'll just create an additional cluster role that will get aggregated, and it will grant additional privileges that allow Kyverno to update deployments. So I've created that. And now let's take a look at our original secret. So I'm gonna create a secret here, and this is an API token, you can think of it as. And here's the value that's in the clear up here. So 0628 is the value encoded as base64, and this is gonna be called blue secret, and it's gonna go into our existing namespace. Now, I'm gonna label this with kyverno.io/watch equals true. Now, this could be any label that you want, and in fact, you don't necessarily need a label, but for this demo, we wanna make this more dynamic in nature rather than focusing on a specific secret by name. So this will allow Kyverno to watch it a little more easily without having to define or declare a specific resource. So in any case, we're gonna create this secret as the first step. And now that we've created blue secret, we're gonna create a deployment. And now this deployment, as you might have guessed, is going to consume that token in an environment variable. And so it's gonna consume it in an environment variable named token, and it's gonna fetch it from that blue secret that we just created, in the key called token. And this is just a standard busybox pod. It's gonna sit out there and sleep so that we can just make sure that the environment variable is consumed properly. So we'll go ahead and deploy this. 
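The deployment just described, consuming the token from the labeled secret as an environment variable, might be sketched like this (names here match the narration but are illustrative, not the exact demo manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue-busybox
  namespace: existing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blue-busybox
  template:
    metadata:
      labels:
        app: blue-busybox
    spec:
      containers:
        - name: busybox
          image: busybox
          # Sleep so we can exec in and verify the environment variable
          command: ["sleep", "3600"]
          env:
            - name: TOKEN
              valueFrom:
                secretKeyRef:
                  name: blue-secret  # the secret labeled kyverno.io/watch=true
                  key: token
```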
So we've created the blue busybox deployment. Let's go ahead and get pods for this. And let's just check and make sure that the environment variable that it got is as expected. So you can see here, it's picking up our token environment variable, and we see the value 0628, which is what I just showed a moment ago, and also our other endpoint environment variable. So that's all well and good. Now we wanna get Kyverno in the picture, because this is where it can really help us in our jobs. So this is what the policy is gonna look like. Again, this is a cluster policy, which means that Kyverno is going to consider this across the entire cluster. And we're telling it to watch on secrets that have this kyverno.io/watch label. And again, this could be any label that you want. If you didn't wanna have a label, you could certainly watch by name, but we wanna watch by a label to make this a little bit more dynamic in nature, because we may have multiple secrets that are consumed across multiple deployments in multiple namespaces. So we don't want to tie ourselves down. We're gonna watch specifically for updates. We're not interested in creations of secrets. We're interested in when a secret gets updated, because that's when Kyverno needs to snap into action. One of the abilities that Kyverno has: several admission controllers can do things like mutation, but what Kyverno can additionally do is mutate existing resources. And that's what we're defining here. In the target section, we're saying any existing deployment we're interested in, in any namespace. And now the magic is actually happening here, where Kyverno is going to check the name of the secret that is mounted or consumed by this deployment. And if it's the same one that's consumed by it, and that's what this tag does here, then it is going to write an annotation. And you notice that this is in the pod template area. 
It's going to write an annotation that I've just called corp.org/random with an eight-character random string. Kyverno has the ability to use a system called JMESPath. And within the JMESPath system that Kyverno consumes, there are many filters that we have written and provided specifically for Kyverno's use that aren't found in upstream JMESPath. And one of these allows you to very simply generate a random string based upon a composition of your design. And so you can see here with this regex, I'm just saying give me a string that's eight characters long, composed of numbers and lowercase letters. And it can be done just as simply as that. And we're gonna put the value of that in this field called corp.org/random. So this is an annotation. And the effect that this is going to have is it's going to cause the deployment controller to see that change and understand that the actual state now has diverged from the desired state. And in response, it's going to create a new rollout, which is going to give us new pods, and those new pods should be able to fetch this updated secret. So let's go ahead and create this cluster policy in our cluster. And now that we've done that, now we want to change the secret. So we've already got pods that are out there running. Now we need to rotate our API token. And you can see above here, I've already generated the new base64 for this, and the new value of our API token is going to end in 5e2. So what we expect to have happen, and this is the same representation as the original secret, it's just we're modifying the value here: Kyverno should be able to watch this. And since it has the same label, Kyverno is gonna see it, and it's going to find matching deployments that are in that namespace. And it's going to perform that annotation that I just mentioned a moment ago. So let's actually see what happens here. And now that we've done that, let's go back to our existing namespace and do a watch on pods. 
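Putting those pieces together, a mutate-existing policy for this scenario can be sketched roughly as follows. This is modeled on the refresh-environment-variable sample in the Kyverno policy library, with the label and annotation key from the narration; treat it as a sketch rather than the exact demo manifest:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: refresh-env-var-in-pods
spec:
  rules:
    - name: annotate-consuming-deployments
      match:
        any:
          - resources:
              kinds:
                - Secret
              selector:
                matchLabels:
                  kyverno.io/watch: "true"
      preconditions:
        all:
          # Only react to updates to the secret, not creations
          - key: "{{ request.operation }}"
            operator: Equals
            value: UPDATE
      mutate:
        # Mutate existing deployments, not incoming admission requests
        targets:
          - apiVersion: apps/v1
            kind: Deployment
            namespace: "{{ request.namespace }}"
        patchStrategicMerge:
          spec:
            template:
              metadata:
                annotations:
                  # A fresh 8-character random string forces a new rollout
                  corp.org/random: "{{ random('[0-9a-z]{8}') }}"
              spec:
                containers:
                  - env:
                      - valueFrom:
                          secretKeyRef:
                            # Anchor: only patch deployments that actually
                            # consume the updated secret by name
                            <(name): "{{ request.object.metadata.name }}"
```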
And okay, as we see here, we've got a new pod that, as of four seconds ago, is being spawned, and this one is going into a terminating state. So this is the new rollout that's taken place, and it's going to tear down the old one. So we should be able to get the environment variables in this pod, and hopefully, with luck, we will see that Kyverno has done its job and the value of the token environment variable has been updated to reflect the changes. And as you can see here, in fact, that has occurred. The new value of the token environment variable ends in 5e2, which corresponds to our new value. So you can see in this case, there are some really nifty capabilities that you can use as a cluster operator, or if you're on a platform team already, this can really save you time and help alleviate some of the challenges that you might be faced with today. Or if that's not a challenge, this can give you some new ability that you didn't have today. In any case, this is an illustration of Kyverno's mutation capability, but specifically its ability to mutate existing resources, not just resources that come in on the admission chain. Okay, so let's clean up here and we will move on to the next one. All right, so that's refreshing environment variables in pods. Hopefully you can kind of see this as a, hey, that's kind of a cool moment. The next one here: cleaning up bare pods. This is a new capability that we released in Kyverno 1.9, which gives Kyverno the ability to delete resources, to clean them up, based upon another Kyverno policy that you install. So Kyverno has long had the ability to validate, mutate and even generate, as we saw in the first use case. But what we heard was there are still gaps that need to be addressed when it comes to a lot of these, especially platform and automation use cases, that something like being able to remove resources would nicely complement. 
So we came up with this ability for it to remove resources based upon a new cluster cleanup policy, or a cleanup policy. So here's the challenge in this use case that this solves. Very often when you're operating a cluster, we all run into problems during the course of operation. No matter how much you automate, no matter what you're doing in GitOps, there are always cases where a human needs to get involved and jump into a cluster and do some troubleshooting. Now, this could be doing things like ping checks, name resolution, curling to another pod just to make sure that either the network is good or you have services that are up and running. But as commonly happens, we tend to forget about some of these things. Once the job is done, we put down our tools and we go home. So bare pods are oftentimes used for this type of break-glass or troubleshooting scenario. And the term bare pod refers to a pod that's not owned by a higher-level controller like a deployment. A bare pod is oftentimes created imperatively using something like a kubectl create or kubectl run command. And once those pods have done their job and users and operators have exec'd into them or done whatever they need to do, they might still be running out there. And in cases where you might be running Kubernetes in a public cloud environment, this can incur additional costs, because when you multiply this by multiple teams and multiple namespaces, it's not uncommon to see many of these pods running out there, and that could become fairly cumbersome and introduce a lot of clutter. So what we could do is use Kyverno to help us solve this by scouring the cluster, finding these bare pods and, if they exist, deleting them. And it can do this on a scheduled basis rather than just running an imperative command one time. So that's what we'll show here. 
We've got a bunch of different namespaces with these bare pods, and we're going to create a new cluster cleanup policy, which will look across the cluster, find all of these bare pods across these namespaces, and remove them for us. So let's flip over and show this. So similar to what I talked about in the second demo, we need to grant Kyverno a little bit more privileges here. Specifically, we need the ability for it to remove pods, and it's necessary for us to list and delete these. And you'll notice again here, we're not having to modify the main cluster role. We have role aggregation that's enabled. You can create a simple cluster role like this, and as long as the labels are installed, it will get aggregated to the base cluster role and Kyverno will be able to do its job. So let's first give Kyverno these privileges. Done that. And now let's create some bare pods. So you've noticed here, I like to use busybox as a very simple container to illustrate a number of things. And we've got a number of busybox pods that are just gonna go into another sleep state across a bunch of different namespaces, including our platform namespace and our existing namespace. We're just gonna simulate some bare pods. Now, you'll notice here, these are just your standard bare pods. They're not owned by a deployment. And so let's go ahead and create these, and we have some bare pods that are now out there running. Now let's take a look at the new cluster cleanup policy that comes with Kyverno 1.9. So in this cluster cleanup policy, this is a new custom resource. You notice the previous ones were cluster policies. Well, we've introduced a cluster cleanup policy, which is similar to other Kyverno policies except it is specific only to the cleanup ability. 
And what we're doing here is matching on all pods. And if you're not familiar with Kyverno, you can see kind of a common theme here: the way that we declare policy is all standard YAML, using constructs and patterns with which you're likely already very familiar, on account of having to do these probably on a daily basis for more than just policy, for things like pods, certificates, even ordering whole clusters you can do with simple paradigms like this. So we wanna be able to use those same constructs in policy. And so we're just matching on pods, and also we're going to look into those pods. And now what we're doing here is we're looking at the owner references key, and the owner references object that's in a pod is where a pod would declare its ownership. So for example, if you had a deployment that was spawning pods, those pods would have an owner reference back to a replica set. And these could be owned by something else, or even multiple owners potentially, but we're specifically looking for pods, and target refers to all of the existing pods that are found out there, not new pods that are coming in. It's only looking at the existing ones. It's returning all the ones that have empty owner references, because those are the only ones that we're interested in. And then our schedule here, which is a common cron format: we're going to run this every minute. Now I'm only doing this for the purposes of this demo. This is probably not something that you wanna do in a live environment, but in the interest of time, I will go ahead and create this. And what we expect to have happen is, from the moment that I create this, it'll start its countdown. When the schedule elapses, Kyverno is going to kick in. It's going to look across the entire cluster for pods, it's going to crack each one open and evaluate this, and it's going to check whether it has an owner reference. 
If it does not have an owner reference, it's gonna gather them all together and then immediately delete them. So let's create this and see if it actually does that. So I have created our clean naked pods cluster cleanup policy. And so let's go and look at these bare pods. And I've assigned a label to them for easier tracking. So let's get pods across all namespaces that have the run busybox label associated with them, and we will watch them. And since these are all in a running state, these have done their job admirably. They're no longer needed, yet they're still sitting out there and running. So once the schedule has elapsed, Kyverno should be able to find just these pods, but not any other pods, because these are the only ones that are bare, and it should call out to the cleanup controller and have it remove them. So what we expect to see, which is what we're just now seeing, is these pods go into a terminating state because Kyverno has ordered their deletion. And once this termination finishes, these pods will be removed from the cluster. And if we check again, they're still in a terminating state, but in just a moment here, once those processes exit, these pods should be removed from the cluster. And so we'll trust that that'll happen here. And there you go. So the pods that are bare pods were just removed, but the rest of the pods that are owned by controllers are still present. So let's clean up here and let's flip back over for the last use case. So anyway, as you can see with this use case, this can be super useful for cluster admins and platform teams, because it can remove a lot of cruft from your cluster that you may not be interested in. And operators and financial teams typically like this because this oftentimes has a real cost savings associated with it. 
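The cleanup policy from this demo can be sketched roughly as follows. The policy name is illustrative, and note that in Kyverno 1.9 this resource lives in an alpha API group that was promoted in later releases:

```yaml
apiVersion: kyverno.io/v2alpha1
kind: ClusterCleanupPolicy
metadata:
  name: clean-bare-pods
spec:
  match:
    any:
      - resources:
          kinds:
            - Pod
  conditions:
    all:
      # Keep only pods with no ownerReferences, i.e. bare pods;
      # pods owned by a controller are left alone
      - key: "{{ target.metadata.ownerReferences[] || `[]` }}"
        operator: Equals
        value: []
  # Every minute: demo purposes only; pick a saner schedule in production
  schedule: "*/1 * * * *"
```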
Now, I only showed a very simplistic use case here with cleaning up pods, but you can imagine this could be any resource, and even multiple resources across your cluster, with some very complex conditions that you use in order to reduce the number of matches down to only the ones that you care about. So hopefully this was an enlightening use case on how you can maybe operationalize these types of cleanup policies. So let's look at the last use case here. The last platform use case here is scaling deployments to zero. Now this isn't really about things like event-driven autoscaling. This is more about, from an operator's perspective, you getting the automated help that you need to be able to know what to do next, or just to assist in some of your day-to-day jobs. So here's a common problem. You have a deployment that's managing pods, and something happens, and those pods continually restart. They go into CrashLoopBackOff. And as you probably know, Kubernetes is going to continue to try and restart those on a periodic basis. Very often though, depending on the problem, no amount of restarting is going to be useful. Of course that depends on the situation, but in some circumstances, they're just going to endlessly come in to be scheduled, go into a running state, and maybe immediately or at some point in the future crash. And then that cycle is just going to repeat ad nauseam. What we can do is use Kyverno to help us identify those. And if that situation is happening past a threshold which may be too much, we can have Kyverno scale that deployment down to zero and also tell us about it in some way, marking that we need to do some additional troubleshooting. But the reason why this is especially helpful is that it doesn't create that additional pod churn. And if this is happening across multiple deployments in a cluster, that pod churn could be fairly significant. 
So if this restart process is happening too much, Coverno can observe that, and that's exactly what we're going to show here. We've got a Deployment that is spawning one or more pods, something happens in a pod, there's a problem, and it's restarting. In this case, I'm just going to set the threshold to three. You can imagine this could be anything, but if the pod restarts more than three times, we know that something's gone wrong with it and further restarts aren't going to help. We need Coverno to help us out here so that we can take action at a later date. So Coverno is going to be deployed and it's going to observe that event happening. And when that happens, it is going to scale that Deployment down to zero and also annotate it so that we know from a platform perspective, and we might pick this up in our monitoring tools: hey, this is now at zero replicas because something's happened, let's take a look at it. So let's flip over and do that. Now again, we need to grant Coverno some of those additional privileges that I mentioned, because just like in the second use case, we need to update Deployments. Since I've removed those previous resources, I need to put that ClusterRole back in place. And again, same thing with the role aggregation; that's what we're doing here. So we've given it those permissions. And now I'm going to create a Deployment with our ye olde busybox. This time it's going to sleep for 10 seconds. After that sleep cycle is done, the main process will exit, which in turn causes Kubernetes to try and restart it. So it's just going to sit out there and run, and we're calling it distress-busybox. It's going to sleep, and then after 10 seconds, it's going to restart. And so here's our Coverno policy. Now this is a little bit more involved.
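The permissions grant and the test Deployment might look something like the following sketch. The role name and the aggregation label are assumptions, as the label that Coverno's aggregated ClusterRole selects on varies by release and install method, so check your own installation.

```yaml
# Illustrative sketch only. The role name is an assumption, and the
# aggregation label is hypothetical; check which labels your
# Coverno installation's aggregated ClusterRole selects on.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: coverno:scale-deployments
  labels:
    # Hypothetical aggregation label; adjust for your install.
    app: kyverno
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "update", "patch"]
---
# A Deployment whose pod process exits (and therefore restarts)
# every 10 seconds, simulating a crash-looping workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: distress-busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: distress-busybox
  template:
    metadata:
      labels:
        app: distress-busybox
    spec:
      containers:
      - name: busybox
        image: busybox:latest
        command: ["sh", "-c", "sleep 10"]
```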
So I won't walk through all of this, but this is just an illustration of some of the power that Coverno is capable of. And as you can see, there's no programming language involved here. If you're familiar with things like variables, and, as we showed with previous policies, some of the existing constructs and patterns you're likely using today, then you can probably parse this. There are just a couple of things to point out here. Coverno has the ability to look at subresources, so we are looking at the status subresource on a Pod, and we are checking for updates to it. And here's where we define our restart threshold: any restart count greater than two is going to trigger this policy. Now a Pod, in the case of a Deployment, lists its owner as a ReplicaSet, and we need to be able to identify the Deployment. So we're going to use what are called context variables to ask the API server to build that chain back up to the parent. We'll find the ReplicaSet name, and then we'll ask the API server: hey, give me the Deployment that corresponds to that ReplicaSet name. That gives us the Deployment, and with that stored in a variable, we will once again be able to mutate existing Deployments, not new Deployments, because we already have a Deployment that's out there and running. We will mutate any existing Deployment matched by this rule, set the replica count to zero, and write our sre.corp.org/troubleshooting-needed annotation set to true, which allows us to pick this up in a monitoring or reporting tool we might be using, to identify that it's not set to zero replicas because something like KEDA was in the cluster and scaled it to zero when there wasn't anything to do; it's because there is an actual problem and somebody really needs to look into this.
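For reference, the shape of such a policy might look like the sketch below. This is reconstructed from the narration above, not the literal policy from the demo; the rule name, JMESPath expressions, and match details are assumptions.

```yaml
# Illustrative sketch of a mutate-existing policy that scales a
# Deployment to zero after repeated pod restarts. Names and
# expressions are assumptions reconstructed from the narration.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: scale-deployment-zero
spec:
  background: false
  rules:
  - name: annotate-and-scale-deployment
    match:
      any:
      - resources:
          # Watch the Pod status subresource for updates.
          kinds:
          - Pod/status
    preconditions:
      all:
      - key: "{{ request.operation }}"
        operator: Equals
        value: UPDATE
      # Trigger once any container has restarted more than twice.
      - key: "{{ max(request.object.status.containerStatuses[].restartCount) }}"
        operator: GreaterThan
        value: 2
    context:
    # Walk the ownership chain: Pod -> ReplicaSet -> Deployment.
    - name: rsname
      variable:
        jmesPath: "request.object.metadata.ownerReferences[0].name"
    - name: deploymentname
      apiCall:
        urlPath: "/apis/apps/v1/namespaces/{{ request.namespace }}/replicasets/{{ rsname }}"
        jmesPath: "metadata.ownerReferences[0].name"
    mutate:
      # Mutate the existing parent Deployment, not the incoming Pod.
      targets:
      - apiVersion: apps/v1
        kind: Deployment
        name: "{{ deploymentname }}"
        namespace: "{{ request.namespace }}"
      patchStrategicMerge:
        metadata:
          annotations:
            sre.corp.org/troubleshooting-needed: "true"
        spec:
          replicas: 0
```

The mutate-existing capability is what requires the extra RBAC shown earlier, since Coverno must be allowed to update Deployments outside the admission request itself.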
So let's go ahead and create this ClusterPolicy. And now that that's in place, let's create the Deployment, which is going to sit out there, run a busybox container, sleep, and therefore restart every 10 seconds. So what we expect to have happen, and I'm just going to watch the Deployment here, is this Deployment's replica count getting set to zero. That will indicate that Coverno has observed the three pod restarts, backtracked to find the ReplicaSet that's defined as the pod's owner, looked up the Deployment that corresponds to that ReplicaSet, scaled the replicas by setting that field to zero, and also added the annotation. We've currently got one replica, so let's just watch for a moment more. As you can see, the oscillation between 1/1 and 0/1 represents those pod restarts each time our 10-second sleep period has ended; the kubelet attempts a restart, the container sleeps for another 10 seconds, and it just does this over and over again. And once the next restart occurs, Coverno is going to observe it, and as you can see that's just happened here, it has now set the number of replicas to zero. So there should not be any pods remaining from this, because it's been scaled to zero. Let's just check. It could still be in the process of terminating them now that it's scaled them to zero. Let's check one last time. Let's take a look at the distress-busybox Deployment and see that we actually got what we expected. And as we scroll up here, we can see we got a replica count of zero, and we got our annotation informing us that somebody needs to get in here and do some troubleshooting. All of this happened in an automated fashion that you define as policy, without any code involved, and Coverno was able to take care of it.
So with that, that is the end of the demo and the end of this recording. I hope this has been useful in showing how Coverno can help you in your platform engineering jobs, save you some trouble, maybe make your lives a little bit easier, and maybe even eliminate some tools. All of these capabilities can be combined in a lot of really interesting ways so that you can build even larger, more complex use cases out of them. Thanks for attending, and please hit me up if I can help you out. Thanks very much.