Hello, hello, everyone, and welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Dolezal, Head of Ecosystem at the CNCF, where I assist teams as they navigate their cloud native journey. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. In today's session, I'm stoked to introduce Charles-Edouard and Jim from Nirmata, who will be showcasing new features and capabilities in Kyverno. Openness is a great policy, but not always, so it should be fun to dive into this one. This is an official livestream of the CNCF and, as such, is subject to the CNCF Code of Conduct. Please don't add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful to all of your fellow participants and presenters. Just be excellent to one another. With that, I would love to hand it off to the Nirmata team to kick off today's presentation.

Thank you, Taylor, and thanks everybody for joining. This is Jim Bugwadia, co-founder and CEO at Nirmata. To start with, I want to do a quick background and introduction on Kyverno itself, and then we'll dive right into some of the new features. So let me go to the documentation site for Kyverno, and we'll look around in there in terms of getting started and how you can use Kyverno if you're not already deploying it in your clusters.

First off, Kyverno is a policy engine designed for Kubernetes. Why that matters is that with Kyverno, policies are Kubernetes resources, so they don't really require a new language to learn. Kyverno can be used to validate, mutate, and generate resources, and even, as Charles-Edouard will demonstrate with 1.9, to clean up resources and do some garbage collection on your clusters, as well as to verify software supply chain security. So there is a growing set of use cases, expanding with the feedback we're getting from the community, from users, et cetera.

So let's look at how Kyverno works and what exactly happens once Kyverno is installed in the cluster. Kyverno runs as an admission controller. It's also available as a command line tool that you can run in your CI/CD pipelines to verify policies outside of clusters, and it performs background scans as well.

Now, once Kyverno gets installed... and let me switch to the latest version of the docs; I was on the version 1.8 docs, so I went to main. We'll look at the architecture diagram, because it has changed with 1.9. Here, as you see, there are a few more components we're introducing. The evolution for Kyverno is that it will still be a single install, a single Helm chart, but the controllers that were embedded into a single binary are being split and decomposed into separate processes, which will eventually become separate deployments that you can manage and scale independently. In 1.9, there are two separate deployments, and moving forward we have plans to further decompose and bring these controllers outside of Kyverno itself. But once you install Kyverno, through a Helm chart or through the command line YAMLs, it registers with the Kubernetes API server and acts as a dynamic admission controller, which means it has the ability to receive any API request and to act on that request based on your configured policy sets.
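As an aside, on a 1.9 cluster you can see that split directly. A quick check might look like this (a sketch assuming the default Helm release name and namespace; your deployment names may vary):

```sh
# List the Kyverno deployments after a 1.9 install.
# With default chart settings you should see the main admission
# controller plus the new cleanup controller as separate deployments.
kubectl get deployments -n kyverno
```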
So Kyverno can, for example, validate the API request and block resources if they're not compliant, or it can audit and provide reports on that. It can also mutate and generate resources based on triggers that you configure. These could be events that are used to trigger policies, or they can be other admission requests, resource create or resource update, which can be used to trigger policies. Kyverno also has some background controllers, like the reporting controller, which is responsible for generating policy reports, as well as the background controller for update and mutate policies.

Just quickly, on the reporting section, one exciting thing to point out here is that the policy report that was created in Kyverno is now being proposed as a standard. There are several other adapters that the Policy Working Group has built on this policy report, and we're also in the process of proposing it as a standard Kubernetes API, so other tools can also leverage the policy report and produce results based on it.

So that's a quick overview of what Kyverno does and how it operates in the cluster. Let's take a look at some example policies before we dive into the 1.9 features themselves, right? I'm going to show a very simple example of a policy, and this is just to disallow the latest tag. Since the latest tag is mutable, it's not considered a best practice to allow it, because the actual underlying image may change. This policy, as you can see, is a few lines of YAML, which is the bulk of the policy body. There's some other metadata that we typically put with all of our sample policies, to identify the type of policy, a description, et cetera. But here, we're setting a validation failure action of enforce, and you can see it's doing a background scan.

By the way, one thing you might have just noticed is that because Kyverno is Kubernetes native, editor help, et cetera, is just built in. It works with Visual Studio Code if you're running the Kubernetes extension, right? So it's pretty cool to see help from the OpenAPI v3 schema just pop up, and it will tell you if there's a syntax error, et cetera. But here, we're running this in background mode as well as in admission control. It's enforce, which means a pod or container which is not compliant would be blocked. And then it's checking the tag, and as you can see, it's a very simple check which is using wildcards to make sure that either latest or an empty tag is not allowed on the image.
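For reference, the policy being described follows the well-known disallow-latest-tag sample; a minimal sketch looks roughly like this (details may differ from the exact version shown on screen):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: enforce   # block non-compliant resources at admission
  background: true                   # also report on existing resources
  rules:
    - name: require-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "An image tag is required."
        pattern:
          spec:
            containers:
              - image: "*:*"         # wildcard: some tag must be present
    - name: validate-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Using a mutable image tag such as 'latest' is not allowed."
        pattern:
          spec:
            containers:
              - image: "!*:latest"   # wildcard negation: tag must not be latest
```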
So that's the structure of a Kyverno policy, and we'll see a lot of these in action; the basic structure is very similar across them. And then there are of course policies to mutate, to do things like image verification, and to cover some of the other new features that we'll talk about.

Going back to my browser, what I want to focus on is the Kyverno 1.9 release, and we will demonstrate some of the key features from it. Kyverno typically puts out minor releases every two to three months. As you can see, we've already made progress on 1.10, and there are some interesting features scheduled for that. But for today's topic, we're going to cover the new 1.9 features. There's a release candidate for 1.9: we're on RC4, which just got published earlier today. So if you want to try out these features, you can install that. Make sure you use the --devel flag on the helm install command if you're installing through Helm, and you can try out these features.
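A sketch of that install, assuming the standard Kyverno chart repository (the pre-release version Helm selects may differ by the time you run this):

```sh
# Add the Kyverno Helm repository (skip if already added).
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update

# --devel allows Helm to select pre-release versions such as a 1.9 RC.
helm install kyverno kyverno/kyverno \
  --namespace kyverno --create-namespace \
  --devel
```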
So Charles-Edouard is going to talk about the cleanup controller, which is one of the major features. We'll talk about distributed tracing with OpenTelemetry; there's some pretty cool stuff you can do in terms of understanding how policies are working in your cluster. And then I will cover policy exceptions. These are the three major features we're going to demo. There are a lot of other minor changes, as well as hundreds of bug fixes and enhancements, which have gone into the release, but this is what we were planning to cover today in terms of features. So with that, let me hand off to Charles-Edouard, who's going to talk about the cleanup controller as well as distributed tracing, and he will introduce and demonstrate these features.

Thanks, Jim. Hi, everyone. I'm going to start with the distributed tracing feature, because it doesn't apply only to the Kyverno admission controller; it applies to the cleanup controller too. So it makes sense to start with the tracing feature, and we will discuss the cleanup controller after that. We will also see what the cleanup controller actually does by looking at the traces, because tracing is embedded in the cleanup controller as well.

So let me share my screen with the documentation for the main branch again, because it's not yet published; it's coming. As Jim said, we released the new RC today, so the documentation will be available in an hour or so, probably just after the livestream. One of the features that we added in Kyverno 1.9 is the capability of tracing, inspired by distributed tracing. Admittedly, in Kyverno today it's not very distributed, because Kyverno is a monolithic application right now, but things are changing in 1.10, and they will probably change even more in the next versions. We are introducing new controllers, and potentially we are also supporting HTTP calls directly from the Kyverno engine to other services running in the cluster. So progressively, traces will become more and more distributed by nature, even if today they're not that distributed.

Anyway, tracing is still quite useful. You can see below a trace for an admission review request, and we will see that every step the admission request goes through, entering the admission controller, then entering the engine, then processing every policy and every rule per policy, will be detailed and measured in the traces. Behind the scenes, we are using OpenTelemetry to create the traces; the client is an OpenTelemetry client. We also instrumented all HTTP clients, so HTTP clients, and therefore Kubernetes clients, will create spans in the traces. What that means is that it's compatible with most of the tracing backends. We have added a tutorial in the documentation to set it up with Grafana Tempo, which is a tracing backend developed by Grafana, and another tutorial to work with Jaeger, which is another backend. Today I will demonstrate the Grafana Tempo backend, because it's the easier one to set up.

For that, I created a simple cluster with just the make commands we have in the Kyverno repository. I deployed what we call the dev lab, which is just an instance of Grafana, Prometheus, Loki, the various tools we use to observe Kyverno deployments. I finally deployed Kyverno and a couple of policies in the cluster, and I also created two namespaces, one for tracing and one for cleanup. Right now we will be using this one, the tracing namespace. So finally, I just have Grafana running, with a Tempo data source. This Tempo data source will provide traces, and when clicking on a trace, we will be able to observe the details.

But let's try to create a couple of resources first. For the tracing demo, I created two resources: one I called good pod and one I called bad pod. The good pod is just an nginx pod with nothing special in it, so it should be accepted to run in the cluster. The bad pod is the same pod but with hostNetwork set to true, and I installed a policy that will prevent that. If I just list the policies installed, you can see I installed a number of policies, and the disallow-host-namespaces policy, for example, should forbid the creation of pods that run in the host network. So if we just create the good pod... where is my script... the good pod is actually created correctly in the cluster. If I go and get pods in the tracing namespace, it's there and it's creating. And if I try to do the same thing with the bad pod, the bad pod is rejected. The disallow-host-namespaces policy refused the creation of the pod, because sharing the host namespaces is disallowed, et cetera, which is exactly what the policy is supposed to do: reject pods that use the host network. And we still have only the good pod running.
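For reference, the two test pods might look roughly like this (a sketch; the exact manifests from the demo aren't shown on screen in full, and the image tag here is an assumption):

```yaml
# good-pod: a plain nginx pod, expected to be admitted
apiVersion: v1
kind: Pod
metadata:
  name: good-pod
  namespace: tracing
spec:
  containers:
    - name: nginx
      image: nginx:1.23
---
# bad-pod: identical except for hostNetwork, expected to be blocked
# by the disallow-host-namespaces policy
apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
  namespace: tracing
spec:
  hostNetwork: true
  containers:
    - name: nginx
      image: nginx:1.23
```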
Now, the interesting thing is to find out what happened in the Kyverno engine, in the Kyverno controller. For that, we can use a couple of tags that exist on the different spans of our traces. If I search for the tag admission.request.name equals good-pod, I see one trace. This is the trace that was generated at admission time, when the pod was created. We see that we have first an HTTP request, then a middleware that produces metrics, then another middleware that filters incoming requests, and finally the last middleware, which is responsible for validating the admission request itself.

We have a number of tags available to search for traces. For example, the admission.request.name was good-pod. We have the namespace; we have the operation, so in this case it was a creation; we have a couple of pieces of information related to the kind of the resource being admitted, so in this case it was a Pod belonging to the API group version v1, et cetera. We have information about the user that issued the call; in this case it's me, I'm the Kubernetes admin, and I'm part of those groups.

And below that... sorry, this trace is about the validation. We have different spans, and all those spans cover the different policies installed in the cluster. When I listed the cluster policies installed in the cluster, we saw that we have approximately 11 policies, and we're going to find those policies here in the list. Below each policy, we have the different rules. For example, the disallow-host-ports policy has three rules: one called host-ports-none, another called autogen-host-ports-none, and a third, autogen-cronjob-host-ports-none. In our case, we can see that the first rule took 1.74 milliseconds, the second one took 1.78 milliseconds, and the third one took almost zero milliseconds.

And we have those details for every policy in the list, every policy and every rule. Of course, most of the policies here are about pods, so most of them were applied to the pods themselves. We can do the same thing with the bad pod: admission.request.name equals bad-pod. This time, we find the one admission request for the creation of the bad pod, and if we go and look at the attributes, we can see that, for example, admission.response.allowed was false. This is consistent with what we had here when we tried to create the pod and the creation was rejected, with the different information here. We have all the same information, okay? The result message is truncated, because Grafana Tempo has a limitation on the size of tag values; we can't just store the full message, so we truncate it at 256 characters, but you will find all the relevant information here. In the same way, since we have admission.response.allowed as a tag, if we search for admission requests where allowed was false, we can easily filter and find the rejected admission requests.

Okay, for the next demo, we can do the same. So far it was just validation policies; we can do the same with an image verification policy. I just installed a policy that is going to verify the signature of an image. The image used in this case comes from GHCR, in the Kyverno org, named test-verify-image. This is an image we created ourselves to test that the feature is working correctly.
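A sketch of what such an image verification policy looks like, assuming the Cosign key-based verification style from the Kyverno docs (the public key is elided, and the demo's exact policy may differ):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-signature
spec:
  validationFailureAction: enforce
  webhookTimeoutSeconds: 30      # signature checks call out to the registry
  rules:
    - name: verify-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/kyverno/test-verify-image:*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```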
For this image, we have different flavors. We have one which is tagged signed-by-someone-else, and if the image is signed by someone else, we expect the creation of the pod to fail. In this case, if I try to kubectl run the image, the verification fails, because it says it didn't find a matching signature. On the other hand, we have another image, signed with the correct key, under the signed tag, and this one is expected to run. Applying this one produces a pod that is accepted.

And if we go back to Tempo, we should be able to see both requests, searching on the admission.request.name tag again. Okay, and we have both calls. The first one is the admission request going through mutation. This time, the first entry point is the HTTP request, and we have the middleware creating metrics, another middleware for filtering, and finally we are in the mutation part of the Kyverno engine. The mutation part looks up the verify-image policy, and in our case this verify-image rule calls the verify image signature function in Cosign. Cosign, in turn, calls out to GHCR. We can see that it calls ghcr.io/v2; that's probably to get the token. Then it goes to fetch the manifest. So here we have the signed-by-someone-else tag we were using. Then this one is probably to get another token. I don't know exactly what happens inside the Cosign code, but it lets us dig in and look at the different HTTP calls that are performed. And finally, more calls are done to fetch some manifests and to actually verify the different signatures. So in this case, it was the first test, the one using signed-by-someone-else, and admission.response.allowed was false again, because the admission request was rejected.

And finally, if we look at the other request, this one is about the second test, where the image was signed with the correct certificate. This one was accepted. We can see that the response in this case is true, so the mutating webhook allowed the admission request. And after that, we have the validating webhook for the same pod, and this one was also allowed. So with the name tag in the request, for example, we are able to filter and search for every webhook that was applied to this particular admission request. In the second example, we have both: one mutating webhook call and the validating webhook call too.

Okay. Basically, this will allow users to get better information about what happens in Kyverno at admission time. I know a couple of users are trying to understand better why a policy can introduce latency, or to understand better what happens behind the scenes, and this is a good tool to know what was actually done by Kyverno. We can clearly follow the links between one admission request and the calls to GHCR, because we were using image verification and Cosign has to call GHCR, and so on. It's not visible here, because we don't have a policy that calls the API server, but if a policy has a context variable that is created from a call to the API server, we will see the call to the API server in the graph. So we will be able to say, okay, this policy called the API server to list ingresses, and things like this.

Yes, lots of good details there. Maybe, Charles-Edouard, from a time-check perspective, let's spend about 10 minutes on the cleanup policies and then we'll do the exception demo.

Okay, let's do that.

And I did see a couple of comments in the chat in terms of making the screen a little bit bigger, if at all possible; I wanted to call that out for some of the folks viewing today.

The screen, you mean the Grafana dashboard?

Oh, correct, correct.

Okay, I can try that.

Awesome, thank you, thank you.

Okay, so let's talk about the new policy type that is introduced in 1.9. We now have the possibility to clean up resources in a cluster. This is not, strictly speaking, validation or mutation of resources; it's more related to automation, automated tasks that delete things, for example. This can now be expressed in the form of policies. Those policies exist at the cluster level or at the namespace level, so you have ClusterCleanupPolicy and CleanupPolicy, which is the namespaced version of the cleanup policy.

What defines a cleanup policy? It's very similar to what we have in a standard policy. We have match and exclude clauses that specify which resources are targeted by the policy, and we also support conditions. Conditions are evaluated on a per-resource basis: this cleanup policy, for example, targets deployments, and we can say we don't want to delete deployments when replicas is below two. This is very similar to the policies we are used to writing. And finally, those policies also have a schedule, because they run continuously in the cluster, based on the schedule defined here. For this demo, I created a simple policy which is very similar to what we have here.
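A sketch of a cleanup policy along those lines (field names follow the 1.9 v2alpha1 API; the label key and values here are assumptions for the demo):

```yaml
apiVersion: kyverno.io/v2alpha1
kind: ClusterCleanupPolicy
metadata:
  name: cleanup-demo-deployments
spec:
  schedule: "*/1 * * * *"          # run every minute (cron syntax)
  match:
    any:
      - resources:
          kinds:
            - Deployment
          selector:
            matchLabels:
              canremove: "true"    # only consider opted-in deployments
  conditions:
    any:
      - key: "{{ target.spec.replicas }}"
        operator: LessThan         # the demo flips this to GreaterThan
        value: 2
```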
So let's create this policy. On top of that, I created two deployments. One of the deployments has the can-remove label set to no; the other one has can-remove set to true. The first one should not be considered by the policy we have here, because this policy specifies that we are only considering deployments that have the can-remove true label; the second one should be considered by the cleanup policy. So let's apply all of this. Now we have the deployments, and we also have one cleanup policy running. Of course, for now it's not going to do anything very useful: it's running, but the policy we have defined will not delete any of the deployments, because they have a number of replicas that is above the two here.

So if we switch the condition to use the greater-than operator rather than the less-than operator, this policy should now start deleting deployments. It runs every minute, so we have to let one minute pass, and after that it should delete at least the yes deployment, because the yes deployment has the can-remove label set to true, and it also has a number of replicas above two. And this worked; the condition here was honored. As spec.replicas was above two, the deployment was deleted. In the same spirit, if we just remove the selector... it recreated the yes deployment, but this one should go away when the policy runs a second time, and that should happen in one minute.

Yeah, so some of the use cases for this are to configure time-based leases, or some garbage collection for resources in the cluster, things like that, right? So some interesting possibilities.

Definitely. So both deployments were deleted, because we removed most of the constraints in the policy. Of course, we probably don't want to do that in a real cluster, but at least for demonstration purposes, it shows how things work. And yeah, this was a very simple example based on the number of replicas, but it could also use the age of the resource. Let's say I don't want to delete resources that are younger than one day or a few hours, but if the resource is older than one month, okay, I want to delete it; it's completely possible to have such conditions. You have all the necessary functions in JMESPath to do that, so you can get the creation timestamp, compare it to the current timestamp, and say, for example, if the resource is older than three days, okay, I accept to delete it. And I can combine that with other conditions, and the schedule is of course a nice solution to implement time-based conditions.
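A sketch of such an age-based condition, assuming Kyverno's time_since JMESPath function (check the JMESPath documentation for the exact syntax and quoting):

```yaml
conditions:
  all:
    # time_since('', <timestamp>, '') returns the duration from the
    # resource's creation time to now; delete only if older than 3 days.
    - key: "{{ time_since('', '{{ target.metadata.creationTimestamp }}', '') }}"
      operator: GreaterThan
      value: "72h"
```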
Cool, thank you. Okay, I hope it's clear. I think there are a couple of questions, Taylor. Should we answer them now?

Yeah, I've got one from a little bit ago; I saw this come in the chat. Hello, what is the status of potentially making this a standard tool within Kubernetes? How long would it take, and how would it be better than some existing policy engines, like Rego and those kinds of solutions?

Good set of questions. So certainly whatever can be standardized through the Policy Working Group and other Kubernetes SIGs and working groups, we are proposing for standardization, like the policy report itself. Kyverno, of course, is an add-on in Kubernetes clusters, so it's not likely that the entirety of the policy engine would be standardized; the idea is to allow flexibility for admission controllers there. In terms of how it's better, I think it seems like Adam answered the question himself. Of course, Rego has some complexity and there's a learning curve to it. Kyverno, because it's focused on Kubernetes, offers a much simpler experience and also a wider set of use cases, like the cleanup policies Charles-Edouard showed. We also have policies to generate resources, there are very powerful mutation capabilities, as well as other sorts of things we're looking at in terms of extensions. So yeah, we're always looking at expanding the use cases, integrating as natively as possible, and staying focused on providing the best experience possible for policy management on Kubernetes.

Awesome, awesome, thank you, Jim. I saw another set of two questions that came in the chat, and then I've got a couple as well after that. If you have any questions and you're watching right now, please throw those in the chat and we can get some of those answered. The first of the two questions was: are you planning to support app signatures? For example, port 5432 should accept only Postgres signatures, those kinds of use cases.

So I'm not sure if I fully follow the question there, but if that can be configured, whether through Kubernetes network policies or higher-level network policies like Cilium's, et cetera, then yes, Kyverno can verify those configurations, whether they're custom resources or native resources. Kyverno does not intercept network traffic or do anything at the layer four or layer seven request level; it deals with admission controls and configurations. But if you can express this as a network policy, then Kyverno can validate and enforce that only configurations like these are allowed.

Gotcha, awesome, thank you. The next question I had, which I didn't follow but might make more sense to you: when will this release be adopted by ACM 2.7 or 2.8, et cetera?

I believe the ACM reference here is Red Hat Advanced Cluster Management. So yes, ACM supports Kyverno. I'm not sure of the schedule for picking up 1.9, but it's usually fairly quick once it's available. By the way, both the enterprise distribution of Kyverno from Nirmata and the open source distribution are now available in the Red Hat OpenShift marketplace and on OperatorHub as well.

Awesome, awesome. Moving on to some of the questions I have for you: regarding tracing, what are some of the supported backends right now?

Regarding tracing, as I said earlier, we are using OpenTelemetry behind the scenes, so any backend supporting the OpenTelemetry protocol is supported: that's Grafana Tempo, Jaeger, Datadog, and probably others. In case a backend doesn't support the OpenTelemetry protocol directly, there's always the possibility of deploying the OpenTelemetry Collector, which will receive the traces in the OpenTelemetry protocol and is capable of transforming them and forwarding them in another format. So it can do the conversion on the fly.

Awesome, awesome. And then I had two other quick questions, and then I can shift back to Jim. Both of these regard tracing as well. What are some of the supported sampling strategies for tracing?

Yeah, currently it's either tracing or not: we sample 100% of the traces. This can have an impact, especially on cost, because sending all traces to the backend can be costly.
So in this case, again, using the OpenTelemetry Collector can be a good option, because you can have a tail-based sampling strategy and say, okay, I'm going to sample traces only if they have errors, or things like this. That kind of strategy cannot be done with a head-based strategy. In any case, we don't have a tutorial for that yet, but we plan on adding a more advanced tutorial around tracing, covering a more advanced scenario where the OpenTelemetry Collector is used to get a better sampling strategy.

Awesome, awesome, thank you, thank you. That's really illuminating, and really awesome to hear that you can capture 100% but also to include that cost consideration as well. Everything is trade-offs; always a fun problem to solve. In terms of tracing, does it add latency? And if so, can you give us a general sense of what that looks like?

Not really. Of course, it takes a small amount of time to create the trace itself, but transmitting the traces happens in the background, so it's very lightweight in the end.

Awesome, awesome. I remember reading a humorous post about TLS, where somebody said "we don't use TLS, it takes too much time," just as a comedic, funny thing, because it added more time but the end result is so much better. But again, trade-offs.

Enabling or not enabling tracing is not going to change latency; it will be the same. It's just that in one case, the traces won't be transmitted. But for tracing, you need to instrument the code, so it's not magic. There is some instrumentation going on, and this instrumentation is very lightweight by design: it's just a couple of function calls, and it costs almost nothing.

Gotcha, gotcha. Thank you so much, Charles-Edouard, really appreciate it. With that, Jim, I'd love to turn it over to you for the final demo today.

Awesome. Yeah, so the last feature we want to showcase is the policy exceptions feature, which is new in 1.9. And this, by the way, was done by Eileen Yu, who was one of our LFX (Linux Foundation) mentees for the last term. So I'm very excited to be able to demonstrate this, and thank you, Eileen, for all the great work here.

What this feature does is decouple the life cycle of managing exceptions, meaning how you can exclude certain resources from policies, from the policy definition itself. A typical Kyverno policy has match and exclude blocks, and you can exclude based on many factors, including names, namespaces, and labels. But now, with 1.9, you can pull that exception out into its own new custom resource, named PolicyException. It's a namespaced resource, so you can put it anywhere in your cluster. And as you can see over here, within the policy exception you can say which policy, and which rules, should be excluded, and then you can do a match on any resource, right? Again, this match has a lot of flexibility; for the demo, I've just matched on the test namespace. So that's how simple it is now to configure exceptions.
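A sketch of the PolicyException resource (1.9 introduces this under the v2alpha1 API; the names and the matched namespace here are assumptions for the demo):

```yaml
apiVersion: kyverno.io/v2alpha1
kind: PolicyException
metadata:
  name: test-exception
  namespace: policy-exceptions
spec:
  exceptions:
    # which policy, and which of its rules, to exempt
    - policyName: disallow-host-namespaces
      ruleNames:
        - host-namespaces
        - autogen-host-namespaces
  match:
    any:
      - resources:
          namespaces:
            - test        # exempt everything in the test namespace
```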
And of course, you can manage exceptions through RBAC; you can manage exceptions with Kyverno policies, because Kyverno policies operate on any custom resource; and Kyverno itself has a few knobs to make sure that you don't misconfigure these policy exceptions, right? So the first thing I'll show: if I just go to the Kyverno deployment, let's just edit the deployment, and I want to show that within this, there are a few new flags you have to think about. First of all, it's an opt-in feature: because it's a new feature, you have to enable policy exceptions; they're not enabled by default. And secondly, you can optionally configure a namespace for policy exceptions. By default this is the kyverno namespace, but you can put any namespace that you wish, and then secure that namespace, again through RBAC and other mechanisms, to manage your policy exceptions. So that's how I have this deployment configured.
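Roughly, the relevant container flags look like this (a sketch; flag names follow the 1.9 docs, and the namespace is this demo's choice, so verify against your Kyverno version):

```yaml
# excerpt from the kyverno admission controller deployment spec
containers:
  - name: kyverno
    args:
      - --enablePolicyException=true           # opt in to the feature
      - --exceptionNamespace=policy-exceptions # only honor exceptions from here
```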
Right now in my cluster, I have a few pod security policies. If I just do a get cpol, I see I have a bunch of pod security policies configured. So if I try to run an nginx pod in this namespace, test, it should deny that, because of my pod security policies. Now let's say, for some reason, I want to create an exception on my cluster. So I'm going to do a kubectl apply to create this exception, and I'm going to try to create it in the kyverno namespace first, which should actually be a no-op, right? Well, it's allowed, but it gives me a warning saying, hey, this doesn't match the defined namespace for policy exceptions. So that's a good safety check: you want to make sure you configure it in the right namespace. So let's delete that, and we will create this exception again, now in the policy-exceptions namespace, because that's where we configured Kyverno to look for and pick up these exceptions. So let's do that. Now there's no warning, which is good. And if we go ahead and run that same pod again in that namespace, what I'm expecting now is for that pod to be allowed with no errors, right? Because in my policy exception, I requested that all of these rules, which were previously failing, are not checked for this particular namespace. Now, I could make this more granular, by labels, by pod name, by other mechanisms, but in this case I just chose to exclude this particular namespace. So that's the basics of how policy exceptions work.

And as I mentioned, you can secure this further using Kyverno itself. One other thing I want to quickly demo: Charles-Edouard showed how to do image signing and verification, but let's say I want to require that policy exceptions have to be signed, for approvals, right? So I'm going to apply this policy, called require-signed-exceptions, and I'll show you what it looks like. That policy is checking and making sure that every exception I configure is actually signed by a particular key. And of course, you can associate these keys with identities and things like that for an approval workflow, right? So at this point, let's say I'm going to delete my policy exception and then try to recreate the same exception, and let's see if that's allowed. So let's delete this exception in the exceptions namespace, and then we'll apply that same exception back, with this new policy in place. What I'm expecting is that it's not going to allow the unsigned exception to be configured, right? And there you go: it's telling me that, hey, this exception requires a signature, so you can't do that. But I do have a signed exception, of course, for the demo. So if I now go ahead and, instead of the unsigned exception, apply the signed exception YAML, this one should be allowed, right? And if I go ahead and configure this, it allows that to be created, and my exception is created.

One of the cool things is that if I now try to edit this policy exception, because it's signed, it will not allow any tampering with that policy exception itself, right? So let me clear my screen and go back to the top. If I do a kubectl edit, and here I'm editing the policy exception in that namespace, which should be my signed exception, you see some of the signatures up at the top. But let's say that now, for some reason, instead of the namespace test, I put something else, and I'm trying to tamper with this policy exception inside my cluster. What happens right away, as soon as I try to save it, is that Kyverno checks and says, hey, because you made this change, the signature of the signed manifest does not match the signature I'm expecting, and it rejects that change and does not allow that policy exception to be configured, right?

So there are a lot of interesting possibilities now. By decoupling policy exception management from the life cycle of the policies, you can store exceptions in a different Git repo, you can build your approval workflows with GitOps, you can sign your YAMLs and make sure that Kyverno only allows trusted policy exceptions to be configured. And of course, you can also use RBAC, as well as additional Kyverno policies, for how you want to do governance and compliance on these policy exceptions. This is a first release of this feature, and we're very interested in further feedback and further refinements. So please do try it out, let us know what you think and how we can continue to improve and enhance it. But it should immediately start solving a number of use cases that were previously raised as challenges with this kind of exception management for policies.
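For reference, a policy like require-signed-exceptions plausibly builds on Kyverno's manifest validation (validate.manifests) feature; a sketch under that assumption, with the verification key elided, might look like this:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-exceptions
spec:
  validationFailureAction: enforce
  rules:
    - name: check-exception-signature
      match:
        any:
          - resources:
              kinds:
                - PolicyException
      validate:
        message: "Policy exceptions must be signed."
        manifests:
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```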
All right, so the last thing I want to talk about is a little bit about what's coming next in Kyverno, and I'll also give a few hints on how you can join the community and provide feedback. Kyverno 1.10 is our next release, and we have a few additional major features. In fact, the bulk of this release is going to be internal decomposition and re-architecting of Kyverno for more scalability, especially around the background controllers, because with cleanup, and with mutate and generate on existing resources, there's a lot of background activity that we want to decouple from the webhooks, so you'll be able to scale those independently. Other key features: inter-service API calls, so Kyverno can delegate some processing to another service in your cluster and can also look up data for policy decisions from other services. That brings a lot of flexibility, and I'm pretty excited about that feature. And then Notary v2 support. As a lot of you may know, in software supply chain security, Notary v2 is another emerging standard. Kyverno 1.10 will support Notary v2, as well as being able to run notation-based plugins through the external service API calls feature. So that's what we have planned so far, and there will of course be a lot of other fixes and enhancements as the release progresses. We're targeting about a two-to-three-month timeframe for this release.

Lastly, I just want to quickly mention: if you go to the Kyverno docs, check out the community section. We're very active on our Slack channel, we have weekly contributor meetings, and we're going to be kicking off either a set of office hours or some end-user meetings. So please do pop in, give us feedback on these features, or if there's anything else you need from Kyverno, feel free to reach out. Also, for folks looking at contributing, we have a lot of documentation: you can go to the Kyverno repo and look at the development markdown file there to get started. And we will be continuing to enhance that experience to bring new contributors, new folks, into the project. So with that, let me hand back to Taylor and see if there are any final questions before we wrap up.

I took a look through the chat and didn't see anything that was super urgent, and I think we're just about at time, but I really want to thank both of you for coming on today and chatting with us about Kyverno. Really excited for everything in 1.9 and what's coming in 1.10. So thank you again.

Thanks for having us. Take care.

Awesome, awesome. Thanks, everyone, for joining us for the latest episode of Cloud Native Live. We really enjoyed the interaction and questions from the audience. Thanks for joining us, and we hope to see you again soon. Thanks, everybody. Have a good one.