Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie Talvasto, a CNCF ambassador as well as a Senior Product Marketing Manager at Camunda, and I will be your host tonight. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your burning questions. You can join us every Wednesday to watch live. This week, we have Kyverno to talk about what's new with Kyverno, which is really exciting. And as always, a bit of housekeeping to kick us off. This is an official live stream of CNCF, and as such, it is subject to the CNCF Code of Conduct, so please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of all of your fellow participants as well as presenters and speakers. With that, I'll hand it over to the speakers to kick off today's presentation.

Okay, thanks, Annie. Hello, everyone. Let me quickly share my screen, and then I'll start the presentation. Okay, let me know if you cannot see my screen or the slides. All right, I guess that's a yes. Hello, everyone. Welcome to this CNCF webinar for Kyverno, where we're going to talk about upcoming features in the Kyverno 1.6.0 release. First, a little background about the speakers: today we have Chip Zoller, and I'm Shuting Zhao. We're both Kyverno maintainers. Chip focuses on contributing to all the different phases of Kyverno except development, and I'm currently leading the Kyverno project on the development side. All right, so this is an overview of the topics for today. We're going to have a quick introduction to Kyverno's architecture and the structure of Kyverno policies.
And then we'll jump directly into the new features in Kyverno, and we have a whole bunch of different demos today. We'll show you how the new features work in a Kubernetes cluster. We'll take questions at the end, but if any questions pop up in between, feel free to post them and we'll pause to address them. All right, so for the Kyverno high-level architecture: as you know, Kyverno is a Kubernetes policy engine, native to Kubernetes. It runs as an admission controller, which registers mutating and validating webhook configurations in order to mutate resources and validate configurations based on the admission review data. Based on the policy decision, Kyverno can either reject your resource creation or operation, or simply let it pass and audit the results to the policy reports. Besides that, in the mutating webhook, Kyverno also verifies image signatures as well as attestations, and you can look up data from an external registry and further utilize that in your policy. In addition, Kyverno runs a generate controller, which is triggered by the admission webhook and creates an intermediate resource, a CR called a generate request. The generate controller picks up the generate request later and applies the generate policies on the trigger resource. Kyverno also produces policy reports, which audit the policy results in a policy report CR, so you get an aggregated view of the policy decisions. On top of that, Kyverno runs a monitor in the webhook server, which keeps monitoring the webhook status as well as managing the Kyverno secrets and config maps. So that's the high-level architecture of the Kyverno admission controller. Kyverno also provides a CLI tool that allows you to apply policies without a cluster.
You can also use the CLI to automate that in your CI/CD pipeline and test or validate configurations. Okay. So this is the structure of a Kyverno policy. A policy here is just a Kubernetes CRD, and you can create multiple cluster policies or policies as Kubernetes CRs. In each policy, you can define one or multiple rules, and each rule has a match and an exclude block to select resources based on different information. Then in the rule body, you specify one of the core rule types: mutate, validate, generate, or verify images, which covers image signatures and attestations. Okay. So that's really just a quick intro to Kyverno. There are a lot of good resources out there for introductory material and how you can use Kyverno to apply basic mutate and validate policies. Today we're mostly going to focus on the new features in 1.6.0, since this is the first big release after KubeCon last October. This release includes a lot of new features and critical enhancements, and we're going to highlight some of them in today's session. Okay. So the first thing I want to talk about is the image verification policy. With Kyverno 1.5, we had this type of rule to verify image signatures that are signed with or without keys. You can use a tool called Cosign to sign the image, and it has a keyless signing ability, so in the Kyverno policy you just need to provide the root CA so that Kyverno can verify your image signatures using the provided certificate. What's new in 1.6 is using a Kyverno policy to verify image attestations. There was also a recent new feature added to look up image data from an external registry. And background scanning is now enabled for this type of rule, so you can get a policy report based on the configured image verification policy. Right?
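To make the match block and rule-type structure just described concrete, here is a minimal sketch of a Kyverno ClusterPolicy. The "require a `team` label" use case, names, and values are illustrative, not from the talk:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label   # hypothetical policy name
spec:
  validationFailureAction: audit   # report violations instead of blocking
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod            # the match block selects resources
      validate:
        message: "The label `team` is required."
        pattern:
          metadata:
            labels:
              team: "?*"     # wildcard: at least one character
```

Each rule carries exactly one rule type (here `validate`); mutate, generate, and verifyImages rules follow the same match/rule-body shape.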
So before we jump to the demo, I want to give you a little bit of background about attestations. We use this tool called in-toto. It's a Linux Foundation project which defines a standard attestation format. As you can see here, in the attestation statement you can either customize the payload or use one of the predefined payload types, and later use that in a Kyverno policy to validate configurations, right? So an attestation is just something you can sign, attach to the image, and store in the registry. If you have such a policy configured, Kyverno will by default look it up from that registry and get all the attestation statements. Okay, that's a little bit of background about attestations. Let's switch to the policies I have for today. We have a few demos for this verify-images type of rule. The first one here is a Kyverno validate policy. As you can see, I set the validation failure action to enforce, which means I want to reject the resource creation or operation if there are any violations on my incoming resource, right? Then in the rule body, I specify this verify-images rule, matching images coming from my own registry, and it provides the public key to verify the image signatures. With these attestation attributes, I'm saying: okay, I want to verify that my image is coming from the right branch and that the reviewer of the image is in this list of approved reviewers. Before we do that, I want to show you the test image I've just built. As you can see, it's a simple pause container, which is used in Kubernetes. It already has a signature, because I signed the image with the private key and pushed it upstream. After that, I just want to check the attestations on the image. So far, it doesn't have any.
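The verify-images rule being described might look roughly like the sketch below. The registry path, key material, predicate type, and condition fields are assumptions standing in for the demo's values:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-attestations   # hypothetical name
spec:
  validationFailureAction: enforce   # reject violating resources
  rules:
    - name: verify-attestations
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - image: "registry.example.com/*"   # placeholder registry
          key: |-
            -----BEGIN PUBLIC KEY-----
            ...
            -----END PUBLIC KEY-----
          attestations:
            # predicateType identifies the in-toto predicate to check
            - predicateType: https://example.com/CodeReview/v1
              conditions:
                - all:
                    - key: "{{ repo.branch }}"   # field from the predicate
                      operator: Equals
                      value: "main"
```

The `conditions` are evaluated against the signed attestation payload, so a pod is admitted only if the image's attestation matches the expected branch (and, in the demo, the approved reviewer list).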
So let's see what happens if I install the policy and create a resource, a pod, using the image that doesn't have any attestations. Okay, so I'm running a Kubernetes 1.23 cluster, and I currently have Kyverno installed; this is the latest 1.6 release candidate. Let me just check the tag on that. Right, it's running Kyverno 1.6.0-rc1. And so far I don't think I have any policies installed, right? `cpol` here is just the short name for cluster policy; you can do either `kubectl get clusterpolicy` or `kubectl get cpol`, and it'll return the existing cluster policies. All right, so let's create the policy first with `kubectl apply`. I want to verify the policy has been installed and becomes ready, right? So this indicates the policy is ready to serve admission requests. Okay, now we have the policy in place. Let's see what happens if I run a standalone pod using my pause container image, which does not have any attestation information, right? So let's create that pod. Ideally I should see Kyverno block that resource creation, because the attestation doesn't match, right? And indeed, this rule is blocking the resource creation. Now, remember this Cosign CLI tool? It also provides an `attest` command for you to sign an attestation, attach it to the container image, and push it upstream, right? So let's run that `attest` command. Before I do that, I want to show the attestation I'm adding to my image; it says, okay, this image was built from the main branch, and this is the reviewer who reviewed it, right? Just simple information. Then, if I run `cosign attest` with the key, pass the review JSON, which is the attestation, with this flag, and point it at the container image, it'll sign the attestation and attach it to the container image, right?
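The Cosign workflow narrated here can be sketched as below. The registry path, key file names, and predicate file name are assumptions, not the demo's actual values:

```shell
# 1. Sign the image with a private key (prompts for the key password):
cosign sign --key cosign.key registry.example.com/pause:latest

# 2. Attach an in-toto attestation; review.json holds the predicate,
#    e.g. the branch and reviewer information shown in the demo:
cosign attest --key cosign.key --predicate review.json \
  registry.example.com/pause:latest

# 3. Verify the signature; the attestation is stored alongside it in
#    the registry under a tag ending in .att:
cosign verify --key cosign.pub registry.example.com/pause:latest
```

With the attestation pushed, the verify-images rule's attestation conditions can be evaluated against the signed predicate at admission time.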
Just entering the password for the private key. Okay, so now our container image should have that attestation. Let me quickly refresh. Okay, good. Now you can see there is another signature here ending in `.att`, which indicates the attestation has been added to your container image, right? Let's run the pod using the same image again. In this case, because the image is coming from the main branch and was reviewed by an approved reviewer, Kyverno allows the pod creation. Okay, so that's an example of how you use Kyverno to verify image attestations. Let me quickly pause here and see if we have any questions.

Not so far, but hopefully we will get more and more as we go along.

Okay, then I'll just proceed from here. All right, the next example is to look up image data from your image registry and use that in a validate or mutate policy for further policy decisions, right? So again, I have a cluster policy here which matches pods, and I'm simply filtering out delete requests. With this validate rule, I'm going to iterate through each container image and calculate the image size, right? As you can see, I've defined the context attributes here, using a JMESPath expression to get the size of the image and store it in this `imageSize` policy context variable. This variable is later consumed by the deny rule. Here I have a condition saying: okay, if my image size is greater than two gigabytes, then block the resource; otherwise allow the creation, right? So let me clean up the previous cluster policy, delete the workloads, the test pod I just ran, and then apply the block-large-images policy to the cluster. Okay, let's check whether the policy is ready or not. It's not ready yet. Okay, now it became ready. First I'm going to create a busybox pod.
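The image-size validate rule just described can be sketched roughly as follows. The two-gigabyte limit and the `imageSize` variable come from the demo; the policy name and exact expressions are assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-large-images
spec:
  validationFailureAction: enforce
  rules:
    - name: check-image-size
      match:
        resources:
          kinds:
            - Pod
      preconditions:
        all:
          - key: "{{ request.operation }}"
            operator: NotEquals
            value: DELETE          # skip delete requests
      validate:
        message: "Images must be smaller than 2Gi."
        foreach:
          - list: "request.object.spec.containers"
            context:
              - name: imageSize
                imageRegistry:
                  reference: "{{ element.image }}"
                  # sum the layer sizes reported by the registry manifest
                  jmesPath: "sum(manifest.layers[*].size)"
            deny:
              conditions:
                all:
                  - key: "{{ imageSize }}"
                    operator: GreaterThan
                    value: 2147483648   # 2 GiB in bytes
```

The `imageRegistry` context entry is what performs the registry lookup per container image; the deny condition then compares the computed size against the limit.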
You know, the busybox image is tiny; there's no chance it exceeds the limit, right? So the busybox pod creation should be allowed. Let's execute the command: `kubectl run demo --image busybox:1.28`. In this case, Kyverno looks up the image data, calculates the size of the busybox image, and after that you see the pod creation pass through. Then I'm going to create another pod using this large image, whose size clearly exceeds two gigabytes, and ideally Kyverno should block that resource creation for me. So let's run that command. Now you can see: okay, my block-large-images policy blocked the pod creation because the image size exceeds two gigabytes. All right. Next I have another policy leveraging the image data, but this time used in a mutate policy. It replaces the image tag with the resolved reference. Again, I have a cluster policy matching pods and filtering out delete requests. In it, I have a foreach mutate rule defined, which again iterates through the container images and stores the resolved image information in a policy context variable. This variable is later used in this mutate patchStrategicMerge pattern, which just replaces the image with its resolved reference. Okay. So let's apply the resolve policy to the cluster, and then let's use that pause image again. Okay, it took a while, but you can see the pod has been created. And if we inspect the pod's image, you'll see the image has been replaced by its resolved digest reference instead of the tag I specified in the command. Okay, that's an example of how to use image data in validate and mutate policies. All right, going back to the slides.
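The tag-to-digest mutation demoed above might be sketched like this; the policy name, variable name, and expressions are illustrative assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: resolve-image-digest   # hypothetical name
spec:
  rules:
    - name: replace-tag-with-digest
      match:
        resources:
          kinds:
            - Pod
      preconditions:
        all:
          - key: "{{ request.operation }}"
            operator: NotEquals
            value: DELETE
      mutate:
        foreach:
          - list: "request.object.spec.containers"
            context:
              - name: resolvedRef
                imageRegistry:
                  reference: "{{ element.image }}"
                  # the registry lookup exposes a digest-pinned reference
                  jmesPath: "resolvedImage"
            patchStrategicMerge:
              spec:
                containers:
                  - name: "{{ element.name }}"
                    image: "{{ resolvedRef }}"
```

The same `imageRegistry` context lookup used in the validate example feeds the mutation here, so the admitted pod ends up pinned to a digest rather than a mutable tag.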
There was a question from the audience as well. We got some answers in the chat already, but if you want to expand a bit: Santosh asked about, sure, they got details, thank you. And then Amir asked about JMESPath, which is a query language for JSON. So if you want to expand on those, yeah.

Okay, I saw they were already answered and some links were provided, but if you want to learn more about JMESPath and that type of policy, feel free to browse the Kyverno website; we have all the documentation ready on the writing-policies pages. We're also working on the 1.6 documentation, trying to publish everything along with the Kyverno 1.6 release.

And just to expand on JMESPath, because this is a theme that comes up pretty frequently in Kyverno policies, a real quick overview: you can obviously go and read the link, but JMESPath is just a query language for JSON that we use internally to perform filtering and selection on complex data structures. This is beneficial because you can get many of the powerful things you need out of policy without having to write in a programming language. By tapping into a filter language, we can allow some pretty complex expressions to be built to do pretty much anything, or most things for sure, that you'd need to do with policy. So that's a little bit of background on JMESPath, and you'll see it across many Kyverno policies. It's not a requirement; some things you may need JMESPath for, others just need a simple pattern statement.

Perfect. And then there's an extra question, so, Amir, to continue: what if we want more complex mutations?

Yeah, I mean, as with most use cases, if your use case gets more and more complex, then you may need to involve some JMESPath. But for simple mutations, you may not need any.
If you do need to do complex mutations, whatever the definition of that is, it may be, well, number one, a matter of stating what your needs are. Then if you look at the documentation, which we try to keep pretty up to date and pretty descriptive of all the capabilities, you can see examples of how that would be done, including in the policies Shuting is showing. I'll show this a little bit later, but we've got an easy way, which she's pointing out, to filter on mutate policies so you can see how that's done to solve your use case, or in many cases it may already be done, and you just pick it up and use it.

Right, exactly. Currently we have 22 mutate policy samples available on the website, so feel free to browse them. If they can't address your use case, feel free to reach out and we'll definitely add it to our sample policies.

Perfect. And then one additional question, so I'm going to continue: can we use scripting languages, like Lua or something?

Yeah, right now the expression language we're enabling is JMESPath. It's not bring-your-own scripting language. We are evaluating some ways to maybe bring your own language, like maybe JavaScript or something, but as of right now, it's really just the one. And quite honestly, we've found with JMESPath, at least in my experience, and I keep up with a lot of what the community is doing, that I haven't run across much, if anything, that can't be accomplished with Kyverno policies and JMESPath. I'm sure there are some use cases, but they're not prevalent.

Right. And we're also avoiding external calls; if we enabled something like JavaScript, there would be no control over that, right? Currently Kyverno can look up data from the existing cluster through ConfigMaps and through API calls, and this image data lookup was just recently added. But that's all for Kyverno's lookups.
So that's one of the other reasons we're hesitant to add that to Kyverno.

Perfect, thanks for the extra info. Keep the questions coming, everyone. We will answer as we go, as well as at the end. Thank you so much for the questions so far.

Okay, thanks. Sounds good. Let me just continue from here. All right, there are also a few other critical enhancements we added in Kyverno 1.6. One is related to Kyverno's performance: we've seen that prior to 1.6, Kyverno's memory usage grows on large clusters. The reason is that we used dynamic informers in the background controller, and by default Kubernetes informers maintain an in-memory cache. So if you have thousands of resources installed in the cluster, Kyverno's memory would grow. With 1.6, we moved away from dynamic informers in the policy controller, and it's been verified that memory usage was reduced from around 400 MB to 200 MB with 80 CronJobs and 10k ConfigMaps scheduled in the cluster, right? In general, we don't want Kyverno's memory usage to grow, especially when you're running it on large clusters. With the 1.6.0 release candidates, we're trying to collect more data on that, and we'll keep monitoring memory usage in the future. Okay. Another related topic is how Kyverno deals with failure scenarios, right? We've heard of use cases from the community where the user shuts down the entire cluster, or at least the data plane, at night and then restarts it in the morning without terminating Kyverno gracefully. In that case, when you bring back your cluster, there's a chance that Kyverno has not recovered yet, so the admission webhook configurations would block all the resources from recovering, right?
So for that case, in 1.6, Kyverno enables a namespace selector by default for you to exclude namespaces dynamically, especially namespaces like kube-system and the other default namespaces, where you may want to exclude the workloads. You can also configure the failure policy for webhook configurations per Kyverno policy. For example, if you set the failure policy to Fail, then if the webhook is not responding, or there's any error in the admission webhook, Kubernetes will just reject the admission request. If you have it set to Ignore instead, then after the timeout, Kubernetes will just allow everything to pass. Okay. We also have the dynamic webhooks, introduced in 1.5, where Kyverno automatically manages the webhook configurations based on the installed policies. That is to say, when you don't have any policies installed in your cluster, Kyverno won't impact any of your cluster workloads or resources. Okay. One last demo from me for today is the namespace-scoped failure action. As you've seen in the previous demos, some resource creations got rejected because I had the failure action set to enforce. Previously, for a cluster policy, you could only control the failure action across all namespaces. With this override ability on the failure action, you're able to make exceptions for particular Kubernetes namespaces and either enforce or audit the policy violations there, right? So the example I have for today is this validate policy. Let's look at the rule body: it matches pods and verifies that the pod has the label `app` defined under the labels block, right? By default, the failure action is set to audit, but I have these overrides defined, saying that within the `prod` namespace, I'm enforcing that the workloads, the pods, must have this label with the key `app`. Right.
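The per-namespace override just described can be sketched like this; the `prod` namespace and `app` label follow the walkthrough, while the policy name is a placeholder:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: audit        # default: report only
  validationFailureActionOverrides:
    - action: enforce                   # but block in prod
      namespaces:
        - prod
  rules:
    - name: check-app-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "The label `app` is required."
        pattern:
          metadata:
            labels:
              app: "?*"                 # at least one character
```

The same rule body serves both modes; only the failure action changes per namespace, which is what lets the default-namespace pod through while the `prod` pod is rejected.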
By the way, Kyverno automatically handles the pod controllers like Deployment, StatefulSet, and others by adding additional autogen rules to the policy. So let's quickly apply the policy. Let me just clean up the previous one, because I don't want my resources to be blocked by any of the previous policies. Okay. After that, let's apply the override-failure-action policy to the cluster. Once it's ready, and it's ready now, I'm going to create two pods: one in the default namespace and another in the `prod` namespace, both without the label `app`. The expected behavior is that the pod in the default namespace should be allowed, but the pod in the `prod` namespace should be blocked. Okay. So let's say `kubectl run`, what name should I use, `test-failure-action`, from the nginx image, in the default namespace. And you can see the creation has been allowed. If I try to schedule it in the `prod` namespace, the pod creation is blocked by Kyverno. So that's how you can make exceptions per namespace.

A couple of questions came up here, Shuting: the label pattern, is that a regex? And can you explain what the label pattern does?

Right, it's a wildcard-style pattern, and it's just saying you must have this label defined on your incoming pod or workload. The question mark indicates that you must have at least one character, and the star character matches any sequence of characters.

And what are the possible action types that are available?

Right, the action types. So currently we support CREATE, UPDATE, DELETE, and CONNECT. CONNECT is used when, for example, you try to exec into a Kubernetes pod; the admission request is sent with the CONNECT type of operation. So those are the four types of operations.
And if the question was more around what action types a policy can take, there are really two: there's audit, which means that the resource will be allowed but reported on in a policy report, and there's enforce, which means that the object will be immediately blocked.

Yes. So enforce and audit are the two options.

Right, the failure action only supports enforce and audit.

Okay, cool. All right, I guess that's all the demos from me for today. Let me stop here and hand off to Chip, who will talk about a few new JMESPath operators and the updates to Kyverno policies.

Yeah, thank you, Shuting. Let me pull this up, and I realize I hit the wrong button, but one thing I'll mention that Shuting's last slide reminded me of, which may not be fully appreciated: when she showed the screenshot of the validation failure action overrides, you'll notice that it was a `kubectl explain` command. Everything in Kyverno, all of the CRDs, are OpenAPI v3 schemas, which means you can do an explain on any aspect of any portion of a Kyverno policy or report, and it should tell you what it does, what it means, and how to configure it. That's a super helpful thing, as I'm sure you all know: you don't have to necessarily go and dig through the documentation or look at PRs; just `kubectl explain` whatever you want, and it'll tell you how to go about writing that policy object. And Shuting and folks, if there are any questions as I'm going through this, either just stop me or we'll get to them at the end. So I want to keep going where Shuting left off, as far as some of the new things that are coming out in 1.6, and also show some demos around them. First thing: we've got several new operators that make it even easier, or in some cases possible, to write new policies.
And for those that aren't familiar, operators come into play when you're building an expression in a Kyverno policy. A simple pattern works well in a lot of use cases, but in other use cases you need to be more expressive about what things should be checked, how they should be checked, and against what set they should be checked. These operators are useful in preconditions, which are a way to determine whether a policy rule is actually going to fire; a precondition is sort of an intermediary between a match statement and the actual policy body, so you can further refine, based on these operators and an expression that you build, whether the rule should actually apply to an incoming object. And also in validate policies, we have a type of validate that is a deny. A validate rule is just saying: make sure the resource looks like this. A deny inverts that and says: go ahead and block if it looks like this instead. In both cases, we can use these operators. We've got four new ones here: AnyIn, AllIn, AnyNotIn, and AllNotIn. Let me flip over and demonstrate how this works. I'm looking at a policy here that uses the AllIn operator. In the validate rule, we're doing a foreach and looping through all of the containers, the ephemeral containers, and the init containers. The objective is to block any pod that has any container attempting to add both of these capabilities together, not just one or the other, but both of them together. So if you see a container whose add statement contains NET_RAW and NET_BIND_SERVICE together, deny the pod. Using the AllIn operator, we can do that.
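The AllIn check being described might be sketched like this; the capability names follow the demo, while the list expression, message, and defaulting are assumptions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-risky-capabilities   # hypothetical name
spec:
  validationFailureAction: enforce
  rules:
    - name: block-net-raw-plus-bind
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Adding both NET_RAW and NET_BIND_SERVICE is not allowed."
        foreach:
          # JMESPath multiselect covering all three container lists
          - list: "request.object.spec.[containers, initContainers, ephemeralContainers][]"
            deny:
              conditions:
                all:
                  - key: ["NET_RAW", "NET_BIND_SERVICE"]
                    operator: AllIn
                    # default to an empty list when capabilities are unset
                    value: "{{ element.securityContext.capabilities.add[] || `[]` }}"
```

A pod adding only one of the two capabilities passes, because AllIn requires every entry in the key to be present in the value.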
We're going to put those two values in the key statement, and we're going to check whether all of them, not just one or the other, but all of them, are in the value, which is whatever's specified for the add statement in a pod. So if we look at an example of a bad pod here, I've got a pod that does add both of those; in addition, it adds a third one, as shown, but it also adds the two that we don't want. So I should be able to block this pod. I'll create the policy, make sure the policy is ready, and apply the bad pod. And we can see, as expected, the AllIn check has blocked it, because it sees that both values were in the add array for this container. So it blocked that. By comparison, if we look at a good pod that has only one of the ones we didn't want, not both of them, with the operator being AllIn, we should be able to apply the good pod. And as expected, that's allowed. So that's AllIn; pretty straightforward. All of these operators are really helpful in building either allow lists or deny lists, depending on how you build the expression. And then if we take a look at AllNotIn, it's basically an inversion of that statement, where we can deny when required values are missing. It's a similar type of expression, but we're saying deny a pod if these two, in this case capabilities in a drop statement, are not in the drop list. So it's an inverted check. Let's look at a bad pod. First I'll get my policies, and I'll apply this policy. The bad pod has SYS_CHROOT and NET_BIND_SERVICE, but the policy said make sure these two, all of them, are not missing from the drop list, and as expected, that has been denied. By contrast, if we look at a good pod, it does have both of the required ones, so all of those values are contained, and it's created.

Yeah, and there's a question from the audience.
How can we be notified whenever an audit action reports a pod that would otherwise be blocked? And is there an alert system, like the one in Kubernetes, or something similar? From Amir.

So yes. By default, and we mentioned this a minute ago, the validation failure action can be either enforce or audit. If it's enforce, that means that the pod, or whatever the resource is, is going to be blocked immediately. If it's in audit mode, it will be allowed, but it'll be reported in a policy report object. You can see these if you do, for example, `kubectl get cpolr`; `cpolr` is the alias for cluster policy report. I don't have anything there, and I don't have anything here; nothing is reported. But if I created that bad pod and the policy was in audit mode, I would be able to get the policy report for this namespace, because a pod is a namespaced resource, and see that one of them had failed the validation. That's one way for you; or, if you want to decouple that functionality, since it's just another custom resource, you can delegate it to, say, a security group, and the security group has the ability to read those policy reports. You can get that information that way, which is really helpful: your Kyverno administrators can create policies, and some other group can simply see what's going on. On the metrics angle, we also have Prometheus metrics that are exposed, and yes, it will show in those metrics. I don't remember which exact one offhand, but it will show the number of resources that have failed. You can go to the documentation, and if anybody wants to post a link to it, there's a whole page on monitoring with Prometheus and all of the metrics that are exposed. And we've also got a really cool project that's out there.
It's basically a front end, a policy reporter front end, that'll show you a lot of these things in a nice GUI, without you having to roll your own. It's an optional component; if you want to install it alongside Kyverno, it'll give you a nice web-based dashboard where you can go in, take a look, and see all of your policies and what your audit status looks like. So, several good tools there to help.

Perfect, so we're getting all of those links into the chat. And I also want to say thanks so much to everyone who said hi or wrote in the chat. Hello to all of you as well; lovely that you joined today. And yeah, perfect.

Cool. So I'm going to skip over a couple of these in the interest of time. We talked about the first two, and it's a similar story with the other two. The gist of these is that we're providing much more granularity for you to select exactly what you want when you build an expression, by saying either any of these things can be in, or all of these things can be in, and vice versa. Super powerful; they'll help unlock new possibilities for either writing new types of policies or making existing policies better. And the last two things here: we now have support for integer ranges, and some of the existing operators, like greater-than and less-than, now support durations and SemVer. A couple of these aren't exactly new in 1.6, but they bear mentioning because they're super powerful. Looking at the range first: in this policy, we want to check the host port, so if you define a pod and it has a host port in it, this range operator is a simple way for you to express a large range of ports. You could use it for any type of integer, but here we've got 5000 to 6000; you can simply write `5000-6000`, where the dash is the range operator.
And any pod that matches this, again, this is a JMESPath expression here, will just be blocked. So, if we take a look at an example of a bad pod, this one is using hostPort 53. That should be denied. So we apply the policy we just saw a minute ago, which says: collect all of the containers, including ephemeral and init containers, look at the ports, and look at the host ports. And if any of those are not in this range, deny it, because this is a deny rule. So we're basically saying 5,000 to 6,000 is our allowed range; anything that falls outside of that is bad, so block it. And as you can see, our bad pod here specifies hostPort 53. DNS is generally not something you want users to expose on a host port. So, let's see if we can apply the bad pod and skirt around that. And no, we can't. By contrast, let's see what a good pod looks like. This one specifies quad fives, 5555, for the host port. Can we apply this one? Yes, we can, because it falls within the allowed range. So that one is allowed. And on the greater-than, just quickly on these, we'll go and show demos, but on the duration comparison and the SemVer comparison: Kyverno now natively understands SemVer. So you can have, again, another JMESPath expression, which goes into a pod, because that's our kind, looks at the label named version, and gets its value. And we can compare that using greater-than, and Kyverno natively understands: is this greater than 1.4.0? You don't have to write any complex substitution logic or scripting logic. You can just ask, hey, is the version greater than 1.4.0? If it is, block it. If it's not, allow it. So Kyverno understands that and the value. And it also understands durations.
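The SemVer comparison just described might be sketched as the following rule fragment; the `version` label name and the message are illustrative:

```yaml
validate:
  message: "Version label must not exceed 1.4.0."
  deny:
    conditions:
      any:
        # read the pod's `version` label (hypothetical label name)
        - key: "{{ request.object.metadata.labels.version }}"
          # GreaterThan now understands SemVer natively, so
          # "1.10.0" correctly compares higher than "1.4.0"
          operator: GreaterThan
          value: "1.4.0"
```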
So, in this case, the use case might be, particularly in audit mode, if you want to be informed about pods that have been running out there for over a week or a month, where you want the images refreshed over time and those pods rebuilt. You can have a label, or whatever the key is, that expresses time, and Kyverno with these operators now understands time. So it'll understand that 12 hours is greater than 11 hours, and you can express it in minutes as well. So, a couple of new capabilities for the existing operators, in addition to the new operators, that make it possible to write new types of policies, make existing policies much smoother, and even consolidate rules. Because that's another thing: you want to express your intent in the most efficient manner possible, and in some cases in the past it might have taken several rules to do that, even though those rules were expressed fairly simply, in that you're not programming what the rule is, you're just writing your intent. But now you can collapse those into one rule, or fewer rules. Yeah. And there's an audience question once again: any recommendations on patterns and best practices for organizing and structuring policies in large multi-tenant clusters? Yes, a great question. We should have a section on the website that talks about organizational strategies that may be useful to you. But in general, a Kyverno policy, regardless of whether it's a ClusterPolicy, which is a cluster-scoped resource, or a Policy, which is a namespace-scoped resource, is really a container. The things that matter are the rules.
And so we see in some cases that some users like to write one ClusterPolicy that has all of the validate rules, and maybe another ClusterPolicy that has all of the mutate rules. Or, depending on what they're trying to do, maybe there are a bunch of rules they need for a specific type of resource. For example, you have a bunch of policy that you want to wrap around pods, but then another set of policy that you want to wrap around services. Well, you could create two ClusterPolicies: your pod policies in one, your service policies in another. And a combination of that if you wanted to, and the same type of thing when it comes to namespaces. So you can have a set of cluster policies, and also, if you're delegating, especially in multi-tenant environments where you're going to delegate some of these functions to your namespace administrators, or however you call them, you probably want them to be able to set some policies, and they need the ability to define those. Well, you can define yours in a ClusterPolicy, and they can define theirs in a Policy, and you don't collide, because one policy can't override another; they just extend whatever the conditions are. So there are a lot of ways to go about it, but at the end of the day, Kyverno aims to be flexible, and also simple with that flexibility, to give you whatever control you need for your environment, your customers, and your users. Perfect. And then there was another question as well, from a viewer: can we also generate other resources on validation, such as generating CRs or creating a Deployment, for example? So a generate rule is a specific type of rule in Kyverno, and it's a different type from a validate rule. A validate rule is basically a yes-or-no answer: here comes a resource, we want to take a look at it, is it allowed, yes or no?
That's a validate rule. A generate rule is: here comes a resource; based on that, and the criteria in the rule, what other resource should we create in response? So they're two separate rule types, and they have to be expressed separately. The purpose of a generate rule is to generate a new resource, but the trigger still has to match the criteria in the rule. If the criteria aren't met, it doesn't deny the resource; it just says, all right, the pattern isn't what I'm looking for to create that resource, so I'm not going to do anything. Perfect. Okay, so we're a little behind here, so I'm probably going to have to skip over these, because we want to leave some more time for Q&A. So we talked about JMESPath and what JMESPath is; just a quick recap on that. It's a powerful JSON query language. If you're familiar with jq, the filtering tool, JMESPath is similar to that, but it has a lot more of what are called filters built into it. And in this release, we've added a whole bunch more. I did want to show some of the ones highlighted here, but since we're getting short on time, I'll just mention that in addition to the existing filters, and you can go to the JMESPath website and see all the ones they have there, which are super helpful, we've also created a whole bunch of new ones that allow you to do common operations, like being able to divide, and have Kyverno understand what the resources are in Kubernetes terms. That's one of the great things about Kyverno: since it doesn't require you to program, we try to build in as much of the logic as possible. So you can divide one resource, like 100 mebibytes, by another resource, 50 mebibytes, and Kyverno just does the math behind the scenes, including the unit conversion. And it's similar with all of these others. time_since: the ability to look at a timestamp in an object, compare it to now or to something else, and make decisions based on that.
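A few illustrative expressions using these new filters; the values are hypothetical, and the exact filter signatures should be checked against the Kyverno JMESPath documentation:

```yaml
# Not a complete policy; each entry sketches one new Kyverno JMESPath filter.
examples:
  # unit-aware division of Kubernetes resource quantities (yields 2)
  - "{{ divide('100Mi', '50Mi') }}"
  # normalize a path so there is a single slash between segments
  - "{{ path_canonicalize('/var//log/') }}"
  # elapsed time from a timestamp to now (an empty end time means "now")
  - "{{ time_since('', '2024-01-01T00:00:00Z', '') }}"
```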
Without you having to write a bunch of logic, we've already done that. path_canonicalize: the ability to take a path that's specified in, say, a hostPath and have it normalized, so there's just one forward slash between each path segment. I don't have the time to go into all of these, but I urge you to go look when the documentation becomes available and try them out. These are all new JMESPath filters coming in Kyverno 1.6 that are not found in the upstream JMESPath library. Also, foreach enhancements. I'm just going to talk through these real quick, because we're running out of time. foreach is an ability in Kyverno that allows you to loop over objects in an array. One of the things coming in 1.6 is the ability to use JSON Patch (RFC 6902) patches to loop through objects in a foreach loop and do things like removes. Strategic merge gives you the ability to do a Kustomize-style patch, but with JSON-style patches you can do things like specific removes, and that's now supported. There's an elementIndex variable, which lets you refer to the specific index being operated on; this goes hand in hand with the JSON 6902 patch. Element scope: the ability to look at the top-level object or field in a certain resource, and if you need to operate outside of that, you can set this element scope field, which lets you pick and refer to any other field that may be in that resource. Also, context loops: Shuting showed this a little in some of her sample policies, but context can now be provided inside of a loop. The use case she showed was looking at variables from an image: from an upstream OCI registry, you want to look at metadata on an image. You can put that as a context inside the loop and just iterate over those things.
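The foreach plus JSON 6902 combination described above might be sketched like this, assuming the 1.6 mutate syntax; the field being removed is a hypothetical example:

```yaml
mutate:
  foreach:
    # iterate over every container in the incoming pod
    - list: "request.object.spec.containers"
      # {{elementIndex}} resolves to the index of the current element,
      # letting the JSON 6902 patch target each container individually
      patchesJson6902: |-
        - op: remove
          path: /spec/containers/{{elementIndex}}/securityContext
```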
So for those that didn't know, Kyverno has over a hundred sample policies that you can download and use right now. They cover all sorts of resources, across all of the Kyverno rule types, and many of them are even outside of the Kubernetes core resource types. We've got some there for Traefik, and I think some for Velero and cert-manager. So there are over a hundred policies out there that you can either download and use right now, or, at the very least, they're great for teaching and for giving you concrete examples of how to build the policy you want. And if you don't find the one you're looking for, you can probably easily customize one that's there. And if all of that fails, you can open a feature request, or come to our Slack channel and tell us what you're trying to solve; we'd be interested to figure out how to do that. And there are even more coming in 1.6, with over 20 new ones. Then, lastly on this, the Kubernetes Pod Security Standards: this is a set of standards that explains what types of controls should be in an environment. Kyverno built these quite a while ago, but we've now refreshed them with the capabilities that are in 1.6, and also so that they align with what the PSS is upstream. So, 18 or 19 new policies and over a thousand test resources, just for these Pod Security Standards policies. Unfortunately, I don't have time to show how these can be tested in the CLI, but they are made available. And they also cover ephemeral containers, which is now turned on by default in 1.23.
And one last note for those of you who have worked with Kyverno policies, specifically validate policies: if you have a deny rule, the auto-gen capability, the nice capability in Kyverno that takes a rule you write for a Pod and automatically translates it for the other Pod-level controllers, like Deployments, StatefulSets, and CronJobs, now works for deny rules as well. So that's another example of us doing more of the work on your behalf, and you being able to simplify your rules and just get what you want automatically. So, just to quickly wrap up here, really the summary of this release is adding more features that solve more use cases through policy. You know, Kyverno's motto is: we want to do things easily, we want to do things intelligently, and we want to eliminate the burden of expressing policy. Just get on with the job, make it easy. We're trying to do more and more of that in 1.6. There's no programming required anywhere, and it's very quick to get up and running. And as time goes on, we try to look at use cases, solve more of those problems, and build more capabilities into Kyverno that let you do more in less time. The other two themes of this release are less resource usage, because we know that clusters are becoming larger, more people are using it, and there's more stuff being thrown at them, so we want to reduce the resource usage and make it less memory-intensive, and more robustness for your production use cases. You should be able to trust this; this is an element of security. So we're putting a lot of work into figuring out how we can make it more resilient and more robust in the case of these types of failures and other events that occur. There's been a lot of work put in in that regard. So yeah, that's really the summary of what we have for you.
And if there's any time left, we'd be glad to take some Q&A here. Yeah, we have about one minute, but we can run through a quick question or so. So essentially this is the last call for questions from the audience; if you have anything, ask now quickly and we can get to it. And there was one question that got answered with a quick note, but just if you want to explore it further: there was a question from Amir, are generated resources also validated? Shuting, do you want to take that one? I think you're on mute, maybe? Okay, yeah, it was just on mute. Yes, you can validate a Kyverno-generated resource by adding another validate policy to your cluster. There you can select that specific resource using its labels, because Kyverno adds labels to its generated resources, and you can use those to validate whatever configurations you want. Perfect. Now we're not getting questions, but we are getting compliments: great job, and kudos to the team. So, great job from everyone today. If there aren't any last questions, and we are perfectly on time here as well, we did answer a lot of Q&A throughout the webinar; there was a lot of great interaction, thank you so much everyone for that. But yeah, let's start wrapping things up. So thanks, everyone, for joining the latest episode of Cloud Native Live. It was great to have Kyverno here to talk about their latest and greatest new things. I really have to say, there was so much interaction from the audience; amazing to see that. Thank you so much for all of the great questions. As always, we bring you the latest cloud native code every Wednesday, so you can tune in every week. So thank you for joining us today, and see you next time. Thank you, Annie and Tyler.