Thanks for joining us today, everyone. Welcome to today's live webinar with CNCF, What's New with Open Policy Agent Gatekeeper. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Jaydip and Nilekh, both software engineers at Microsoft. A few housekeeping items before we get started. During the webinar you're not able to speak as an attendee, but there is a chat box on the right-hand side of the screen. Please feel free to drop your questions there; we'll get to some in the middle if we can, but we'll leave most of them for the end. This is an official webinar of CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under online programs. They're also available via the registration link you used to join today, and they'll be available on our YouTube playlist for online programs. With that, I will hand things over to Jaydip and Nilekh for today's presentation.

Cool. Let me share my screen first. I hope you can all see the screen. (Yep, looks good.) Hello everyone, we are delighted to have you all here with us for this CNCF webinar. Today's focus is on the Gatekeeper project and the exciting new features our community has been working on since our last update. We're thrilled to share them with you. My name is Nilekh Chaudhari and I work as a software engineer at Microsoft Azure; I'm also involved in contributing to Kubernetes and other projects in the ecosystem. I'm here with my colleague Jaydip, who is one of the contributors to the Gatekeeper project and will be presenting alongside me.

We have a few topics on our agenda for today's presentation. First, we'll delve into the Gator CLI and its practical application. Next, we'll explore how we can use external data to interface with various external sources through the provider-based model, how it works, and how to use it. We'll then examine validation of workload resources and how expansion templates work. Additionally, we'll discuss the concept of mutation in Gatekeeper. As an update, we have recently added a new feature that lets users consume violations over a pub/sub model, and we'll take a look at that. We'll also touch on CEL, the Common Expression Language, and how it relates to Gatekeeper, before wrapping up with some other updates.

Okay, remember the Agile Bank from the last Gatekeeper webinar? I'm not sure whether you attended the previous webinar, but Agile Bank was using Gatekeeper as the policy engine in their clusters, and they're back with some exciting news. As developers at Agile Bank, we have decided to implement a new policy that requires a valid owner label on any Kubernetes resource. So if you're trying to deploy any Kubernetes resource, we want to make sure it has a valid owner label. And guess what, we're going to use the latest features available in Gatekeeper to make it happen. So let's take a deeper dive into the details. We'll first look into the Gator CLI.
So if you're familiar with Gatekeeper, you may have wondered whether it's possible to test constraints before applying them to a Kubernetes cluster, or how to incorporate these policies into your CI/CD process. For example, you have an existing CI/CD pipeline, and the many developers on your team are writing policies: is there a way to test them before they ever land on the cluster? Gator CLI helps you do exactly that. It lets you perform shift-left validation testing by verifying policies prior to deploying them to the Kubernetes cluster, so you don't have to deploy a policy first just to see how it behaves, or redeploy every time you want to tweak it. With Gator CLI you can validate policies with ease and ensure a smoother deployment process.

Gator CLI has several subcommands, but I want to focus on the two that are most important for any developer or user of Gatekeeper: gator verify and gator test. We'll come to gator verify a little later.

Let's take the example scenario from before, where we need a policy that requires an owner label. As a seasoned developer at Agile Bank, I know how to write a policy, and as you can see from the directory structure, I have the constraint template and the constraint in place. If you're familiar with Gatekeeper policies, a constraint template defines the Rego, which is essentially the policy itself, and the constraint defines which resources the policy applies to and what parameters it takes. The typical directory structure looks like this. By the way, the owner-label example I'm using here is already present in the Gatekeeper library, so feel free to try it out or browse the other policies while you're there.

So we have this directory structure with the policies. However, I still need to check that the policy I've written actually works, and this is where gator test comes in handy. You take the policy you've written, run gator test against your resource files, and you immediately receive an error if the pod doesn't have an owner label. On the right-hand side I have pod.yaml, which is about the simplest pod definition there could be, and it doesn't have an owner label, so running gator test against this resource with the policies we've written produces an error. We'll see the demo in action in a bit.
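To make that concrete, here is a rough sketch of what a template and constraint pair for the owner-label policy could look like. This is a simplified version modeled on the k8srequiredlabels example in the Gatekeeper library, so treat the exact names and fields as illustrative rather than the exact policy Agile Bank ships:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          # labels present on the incoming object
          provided := {label | input.review.object.metadata.labels[label]}
          # labels the constraint says must exist
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: all-pods-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    labels: ["owner"]
```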
Gator verify, on the other hand, helps us validate that the constraint template and constraint we've written are themselves correct. As a good developer it's always important to write tests for your code, and this is where the test-driven side of Gator comes in. On the right-hand side we have a file called suite.yaml, and that file serves exactly that purpose: it defines which template we are testing, which constraint we are testing it with, and a set of assertions, such as whether we expect violations or not. With the suite.yaml file and gator verify, you can make sure the policy you've written is correct.

So the distinction is: with gator test you check whether a given resource you're about to create would be allowed or not; with gator verify you check whether the policy itself is correct, using the allowed and disallowed examples we keep in the directory structure. When you run gator verify, it tells you whether each case passes or not.

Let's quickly look at a demo of these commands. We'll start with gator test. As I mentioned, we have this directory structure for the policy we've written: samples holds the allowed and disallowed examples plus the constraint, suite.yaml defines the tests for the whole policy, and the template is the actual Rego template. Let's see what we have in these resources. There is a pod.yaml that I intend to deploy to my cluster, and I want to make sure this pod has an owner label. As you can see, it does not, so we should expect an error. When I run gator test and provide the policy path along with the resource file we're testing against, we get an error saying that all pods must have an owner label. Without Gator CLI you would typically have to deploy the policy onto the cluster first and then actually try to create the pod before seeing an error like this; Gator essentially lets us shift left and get everything right before we ever deploy the policies.

Now let's take another example where the pod does have the label. The only difference between the previous YAML and this one is the owner label, which is set to nilekh.agilebank.demo. When we run this resource file against our policies we should not get an error, and as expected, there are none. That means this policy will allow the pod we're going to create in the cluster, so we can be confident the policy works as intended.

Let's also quickly look at gator verify. Remember, gator verify lets you verify the policy itself; it's essentially a way to write tests for your policies and make sure any new policy you create has been tested correctly. Again, we start with the same directory structure, and let's quickly see what's in the suite.yaml file.
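As a sketch of what that suite file might look like for the owner-label policy (the file names and paths here are illustrative; they just need to match your directory layout):

```yaml
apiVersion: test.gatekeeper.sh/v1alpha1
kind: Suite
metadata:
  name: required-owner-label
tests:
  - name: owner-label
    # the template and constraint under test
    template: template.yaml
    constraint: samples/constraint.yaml
    cases:
      - name: pod-with-owner-label-is-allowed
        object: samples/example_allowed.yaml
        assertions:
          - violations: no
      - name: pod-without-owner-label-is-rejected
        object: samples/example_disallowed.yaml
        assertions:
          - violations: yes
```

You would then run something like gator verify ./... from the policy directory (or point it at the suite file directly), and Gator reports pass or fail per case, which is what we'll see in the demo.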
What it does, again, is let you define what you're going to test, which constraint you're testing it against, and some assertions about whether you expect a violation or not. If we now run gator verify against this whole directory, it tells us whether each case passes. This is very helpful for pinning down what a policy should do and whether it actually works.

Cool, that's all for the Gator CLI. Now let's look at another feature we implemented recently. Continuing with the same required-label policy we saw earlier, let's explore how we can leverage the external data feature in this scenario. Before we delve into that, let's briefly discuss what external data is. Gatekeeper offers various ways to mutate and validate Kubernetes resources, but in many cases the data it uses is built-in, static, or user-defined. With the external data feature, Gatekeeper can now interface with external data sources, such as image registries or any other data that is not available within the cluster, and it does so with a provider-based model. This model establishes a common interface for extending Gatekeeper with external data. Leveraging external data brings several benefits, including addressing common patterns with a single provider and simplifying the authoring of constraint templates against external data sources.

Returning to our previous example: suppose a user specifies an owner label on a resource. We want to validate that the specified owner actually exists at Agile Bank. The owner label could be literally garbage.agilebank.demo or anything else, and that user may not be a real person, so we want to make sure it is actually a valid user. Since this data lives outside the Kubernetes cluster, an external data provider can assist us in validating it; there is no way we're going to have every employee of the company represented inside the cluster. If you look at the last line of the example on the right-hand side, it says send external data request: that is where we invoke the external data provider and hand it the owner information. The line before that gathers all the owners we may have and passes them along. The external data provider then verifies whether the owner actually exists in Agile Bank's Active Directory and returns the result accordingly.

Let's quickly see how this looks in real life. First, the constraint template: it's the same template I was showing on the slide earlier, and you can see we grab the owners and send the request to the external data provider. That's what the template does when this policy is in effect and applied against incoming resources. Let's also look at the constraint we defined for this: it's a typical Gatekeeper constraint where we say the enforcement action is deny and we want the policy to run against pods.
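Putting those pieces together, here is a rough sketch of what an external-data-aware template along those lines could look like. The kind name, provider name, and error handling are placeholders rather than the exact policy from the demo, but the external_data built-in shown is how Gatekeeper templates call out to a provider:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8svalidowner
spec:
  crd:
    spec:
      names:
        kind: K8sValidOwner
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8svalidowner

        violation[{"msg": msg}] {
          # grab the owner label from the incoming resource
          owner := input.review.object.metadata.labels.owner
          # send an external data request to the (hypothetical) provider
          response := external_data({"provider": "owner-validation-provider", "keys": [owner]})
          # flag a violation if the provider reported errors for the key
          count(response.errors) > 0
          msg := sprintf("owner validation failed: %v", [response.errors])
        }
```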
Now in the demo we're trying to create a pod, and as you can see from the command here, the owner label we've set is notnilekh.agilebank.demo. So although we are setting a label, notnilekh is not an actual user or employee of Agile Bank. When the policy executes, it grabs this owner value, notnilekh.agilebank.demo, sends the string to the external data provider, and the provider checks whether notnilekh is present in Active Directory; since it is not, it returns an error accordingly. And that's what we see here: the expected error that the user notnilekh was not found in Agile Bank's directory. This way we're validating against real, dynamic information.

Now let's run the same command with a proper owner label. This time the owner is nilekh.agilebank.demo, and we know nilekh is an employee of Agile Bank, so Active Directory has that information and the resource is allowed. And there you go. It just prints JSON because I'm running this constraint in dryrun mode, but what it says is that it validated that nilekh exists, so the resource is allowed.

There is another real-life example of an external data provider: the Ratify project, which focuses on container image verification. Ratify is a verification framework that makes sure your container image is properly signed, among other things. The real-world scenario is: you try to deploy a pod with some image x, y, z, and you also want to make sure that image is signed. The Ratify external data provider comes in handy there, and it is maintained by the community. It's actually one of several external data providers available, including one for Cosign, which is also community-maintained. All of this information is available on the Gatekeeper website; there's a section on external data that covers the various providers available today. There's also a template repository on GitHub that lets you create your own data provider just by creating a new repo from that template.

Coming back to the Ratify example: when you try to deploy the pod, Gatekeeper can call the Ratify external data provider, Ratify checks whether the image is signed, and it sends the result back to Gatekeeper saying this is allowed or this is not allowed. This gives us a really nice interface for interacting with dynamic systems, which helps us write far more meaningful policies than we could with static data alone.
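One more piece worth noting before we move on: whichever provider you use, the provider endpoint itself is registered with Gatekeeper through a Provider resource, roughly like the sketch below. The name and URL are placeholders, and the apiVersion and TLS requirements depend on your Gatekeeper release (newer releases require TLS between Gatekeeper and the provider):

```yaml
apiVersion: externaldata.gatekeeper.sh/v1beta1
kind: Provider
metadata:
  name: owner-validation-provider
spec:
  # endpoint Gatekeeper calls with the keys gathered in the template
  url: https://owner-validation-provider.provider-system:8090/validate
  # how long Gatekeeper waits for the provider to respond, in seconds
  timeout: 3
  # CA bundle used to verify the provider's TLS certificate (base64-encoded)
  caBundle: <base64-encoded-ca-bundle>
```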
So those are the external data capabilities we've seen so far. I'd now like to hand over to my colleague Jaydip to cover some of the other features. Jaydip?

Yep, thanks, Nilekh. Cool. So next in line is validation of workload resources. A workload resource is basically a resource that creates another resource, like a Deployment or a Job. Gatekeeper can now be configured to reject workload resources that would create a resource that violates a constraint. For example, we could configure Gatekeeper to immediately reject a Deployment that would create a pod violating a constraint, instead of only rejecting the pods. This feature can be enabled via the enable-generator-resource-expansion flag. To achieve this, Gatekeeper creates a mock resource for the pod, runs validations on it, and aggregates the mock resource's violations onto the parent resource. To use this functionality we need to create an expansion template that tells Gatekeeper what to mock and which resources to expand into mock resources. Any resource configured for expansion will be expanded by both the validating webhook and audit. Note that this feature only works if an expansion template exists on the cluster for the targeted resource. For example, the expansion template on the right tells Gatekeeper to expand Deployments and ReplicaSets into Pods.

There are cases where this doesn't apply, though: some policies, for example ones that depend on transient state, cannot be enforced accurately. By that I mean policies that rely on transient metadata such as the requesting user or the time of creation, because metadata of that kind won't be present until the actual resource is created. Those kinds of policies cannot be enforced accurately through expansion.

So I'm a developer at Agile Bank: why is this feature useful to me? We've talked about the owner-label policy, and suppose I'm creating Deployments whose pods will not have the owner label. Without an expansion template, Gatekeeper accepts those Deployments without any errors, but the admission webhook rejects the pods, and I'd need to dig through the Deployment and ReplicaSet to see what went wrong. With this feature enabled, Gatekeeper can mock the pod that my Deployment would create, see that it won't have the owner label, and reject the Deployment right away, so I don't need to debug the Deployment and ReplicaSet afterwards.

Now let's look at the demo. I'll walk through what's in the gatekeeper-system namespace: everything created with the Gatekeeper installation, the audit manager and the rest, and we can check that expansion is enabled. This is the constraint template I'm using, the same one we showed earlier, and this is the constraint that enforces it, basically saying that every pod created in the target namespace must have an owner label. Now let's apply those and look at the Deployment. This Deployment will create a pod that is missing the owner label, and so far we have not created any expansion template to use this feature.
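For reference, an expansion template along the lines of the one on the slide could look roughly like this. It tells Gatekeeper to take the pod template at spec.template inside Deployments and ReplicaSets and expand it into a mock Pod for validation (the exact apiVersion may differ depending on your Gatekeeper version, and the name is illustrative):

```yaml
apiVersion: expansion.gatekeeper.sh/v1alpha1
kind: ExpansionTemplate
metadata:
  name: expand-deployments-and-replicasets
spec:
  applyTo:
    - groups: ["apps"]
      kinds: ["Deployment", "ReplicaSet"]
      versions: ["v1"]
  # where the pod template lives inside the parent resource
  templateSource: "spec.template"
  # what the mock resource should be treated as
  generatedGVK:
    kind: "Pod"
    group: ""
    version: "v1"
```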
Now let's try to create this Deployment. It is accepted, even though it's in violation of the policy, so I need to check whether the pod came up. As we can see, there are no pods running for this Deployment. So I describe the Deployment to see what went wrong, and I see that the ReplicaSet was scaled up to the desired count of one. I then look into the ReplicaSet, and there I can see the message that the admission webhook denied the request because the required owner label was missing. So now I delete the Deployment and apply the expansion template, the same one we saw earlier, which tells Gatekeeper to mock pods created by Deployments and ReplicaSets. When I apply the same Deployment again, it tells me right away that my Deployment is not acceptable because the owner label is missing.

So now let's move on to the next feature: mutation. This feature allows Gatekeeper to modify Kubernetes resources at request time based on customizable mutation policies. Mutation policies are defined using specific CRDs called mutators, and there are four types of mutators available in Gatekeeper for different purposes. First is AssignMetadata, and we'll get to the example of it shown on the right-hand side. AssignMetadata is a mutator that modifies the metadata section of a resource. Metadata is a very sensitive piece of data, and certain mutations could have unintended consequences, such as changing the name or namespace, so this mutator is limited to modifying labels and annotations only. Even then, AssignMetadata cannot modify an existing label or annotation; it can only add new ones. The next one is the Assign mutator, which is useful for making changes outside the metadata section, such as setting the imagePullPolicy for all containers to Always. Next is the ModifySet mutator, which can add or remove entries from a list, such as a container's argument list. And the last one is AssignImage, which is specifically designed for changing components of image strings.

All of these mutators share essentially three sections in their specification. The first is the extent of changes, which defines which objects get modified; that's essentially the match section of the spec. Then there's the intent of the changes, defining where in the object the change should happen; that's the location part of the spec. And finally the conditions under which the mutation should be applied; that's the parameters part of the spec. Together those make up the mutator spec. On the right-hand side we can see that this AssignMetadata mutator will be applied to a pod that has an nginx label on it, and it will add an owner label with the value admin. The next slide shows what happens with that mutator in place: on the left-hand side is the pod before mutation, with no owner label, and once you apply the pod with the mutator in place, the owner label gets added.
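A sketch of an AssignMetadata mutator along the lines of the slide, adding an owner label with the value admin to pods that carry an nginx label (the exact label selector and the apiVersion are illustrative and may vary across Gatekeeper releases):

```yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: AssignMetadata
metadata:
  name: set-owner-label
spec:
  match:
    scope: Namespaced
    kinds:
      - apiGroups: ["*"]
        kinds: ["Pod"]
    # the "extent of changes": only pods with this label get mutated
    labelSelector:
      matchLabels:
        app: nginx
  # the "intent of changes": where in the object the mutation lands
  location: "metadata.labels.owner"
  # the value to assign; AssignMetadata can only add labels/annotations
  parameters:
    assign:
      value: "admin"
```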
Let's look at the demo for this one. Before we do, let's define the use case: why is this feature useful to me working at Agile Bank? I'm trying to deploy resources on the cluster, we have the same owner-label policy in place, and I'm using shared YAML files, so I'm really annoyed that I have to modify those files every time I create resources just to add the owner label. Now I can use mutation to create my own mutation policies that add the required owner label to the resources I create.

So in the demo, this is the Deployment I'm going to create, which will produce a pod missing the owner label, so it gets rejected. This is the same AssignMetadata mutator we saw earlier; I apply it and then try creating the same Deployment once more to see whether it gets accepted. And there we can see that the owner label was added by the mutation feature. Cool, let's move ahead.

The next one is the pub/sub model. This is a very recent change that allows consuming all audit violations using a pub/sub mechanism. Right now, Gatekeeper uses constraints to bubble up audit violations: you can find the violations on the status of a constraint and see which resources are in violation of your policies. But due to etcd limits on how large an object can grow, Gatekeeper caps reporting at a maximum of 500 violations per constraint on constraint statuses. With this feature, Gatekeeper can publish all audit violations over a channel as they are found. Since the messages are not stored on a Kubernetes object, there is no cap, and Gatekeeper can surface every violation using pub/sub. Consumers can then receive those violations by subscribing to the channel Gatekeeper publishes to. On the right-hand side there's an example of a config map that defines how to initiate the connection and with which provider. It says that audit pub/sub should use the Dapr provider, and the config section defines the information needed to open and maintain the connection used to publish the messages.
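To make the configuration side concrete, a config map along the lines of the one on the slide might look like this. The shape follows the Gatekeeper pub/sub docs for the Dapr provider, but the component name and namespace here are illustrative and need to match your own setup:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: audit
  namespace: gatekeeper-system
data:
  # which pub/sub provider driver Gatekeeper should use
  provider: "dapr"
  # provider-specific connection details; here, the Dapr pub/sub component to publish to
  config: |
    {
      "component": "pubsub"
    }
```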
On the next slide we have the Dapr example, because right now we have written a driver that works with Dapr and uses Dapr's pub/sub functionality to publish the messages. But the interface is extensible, so the solution can ultimately support any pub/sub tool, such as RabbitMQ or Kafka; you just need an appropriate driver for the tool. The high-level architecture stays the same for all of them: I have the Dapr runtime running in the cluster, and a Gatekeeper audit pod with the right configuration so that a Dapr sidecar is injected alongside it. Whenever audit finds a violation, the Dapr sidecar publishes that violation on behalf of audit to the channel. The subscriber app declares its intent to subscribe to that particular channel, and Dapr injects a sidecar into the subscriber app as well. Whenever there is a message on the channel, the sidecar container forwards it to the application, so the subscriber receives all the violations published by the audit pod.

So what is the format of these violations, and what information does the subscriber get? It's the same as the audit log violations: for every violation found in an audit run, the subscriber gets the audit ID, the details, the violation message, and the information needed to locate the resource that is in violation.

Why is this feature useful to me as a developer or maintainer at Agile Bank? We have this owner-label policy in place, and we generally get a high number of violations per constraint in our cluster, but with the current solution the most I can see on a constraint is 500. I want to see more than that, so I can fix the violations or raise concerns about the resources involved. So I enable the pub/sub feature in Gatekeeper, get all the violations on a channel instead of on a resource, and the subscriber notifies me as violations come in.

Now let's look at the demo of how this works. Let me share my screen again. I'm going to walk through the whole setup and what's in each of the related namespaces. For our purposes there are four namespaces of interest. First is dapr-system, where the Dapr runtime is running, with its five pods for Dapr's different functions. Then in the default namespace I have Redis running, which is the broker that acts as the queue for messages. Then we have the subscriber, subscribing to the channel where audit publishes messages; you can see the subscriber pod has two containers, the application container and the Dapr sidecar. And in the Gatekeeper namespace nothing changes except that Gatekeeper's audit pod is now running with two containers, one being the injected sidecar, plus a service created by Dapr for publishing the messages.

On the right-hand side I'm tailing the logs of the subscriber application, and right now there's nothing there, but once violations occur and are published, we'll see them received by the subscriber. On the left-hand side I'm creating the same policy and constraint we've been talking about, the owner-label one, and then we'll see audit run and the violations arrive at the subscriber in a moment. And yes, audit ran in the background, and on the right-hand side we can see we got the messages for the pods that were in violation. Cool, that was it for this feature. For the next one, I'll hand it back over to Nilekh.

Cool. So let's discuss multi-engine support, and with it a feature introduced as alpha in Kubernetes 1.26 called validating admission policy. This feature enables declarative, in-process validation of policies against admission requests.
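For context, a policy of that kind for our owner-label example might look roughly like the following, using the 1.26 alpha API. The names and the CEL expression are illustrative, and you would additionally need a ValidatingAdmissionPolicyBinding to put the policy into effect on the cluster:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-owner-label
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
  validations:
    # CEL expression evaluated in-process by the API server
    - expression: "has(object.metadata.labels) && 'owner' in object.metadata.labels"
      message: "all pods must have an owner label"
```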
The motivation for this part of the talk is to help you understand when to use which tool: when you'd want the validating admission policy and when you'd want Gatekeeper. First, validating admission policy is an in-tree, native policy mechanism, eliminating the extra network hop required by typical admission webhooks. That has the benefit of reducing request latency and enhancing reliability and availability. With no extra hop, you can also implement a fail-closed approach. This addresses a significant issue with admission webhooks, where the extra hop can impact requests and results in many webhooks being configured to fail open. Ensuring policy enforcement while maintaining cluster availability is crucial, and validating admission policy lets you fail closed without worrying about availability. The operational burden is also reduced, since there is no additional webhook to maintain. And the language it uses, CEL, is embedded in Kubernetes itself. So that's what validating admission policy gives you.

Now let's look at Gatekeeper, because you might wonder in which scenarios you would still need it. Well, Gatekeeper provides audit functionality that validating admission policy does not. We just saw Jaydip's demo where we added the pub/sub feature, which lets us consume all the violations easily; that functionality is currently missing from validating admission policy. In theory you could go to your API server and examine the logs, but with Gatekeeper audit you have all the violations conveniently accessible, and as I was saying, pub/sub makes that even more convenient. If you have integrations with audit, you can generate compliance reports for cluster operators.

Another aspect to consider is referential policies. What do I mean by that? Say you have a policy that needs to ensure the uniqueness of ingress hosts, for example. Achieving that requires examining the incoming request and comparing it against everything already present in the cluster. We typically call this kind of policy a referential policy, and it goes beyond what validating admission policy can accomplish today. Furthermore, when it comes to external data, as we saw earlier, there's a good chance you have a data source located outside the cluster. Validating admission policy focuses primarily on data within the cluster, so the inclusion of external data provides additional capability for scenarios that need information from external sources, which gives you greater flexibility for your specific needs. Additionally, Gatekeeper not only validates but also mutates, as we just saw in the demo. The same goes for the Gator CLI, which gives you shift-left validation, so you can exercise policies before they're ever applied to the cluster. Moreover, OPA offers remarkable expressive power, allowing you to define highly intricate rules that go beyond the limitations of validating admission policy.
With Rego, you can define kinds of expressions and conditions that may not be possible with validating admission policy today. Furthermore, numerous community policy libraries are readily available, offering a wide range of pre-existing policies that can be deployed in your cluster effortlessly; we have the whole Gatekeeper library you can browse and simply use, which eliminates the need to create policies from scratch. And with Gatekeeper's multi-engine support, you can leverage OPA or other engines, which lets you write policies in Rego or other languages.

So both tools have their purpose, but you might still wonder: how can you get the best of both worlds? Is there a way to accomplish that? Next slide, please. Yes, there is, and that's where the concept of multi-engine comes into play. We're currently working on this to enhance Gatekeeper. Today, Gatekeeper relies on the constraint framework with OPA, but the goal is to create an abstraction layer that simplifies the user experience: policy authors write policies in their preferred language, while the operators deploying those policies follow the same deployment process. In essence, multi-engine enables multi-language, multi-target policy enforcement. You can use languages like Rego or CEL, targeting different platforms such as Kubernetes admission or Terraform, or whatever other platform you come up with. This approach allows for portable policies, meaning the same policies can be used across different CI/CD pipelines or enforcement mechanisms. Gatekeeper already supports multiple engines, so you can leverage the strengths of the different engines available in the community.

We believe this approach benefits both Gatekeeper and OPA. Gatekeeper and OPA are significantly more mature than the in-tree validating admission policy, which is still in alpha, so the goal is to bridge that gap and provide a solution that combines the power of Gatekeeper and OPA with validating admission policy. By utilizing Gatekeeper and the Gator CLI, you not only get the audit capabilities, you can also perform shift-left validation for the new validating admission policies at no additional cost; it essentially comes free with the existing features in Gatekeeper. This integration allows comprehensive policy enforcement and validation throughout the development process. That's the vision we're working toward: making sure both worlds can coexist and provide a good end-user experience, whether you're developing policies or deploying them in your cluster for compliance or other purposes. That gives you an idea of what we're doing in terms of CEL and multi-engine support.

Before we wrap up, we have some other updates to share, and I'll let Jaydip go through those. Thanks. So we have a few other features to mention. First is namespace exemption by suffix: namespaces can now be exempted based on a suffix, via the exempt-namespace-suffix flag.
This is useful when namespaces follow a pattern like tenant-something and you would like to exempt those namespaces for all tenants. The next one is the OpenCensus and Stackdriver exporters: two new metrics exporters have been added to Gatekeeper alongside Prometheus, and there is also an extensible exporter interface in case anyone wants to add another exporter; it also makes it easier to maintain a fork that adds more exporters. The third one is emitting events in the involved object's namespace. With two flags, one each for admission and audit, emit-admission-events and emit-audit-events, admission and audit violations can now be emitted as Kubernetes events. These flags are in alpha and set to false by default. There are also additional flags that control whether emitted events land in the namespace of the object responsible for the violation or in Gatekeeper's namespace; for cluster-scoped resources they always go to Gatekeeper's namespace. And the last one is the ability to validate sub-resources: resources such as pods/log, pod eviction, replicaset scale, and node proxy can now also be validated with Gatekeeper. And that's it.

Cool, yeah, thanks, Jaydip. Let us know if you have any questions; we'll try to answer them. I'd also like to take this opportunity to thank all the Gatekeeper maintainers and contributors who helped implement these features. Feel free to drop into the Slack channel; we have a Gatekeeper channel on the Open Policy Agent Slack workspace, so come say hi. We have our community meetings every week on Wednesdays, so join us there, or feel free to open an issue on GitHub.

Awesome, thank you both very much. Everyone still with us? If you have any questions, go ahead and pop them into the chat now for Jaydip and Nilekh, and we'll see what we get; if we're done, then we'll just wrap up a little early. Okay, here we go: you mentioned OpenCensus, is OpenTelemetry also covered? — I am not sure about that. I don't think so, but I'd need to check, sorry. — They said they thought OpenCensus and OpenTracing merged to become OpenTelemetry. — Yeah, I think we can look into it and see what exactly is being implemented. — Perfect, thanks, Bridget. Anyone else have questions? One more minute just in case someone's typing. Okay, thank you; they said the demos were excellent. Have you both put your handles into the chat so that anyone who wants to follow up with you can find you? I know they were at the end of your slides, so if one of you can send me that final deck right after we hop off, I'll make sure it's attached when we post the recording. Otherwise, thank you so much, Jaydip and Nilekh, for a great presentation. Everyone join us next time for another live webinar with CNCF, and have a great weekend and rest of your week. Cool, thank you. Thank you both. See you, folks. Bye.