I'm really impressed to see a full room here on the late shift. I'm sure a lot of you have flights and things this afternoon. I did a talk at this time last year and it wasn't quite so well attended, so this is a bit of a surprise. So yeah, hello, I'm Charlie. I work at Styra on the developer relations team, and I work closely with the OPA project. And this is Rita.

Hey, everyone. My name is Rita. I'm an OPA Gatekeeper maintainer, and I'm also a chair for Kubernetes SIG Auth as well as the SRC, the Security Response Committee. I'm very glad to have you all here. Before we begin, I'd like to poll the room to see how many folks are using OPA today. Awesome. And how many people are using OPA Gatekeeper today? Awesome. How many people are using Conftest? Thank you. And how many people are using OPA outside of Kubernetes, for any other use case, microservice authorization, something like that? Not many. Okay.

So yeah, quickly, a little overview of the agenda: a brief introduction to the Open Policy Agent project and tool, a couple of project updates, and a little preview of some upcoming items on the roadmap. And then Rita's going to cover a similar set of topics for Gatekeeper.

So what is Open Policy Agent? It's a CNCF project, a graduated CNCF project. You've probably heard about it, seeing as you're here, but that's how it fits in. I like to think of the Open Policy Agent project and community as a collection of things. What brings it all together is the Rego language, a domain-specific policy language for writing any kind of policy requirement you might have in your stack, whether that's Kubernetes admission or microservice authorization or you name it. As long as it's a policy decision, it's relevant for the Rego language. You bring that in combination with a policy server, which can evaluate Rego and make policy decisions for you, in addition to reloading policy and logging policy decisions. Couple that with some language SDKs: we have a native SDK for Go, which is how some of our community integrations, for example Gatekeeper and Conftest, are built, and a number of other SDKs based on WebAssembly for different programming languages. You bring all of that together with some tooling and the community and you get the OPA project.

So this is how you might use OPA in a distributed system, where you're using OPA as an authorization server. You provide some information to OPA in order to make a decision; OPA loads in policy rules, and perhaps some extra data, and just returns a decision result. You can also use OPA within a single application via one of the SDKs, where perhaps one part of the application is calling another part, a module perhaps, which is using OPA within that application. Those two patterns are fundamentally how anybody who's using OPA is using it.

So yeah, like I mentioned, it can be an application that you've built, which you're calling via the REST API or using an SDK. It might be the Kubernetes API server calling your OPA or Gatekeeper instance, making an admission check or mutating some resources as they land in your cluster. It might be a CI/CD run making a check against some configuration that's about to be deployed, perhaps to a public cloud, or perhaps against some internal configuration file format instead. Or it might be another common integration we have, Envoy proxy and the external authorization integration there; we have a specific plugin related to that too.

So yeah, based on some of the responses we had at the beginning, most of you might be somewhat familiar with the language already, but this is a simple policy I wrote to briefly outline how Rego works. We have a policy package here with a single rule defined, called allow, and it will only allow in the case that the information provided to the policy at evaluation time says the user's role is admin. In all other cases, the default is evaluated and the result will be false.
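The policy on the slide looked something like this. This is a reconstruction from the description rather than the exact slide, and the package name is a placeholder:

```rego
package play

# Deny by default; only the rule below can flip the decision to true.
default allow := false

# Allow only when the supplied input says the user's role is admin.
allow {
	input.user.role == "admin"
}

# Against a running OPA server, the exchange looks roughly like:
#   POST /v1/data/play/allow
#   {"input": {"user": {"role": "admin"}}}
#   => {"result": true}
# With any other role, or no role at all, the default applies and the
# result is false.
```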
So here's how that might look in the server use case: you make a POST request to one of the endpoints on the OPA server's REST API with some data. The policy is evaluated as part of that request, and the response is, in this case, the Boolean value true. You can imagine that if the role had a different value, or the role were missing, that would be a different response.

So that's a little intro to OPA. Next, a quick overview of some things that have been going on in the community since the last KubeCon. As part of our documentation we have a list of integrations with the OPA project. OPA is a general-purpose policy engine, and we have high hopes for it being used in all sorts of different places. We've added 25 new integrations since the last KubeCon in Detroit. There are some fun graphs there showing how the number of contributors across our projects is increasing, and there are also more people using OPA as a library. I think this is really exciting, because it shows people trusting OPA, and being interested in using the language, in more places beyond the existing tools that we have or the server use case.

We also have six new public corporate adopters on the adopters page. If you're using OPA and you're not in the adopters file, please open a pull request and get your name on there too. You can add a little link to your website and how you're using OPA. We don't get a lot of signal back sometimes, so even little bits like the adopters file are really interesting to see.

We've also got loads of people using Slack. The QR code at the bottom of the screen will take you to sign up. We spend a lot of time there; Rita spends a lot of time there as well. We've got a gatekeeper channel for queries related to Gatekeeper, and some of my Styra colleagues are regularly answering questions. So if you're having any trouble, please do come by the Slack.

So yeah, we're releasing Open Policy Agent round about once a month, and we've had six releases since the last KubeCon. Obviously the 0.50 release is a big one for us; it feels like a milestone, so we're very happy about that. And going forward we're continuing to release round about once a month.

A brief public service announcement: we used to push rootless-tagged images. Quick show of hands, does anybody know if they're using the rootless flavor of the OPA images? One hand. So please bear in mind going forward that everything is rootless now. If you want to run as root, you should configure your container to run as root via different means. All of the images are now rootless.

So yeah, a few quick project updates. We've merged over 250 pull requests since the last KubeCon.
Most of the things we've added are related to the language, and I'm going to dig into two of those in detail. We've added functions for validating JSON objects against JSON schemas. We've got object.keys, which allows you to get the keys from a key-value object. We've added some ways of formatting time, the graphql.schema_is_valid function, and net.cidr_is_valid, so various validation things. I'm going to dig into the JSON schema verification briefly after this; I think that's one of the most exciting ones. We've also added what we call refs in rule heads, which doesn't make much sense as a name on its own, but I'll try to show why that's interesting as well.

In addition to language-related features, we have support for the AWS Signature Version 4 request-signing API. That's another new capability, which is useful if you're loading bundles over the bundle service API from S3 or something similar. We have a new way of defining which decisions are logged, based on policy. We've added some shorthands to run OPA with an example or a remote bundle, to make trying out example bundles easier. And there are various monitoring-related updates, around logging of unauthorized requests against the API, and various other performance improvements under the hood.

So yeah, I'd like to see who's paying attention. Can anybody point out the mistake presented here in this policy? We have a policy which is trying to block input where someone is using an example.com email address. Does anybody spot the problem here on the slide? I think you were the first one I saw. Exactly, yes, there's an S missing. I created this example specifically to show how the JSON schema validation works. The idea here is that I've got an example.com email address and I'm providing some information about a new contact, and I've written a policy which I'm hoping would stop this request from coming through. But if I click evaluate, you see that it's actually been allowed: allow is set to true whenever the deny set is empty. And the reason, as our friend here pointed out, is that I'm providing contact, while my policy is written expecting contacts.

So, here's one I prepared earlier. Now I'm going to add a new deny rule and make sure that the input I provide matches a schema. The schema describes the sort of information you might expect, but crucially it requires some top-level keys, including contacts. And now when I evaluate my policy, we see that allow is false, and I've also got a new error message back saying that contacts is required at the root of the object I was supplying. (I don't know how to quite get back from this, so I'm going to go back this way. I don't have any other demos, so I'll be safe, I think.) So that's how the JSON schema validation works, and I encourage you to go and check out those new built-in functions.
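Reconstructed in Rego, the fixed demo policy looks something like this. It's a sketch: the package name and the exact input shape are assumed from the description above.

```rego
package contacts

import future.keywords

# Schema requiring a top-level "contacts" key, so the misspelled
# "contact" input from the demo is rejected instead of silently allowed.
schema := {
	"type": "object",
	"required": ["contacts"],
}

default allow := false

allow if count(deny) == 0

# json.match_schema returns a [match, errors] pair; surface each schema
# error as its own deny message.
deny contains msg if {
	[match, errors] := json.match_schema(input, schema)
	not match
	some err in errors
	msg := sprintf("schema violation: %v", [err])
}

# The original rule: block example.com email addresses.
deny contains msg if {
	some contact in input.contacts
	endswith(contact.email, "@example.com")
	msg := sprintf("example.com address not allowed: %v", [contact.email])
}
```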
Next up: refs in rule heads. It's not a particularly catchy name; it's the name of the issue, and it's the name we've gone with when talking about it. Often you might want to define policy in a way where, for example, the input method might be GET or POST against a particular resource, and you use the information from the context of the request to dig down into different policies. This is a good example to highlight it, I think. In days gone by, you would have to define packages in separate files: rules.get.pets in one file to define the GET policy, rules.delete.pets in a separate file to define the DELETE policy. (I think there's actually an error in the slide examples; I'm not sure the curly braces on the first lines are meant to be there.) It's now possible to provide all of that policy in one file, with dotted rule names, and that's why it's called refs in rule heads, or dots in rule heads. You can see how that fits together.
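As a sketch of the new style, with hypothetical rule bodies in the spirit of the slide's pets example:

```rego
package rules

import future.keywords

# Before this feature, get.pets and delete.pets would each have needed
# their own package (rules.get, rules.delete) in separate files. With
# refs in rule heads, one file defines the whole tree, and the results
# still show up at data.rules.get.pets and data.rules.delete.pets.

get.pets if {
	input.method == "GET"
	input.path == ["pets"]
}

delete.pets if {
	input.method == "DELETE"
	input.path == ["pets"]
	"admin" in input.user.roles
}
```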
Just briefly, let me dip into some of the items on the roadmap. One that I think is fun is the ellipsis operator. The idea here is that instead of matching on specific keys or indexes within a list or an object, we'd be able to say "does it match in this way", and then "I don't care about the rest of the list". So that's one thing on the roadmap, coming quite soon.

I've talked a bit about schema validation already, but something we're planning to do is to allow you to validate the input that's provided without needing to use the built-in functions: to annotate a given rule and say the input must conform to this schema for this rule. What's exciting about that is that in my example previously, you saw I had to add a new rule which validated the input against a schema. With this change, it would be possible to just annotate rules and say this rule requires this schema, and it saves you from bringing in all those extra rules. So that's upcoming.

And who here is using opa test, who's actually testing their policies? There are a few people who test their policies; that's a relief. Anybody who has used it may be familiar with the slightly challenging output you get from running opa test. The idea here is that we're going to provide a more user-friendly way of presenting the output from tests, showing exactly what went wrong.

Okay, so that's all I had on the core OPA part. Now I'm going to hand over to Rita, who's going to talk about how you use this in Kubernetes, how the Gatekeeper project works, and what the latest news is. We are at KubeCon, after all.

Indeed, indeed. All right, so for those of you who are already familiar with OPA, you're probably using it in your organization, and your users or your customers may be asking: well, how do I use OPA with my Kubernetes clusters? How do I make sure the workloads that are deployed to my Kubernetes clusters are compliant with governance and company policies? That's where the Gatekeeper project comes in. It is a customizable Kubernetes admission webhook; it actually embeds the OPA engine as part of the webhook, and it's used to enforce policies and enhance governance in your organization. And as Charlie just mentioned, sometimes you might be using OPA Gatekeeper under the hood without even knowing it: Google Anthos has Gatekeeper embedded in its policy engine, and Microsoft Azure has it as part of the Azure Policy feature.

So what does that look like, and why did we even create this project in the first place? For those of you who have actually built your own admission webhook, you probably know how hard that is: any time you need to make a change, you need to recompile and redeploy to the cluster. The whole concept behind Gatekeeper is policy as configuration. Essentially you have different personas in the company: the people who write the policies may need to know Rego, but the people who are operating the clusters and enforcing the policies may not even care about the actual policy internals; they just need to make sure the cluster and the workloads are secure. That's the gist of the Gatekeeper motivation: control what the end users can do on the cluster, while the users don't need to write a single line of Go code.

And again, many large companies want to make sure their clusters conform to some company policy, and many companies, including Microsoft, face this question: how do you actually push out policies in the organization without impacting your production workloads? Say you're trying to introduce and enforce some policy in your company. The way you want to do it is to roll it out slowly: introduce these policies across your clusters in different phases, so you can audit first and make sure people are addressing the issues before you start enforcing them, i.e. not break your workloads, so your on-call doesn't get the call in the middle of the night.

So how do I actually do this without sacrificing developer agility? Again, policy is code, and Gatekeeper is a validating admission webhook as well as a mutating one, so you can use Gatekeeper to enforce these policies, and also to mutate, so you can ensure that when things are deployed to the cluster, they comply with company policy, whether that's updating the deployment YAML or adding some labels or annotations.

It also comes with audit. There are times when maybe you don't want to strictly deny deployments on the cluster; instead you might want to know, say, how many workloads are running containers pulled from Docker Hub as opposed to my company's registry. How do I actually get a compliance report of what is running in the clusters? Gatekeeper provides that audit capability.

And what about catching problems before deployment time and runtime? How do you make sure your developers are doing the right things at the source? In the Gatekeeper project we also have the Gator CLI. The idea is that you use the exact same policies that you're enforcing in your production clusters, but you put them in your CI/CD so you can break the build. You catch the problem at the source, and your developers end up deploying the right things to their clusters.
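To make the policy-as-configuration split concrete before moving on: the policy author ships a ConstraintTemplate that contains the Rego, and the cluster operator enforces it by creating a constraint with parameters, no Rego in sight. This closely follows the required-labels example from the Gatekeeper docs:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
---
# The operator's half: enforce the template on namespaces, requiring an
# "owner" label, without touching any Rego.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]
```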
And last but not least, we also now have the external data feature. We heard from a lot of customers and users who were saying: I love enforcing things in Kubernetes with Kubernetes resources, but what if my policies need to reach things outside of the cluster? Say you want to check whether your container images are signed, or, if you're using owner labels, you might want to validate them against the user's LDAP properties. How would you actually pull that external data from outside the cluster? External data allows you to do that.

Sorry, one more last-but-not-least: we also have the community policy library. As I mentioned before, a lot of companies already run this as a managed feature as part of their offering, so a lot of companies have already written a bunch of really great Rego, and they've donated it to the community policy library. So instead of you having to figure out how to write those policies, chances are they already exist in this library. Just to give you an example, here's a policy from the community policy library that someone contributed based on a CVE: it mitigates a man-in-the-middle attack by ensuring load balancers are not using external IPs.

And this is the project update. Since the last KubeCon, we've had two releases, so the current release is v3.12. What are the updates we introduced? Well, external data is now beta, and TLS or mTLS is now required for communication between Gatekeeper and the external data providers. The Gator CLI is now beta, and it provides a lot of capabilities that, again, you can add to your CI/CD pipeline. There's a new AssignImage mutator, which I'll go into in a moment. And we've added multi-engine support. How many of you have already heard of the Kubernetes ValidatingAdmissionPolicy? And are you wondering how it's going to work with Gatekeeper? We'll be talking about that; multi-engine support was added so that we can integrate the two. Also, for violation events, the feature itself was added a long time ago, but as of 3.12 the events are generated in the namespace that the violating object is in. You can now also exempt namespaces with a suffix, in addition to a prefix. Expanded resources now have generated mock names, so when you see a violation, you can actually see the names of the generated resources. And if you have annotations on your policies, you can now see them in the logs.

All right, so I talked about AssignImage earlier. Here's an example of how you can mutate a resource that's being deployed to your Kubernetes cluster. I don't know how many of you had to deal with the recent switch of the Kubernetes images from the old registry to the new registry. But let's say in your company you have people who are pulling from Docker Hub, you have a mirror in the company, and you want to make sure that when people deploy their container images, they're actually pulling from your registry and not Docker Hub. Here's one way to do it: you deploy this to your cluster and you say, hey, for all Pods, for all the images, I want you to replace the registry with my company's registry. By doing this, once again, you're ensuring that people are not just pulling from random Docker Hub repos.
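A sketch of what that mutator can look like. AssignImage is new in v3.12, so treat the exact field names here as assumptions to verify against the Gatekeeper docs; the registry value is a placeholder:

```yaml
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: AssignImage
metadata:
  name: mirror-docker-hub-images
spec:
  applyTo:
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  # Rewrite the image field of every container in incoming Pods.
  location: "spec.containers[name:*].image"
  parameters:
    # Re-point the registry portion of the image at the company mirror,
    # leaving the rest of the image reference as-is.
    assignDomain: "registry.example.com"
```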
All right, so let's talk about multi-engine support. In Kubernetes 1.26, an alpha feature called ValidatingAdmissionPolicy was introduced. It is basically a declarative, in-process way of validating your admission requests. So you might say, well, that sounds a lot like Gatekeeper, right? And it does. So the motivation for this part is to help us understand when to use what, and what the differentiation is between the two.

So first off, ValidatingAdmissionPolicy is an in-tree, Kubernetes-native feature, so it doesn't require the extra hop that a typical admission webhook requires. The benefit of that is reduced request latency, and because it doesn't require the extra hop, you get more reliability and availability, and because of that, you can actually fail closed. One of the key issues with any admission webhook is that you're adding an extra hop, and if that hop is not available, you're impacting the request, and therefore a lot of webhooks actually fail open. For policies, this is a big problem: you want to make sure you're enforcing the policies, but at the same time you don't want the cluster to have availability issues. With this, you can now fail closed without worrying about availability. And again, the operational burden is reduced, because you don't have to worry about maintaining another webhook. The language that's embedded is CEL, the Common Expression Language, which I think Google started.
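For reference, a minimal 1.26-era policy and its binding might look roughly like this. The owner-label rule mirrors the example that comes up again below; all names here are placeholders:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-owner-label
spec:
  # In-process evaluation makes failing closed practical.
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "has(object.metadata.labels) && 'owner' in object.metadata.labels"
      message: "all deployments must carry an owner label"
---
# The policy only takes effect once a binding selects it; matchResources
# could scope it to particular namespaces.
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-owner-label-binding
spec:
  policyName: require-owner-label
```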
Now let's look at Gatekeeper. You might ask: wow, this all sounds really great, so in what scenarios do I actually need Gatekeeper? Well, Gatekeeper provides the audit functionality, which ValidatingAdmissionPolicy does not. In theory you could go to your API server and look at the logs, but Gatekeeper audit collects all the violations for you, and if you have integrations with audit, chances are you can create audit and compliance reports for the cluster operator. Another one is referential policies. What do I mean by referential policies? Well, let's say you have a policy that needs to check for Ingress hostname uniqueness. Uniqueness requires you to look at the incoming request and compare it against everything that already exists in the cluster. That's what I mean by a referential policy, and that is something ValidatingAdmissionPolicy cannot do. Again, external data: chances are you have some data source outside the cluster, while ValidatingAdmissionPolicy is very much limited to what's in the cluster, so this gives you extra capability if you need it. Mutation: Gatekeeper helps you mutate in addition to validating. And shift left: with Gatekeeper, you can use the same policies in your CI/CD with the Gator CLI. Also, because OPA is very, very powerful, you're able to write very complex rules that CEL simply cannot express. And once again, there's a community policy library already out there, so chances are you could literally just grab a policy and deploy it in your cluster; you don't need to write a new one. And because we added multi-engine support, the idea is that Gatekeeper supports OPA and more, which means you get to write your policy in Rego, in CEL, in whatever language your users are comfortable with.

Now you may ask: that's cool, they each do different things, but how do I make sure I get the best of both worlds? Is there even a way? Yes, and that's the multi-engine support concept that we're working on. Today, Gatekeeper depends on a framework called the constraint framework, which is part of the open-policy-agent org. The idea is that we create an abstraction layer that simplifies the user experience: the users write their policies in whatever language they're familiar with, but the operators deploying these policies deploy them all in the same manner. So the concept is multi-language, multi-target policy enforcement: Rego and CEL together, targeting Kubernetes admission, Terraform, or whatever target you need. Portable policies: the same policy can be used in CI/CD or in whatever enforcement mechanism you have. And enabling multiple engines, so that you can use the best parts of the different engines that are out in the community.

Here's how we believe this can help. Gatekeeper and OPA are far more mature than the in-tree ValidatingAdmissionPolicy, which is still alpha. The idea is that we bridge the gap, so that together with Gatekeeper and the Gator CLI, you also get audit and shift-left validation for the new ValidatingAdmissionPolicy, essentially for free. So as an example, we're adding CEL-based ValidatingAdmissionPolicy support to the constraint framework, and someone in the community has also asked for Starlark support.

So what does that even look like? Just to give you a graphical understanding: let's say an admission request comes in. It goes to the API server, and the API server runs it through all the in-process admission controllers in addition to all the custom webhooks. On the in-process side, it looks at the resources in the cluster: you've got the ValidatingAdmissionPolicy and its binding, and based on the binding, the policies run. An example would be: require an owner label on everything. Similarly, you have the admission webhooks like Gatekeeper: the query goes to OPA, OPA evaluates all the constraints and constraint templates in the cluster, and returns the decision back to the API server. An example of that would be: unique Ingress hostnames.

So the future that we're envisioning is that Gatekeeper could be the front end for all Kubernetes policies, where everything is defined as constraints and constraint templates. When the Gatekeeper controller sees that a template's engine is Kubernetes-native validation, it translates that into ValidatingAdmissionPolicy and binding resources in the cluster. And when we talk about audit, Gatekeeper will continue to be the audit engine: depending on which engine the constraint template is using, if it's OPA Rego, it calls the OPA driver, and if it's Kubernetes-native validation, it calls the Kubernetes-native validation driver. Same front end, talking to different engines within Gatekeeper, and it's all seamless to the user. The users don't care; all they care about is the language itself.

So yeah, what's next? We want to adopt Kubernetes-native validation policies with minimal change and bring them to audit and shift left; update the Gatekeeper library to also support Kubernetes-native validation policies; and support these new policies for older Kubernetes versions. What if you've got customers who want to use this before 1.26? How do they even do it? Well, with Gatekeeper, now you can.
We also want Gator support for Kubernetes-native validation rules, so you can use them in your CI/CD as well. And of course, more engines: tomorrow there's probably going to be some other engine, and as a company, you probably don't want to have to deal with each one in a different manner every single time.

So that's pretty much all the updates, and I just want to say thank you to all the contributors for everything you've helped with over the last few releases. Yeah, thank you. Thank you.