So hello everybody. Thanks for coming. The conference is getting towards the end now, so I appreciate it, and it's good to see a good turnout. My name is Charlie. I work on the developer relations team at Styra and on the OPA project. I split my time between working as an open source maintainer and doing things like this, where I come to events and meet all you lovely people. Let's start, and I'll let Sertac introduce himself. Hello everyone. My name is Sertac. I work at Microsoft. I'm an engineering manager there, and I'm an OPA maintainer. So the agenda for today's session: I'm going to give a short overview of the OPA project and then go into a little bit more detail about some recent updates that have happened in the project since maybe you last watched a talk like this. Then Sertac is going to give an update about what's happening on the Gatekeeper side for Kubernetes admission. If you're not familiar with the OPA project, you may have seen the logo before in the CNCF landscape. OPA is a graduated CNCF project. The idea is that OPA is a domain-agnostic, general-purpose policy engine which you can use for a variety of different policy use cases. The intention is that you can decouple policy evaluation from where you may be enforcing that policy. For example, you might have applications enforcing what users can do when making changes to a particular state, and so on. You may have Kubernetes admins making changes in different clusters, different environments, different namespaces. You also may have automated processes updating clusters or making changes to infrastructure-as-code stacks. Any of these different places, in addition to controlling the messages between services in your infrastructure, are places where you may want to enforce policy, and OPA is a general-purpose policy engine to run policy in all of those different places. So in a little bit more detail, technically, this is how I like to think of OPA.
OPA is a combination of a few different components. Crucially, it's a domain-specific policy language called Rego. This is the language of OPA: if you're writing policies for OPA, you're writing them in Rego, and I'll give a brief overview of a Rego policy in a second. OPA is also a policy server, so you can run a long-running policy server, perhaps next to your application or in a number of different architectures, in order to perform policy evaluation. It also has functionality to reload policy and log policy decisions as they are happening, and that's the most common way that people use OPA today. We also have language SDKs, so you can build OPA into Go applications natively, and we have WebAssembly modules, so it's possible to compile Rego to WebAssembly and run OPA within applications that way too. You bring that together with projects like Gatekeeper for Kubernetes admission and Conftest for running policy over configuration files, and you bring that all together with the community, and you get the OPA project. So this is just a little example of a policy that I'm showing in the playground. The idea is that you can create rules based on the input or based on the data which is provided to the policy. The decision is made by OPA and returned as the output. Here we've got an example where we're allowing an admin but not allowing anybody else, defaulting to false. So in a distributed system, this is how a typical architecture might look. You have some business application which is making a call. It may provide information about a request that it's just received to OPA. OPA then receives that request and performs a policy evaluation. It will do this using the policy which it has loaded at that time, in addition to any additional data that's been loaded in.
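The playground example described here, allowing an admin but nobody else and defaulting to false, could be sketched in Rego roughly like this (the package and field names are my own illustration, not the exact slide content):

```rego
package app.authz

# Deny by default: if no rule sets allow to true, the decision is false.
default allow = false

# Allow only when the caller's role is admin.
allow {
    input.user.role == "admin"
}
```

Evaluating this with an input like `{"user": {"role": "admin"}}` makes the rule body succeed, so `allow` is true; any other input falls through to the default of false.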
For example, it might commonly be some information that describes what users and their roles are allowed to do. So bringing that together, the policy rules, the data, and the information about the request, OPA makes a decision and returns that to the business application to enforce. OPA is the policy decision point and the application is the policy enforcement point. And this is a very similar example of how you could use OPA within an application to have much the same effect, where a particular part, perhaps some module that's responsible for serving web requests, makes a function call to an authorization module in the same program, which is based on OPA. That module can likewise use any additional data and policy that's been loaded into it. So what are some common policy enforcement points? We've been talking about OPA, the policy decision point. Your application is a very common policy enforcement point: you're receiving requests and you're trying to work out whether or not a request is allowed, or what message should be displayed if it isn't, and so on. The Kubernetes API server is another. Sertac is going to talk about the Gatekeeper project, which is a native OPA integration for Kubernetes admission control. We are at KubeCon, after all, so this is another important policy enforcement point for a lot of people here, I'm sure. You've also got Conftest. This is a common tool that people use in CI/CD pipelines to run policy against changes to Terraform stacks or other standard configuration files. And the Envoy proxy: we have a native integration for that too, if you're using it. Just a quick update on some community milestones and stats since last year at KubeCon North America. We've had contributions from 26 contributors from 26 new companies. We've had over 1300 people join the OPA Slack.
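A hedged sketch of that pattern: role-to-permission data loaded alongside the policy, with the request details arriving as input. The data layout and names here are illustrative, not from the talk.

```rego
package app.rbac

# Illustrative data document, normally loaded into OPA separately, e.g.:
# data.role_permissions = {
#     "admin":  ["read", "write", "delete"],
#     "viewer": ["read"]
# }

default allow = false

# Allow when any role held by the user grants the requested action.
allow {
    role := input.user.roles[_]
    data.role_permissions[role][_] == input.action
}
```

With input `{"user": {"roles": ["viewer"]}, "action": "read"}` and the data above, the rule succeeds; a "write" action from a viewer would fall through to the default deny.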
The OPA Slack is our main channel of community support and discussion. The QR code at the bottom of the screen takes you to the sign-up page if you're interested. On average, each week, two new repos are shared on GitHub which contain Rego source code. And in the OPA core project at least, we've merged over 570 PRs. That's not including the pull requests merged in other projects in the OPA community. So I'm going to dig into a little bit more detail now about some recent changes to OPA, the language, and the functionality of the OPA server. But first, just a public service announcement: we're no longer going to be publishing the rootless flavor of OPA images after the next release. The current release is 0.58, and this is the last release that will feature this flavor. If you are in particular using latest-rootless, you should switch to a versioned OPA image, but crucially, you will no longer be getting updates on that channel. All of the OPA images now run as a non-root user, so you should feel confident using one of the standard flavors instead. We release OPA once a month, at the end of the month. We last gave a maintainer update in Amsterdam; that was when we were on 0.52. We're now on 0.58, so I'm going to give a summary of the updates since that point, including one update that happened before, which the North American audience may have missed out on. Starting with some updates to the Rego language, we have two new features I'm going to talk about today. We've got support for what we call general references in rule heads, which is perhaps a bit of a mouthful, but I'll try and explain why it's useful in a moment. We also now support the default keyword for functions. I don't know if you remember, in the example I showed earlier we had default allow false as the rule; we now support that same keyword for functions too.
I was going to click the link and show it in the playground, but we've got a bit of an issue with the screens, so I'm going to try and explain it instead. The idea here is that a general reference is a rule head reference which has multiple variables contained within it. So in this rule, and I don't think you can see my cursor unfortunately, the users_by_role rule has role and ID as variables, things which are unknown prior to evaluation. What this allows you to do, in this example, is restructure what is a list of users into a nested structure of objects, grouping users by their role and ID. This is something that was previously quite difficult to write in Rego, so this is a cool new feature. We've also, as I mentioned, got the default keyword for functions. I was going to show a little example, but hopefully you can see how it would have worked. The idea is that we can define default outputs for functions as well. So by default, someone's nearest KubeCon is KubeCon North America, but if the user's city is Paris, then they get shown KubeCon EU as their nearest KubeCon. If their time zone offset is within a particular range, they get shown North America. And if they are on a different planet or celestial body, then by default we show them KubeCon North America. We've also made some updates to Rego's built-in functions. We have a wide range of built-in functions to make writing policies easier. The functions are focused on things that people tend to be writing policy about, so a lot of security-related functions. The http.send function is a function which you can use to call external systems from a Rego policy. We usually recommend that people try to bring the data to OPA to make the policy evaluation faster, but if you must call an external system as part of your policy evaluation, we've made a change to this function which now allows you to retry with exponential backoff.
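A rough sketch of the two language features together, with made-up field names rather than the exact slide content:

```rego
package example

# General references in rule heads: both role and id are variables,
# so this builds a nested object grouping users by role, then by id.
users_by_role[role][id] := user {
    user := input.users[_]
    role := user.role
    id := user.id
}

# Default functions: a fallback result when no other definition matches.
default nearest_kubecon(_) = "KubeCon NA"

nearest_kubecon(user) = "KubeCon EU" {
    user.city == "Paris"
}
```

Given a flat list of users in the input, `users_by_role` produces something like `{"admin": {"alice": {...}}, "viewer": {"bob": {...}}}`, and `nearest_kubecon` returns its default for any user the more specific definition doesn't match.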
The numbers.range_step function is useful for generating ranges of numbers with particular increments. This is sometimes useful for iterating over a collection of different cases or iterating through different data structures. I won't go into the details on all of them, but we also have crypto.hmac.equal, which is for securely comparing HMACs, and json.verify_schema and json.match_schema. Those two are older than the last six months, but I thought I would remind the North American audience of them while we're here; they are useful for doing JSON Schema validation at runtime. So again, this was going to be a little demo, but hopefully you can spot it. I don't know if anybody can spot the mistake in this policy. On the right is the input that's been provided to the policy, and on the left is the policy that's defined at the moment. Maybe we find that allow is true when we were expecting it to be false. We're providing an example email. I don't know if anybody can spot the mistake in the policy. I've got a hand over here. Yeah, that's exactly right. It's some data that looks mostly correct, but because the data that's been provided isn't in the expected format, in this case we would allow a request through that we didn't expect to. I won't click the link, because I can't see the screen to show you properly, but the json.match_schema function is a good way to make sure that the data you're operating on in one of your rules is actually as you expect. A few other features we've added to the OPA server: we've added support for OpenTelemetry trace and span IDs in decision logs. Decision logs are what we call, I suppose, our audit logging of policy evaluations. As policy evaluations are happening, you can collect decision logs and send them to an external service.
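A hedged sketch of guarding a rule with schema validation, along the lines of the demo. The schema, package, and field names are my own illustration, and my understanding is that json.match_schema returns a pair of a boolean match and a list of issues; check the built-in function reference before relying on the exact shape.

```rego
package example

# Illustrative JSON Schema: the input must be an object with a string email.
request_schema := {
    "type": "object",
    "properties": {"email": {"type": "string"}},
    "required": ["email"],
}

default allow = false

allow {
    # Only proceed when the input actually matches the expected shape.
    [match, _] := json.match_schema(input, request_schema)
    match
    endswith(input.email, "@example.com")
}
```

Without the schema check, an input where `email` is, say, an array or missing entirely could slip past a naively written rule; with it, malformed data simply fails the rule and the default deny applies.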
We support OpenTelemetry trace and span IDs, if that's something that you're using. We also now support setting a decision ID in the OPA Go SDK, which is the recommended way to use OPA within a Go application. And it's also possible to drop certain decision logs based on a Rego policy, as you may have expected. opa test now supports watching for changes in files, so if you're editing files and editing tests, opa test can automatically rerun your tests as you're working on them. This is great for the developer workflow. We also support the schema flag, which is something that's been available in opa run for some time, but it's now in opa test as well. We've improved the output from the profiler to show generated expressions. Sometimes rules can generate more expressions, or will be working under the hood in ways that people don't expect, so this is useful for debugging in those cases. And we have a new authentication method for the OCI bundle downloader too. Another shout-out to the rootless images: this is the end. We've been talking about this for quite some time. This is the end of the rootless flavor of the images, so say goodbye and please update. So, getting to the end of my section now. If you're using OPA, we'd really love to hear about it. We hear from users all the time, but it's great to showcase the users and how they're using OPA. OPA is a general-purpose tool, and it's really exciting to hear about all the different use cases that people are applying OPA to. We have this file in the main OPA repo called ADOPTERS.md. If you're using OPA, please feel free to open a pull request, and myself or one of the other maintainers will review it and get it merged in. That would be really great.
If you're building on OPA, if you have a product or open source project which is based on OPA, uses OPA in some way, or features some OPA integration: we've been doing a lot of work recently on what we call the OPA ecosystem page on the website, where we're trying to group different integrations based on OPA together so that we can showcase all of the different use cases where people have applied OPA. Go check that out; it's been updated in the last couple of months. If you're using OPA within one of your products, or even if it's a side project, please do get in touch. We'd love to hear all about it. Finally, at Styra, where I work, we've been working on a linter for Rego called Regal. I work in developer relations, and it's important to us that new users in particular have a good experience when they get started with OPA. This is one way that we are trying quite hard to help users before they even ask for help. In the playground, we now feature the linter output, so that's the easiest way to try out the linter. But if you have Rego files in some of your repos, do run Regal there as well. We've got a GitHub Action integration out of the box, but it's a relatively simple program to install and use too. So, one final update: over the next few months we're hoping to share a document about OPA v1, and the changes that will be coming in OPA v1, with the community for feedback. The focus of the document will be around changes related to adopting modern Rego and what that will mean for existing policies and existing policy authors. Please stay tuned to the different OPA channels if this is something that's important to you. I was hoping to share it today, but it should be coming quite soon for feedback. We are targeting a v1 of OPA in 2024. We're at KubeCon, after all, so let's talk about OPA in the context of Kubernetes admission control, and I'm going to hand over to Sertac to cover that section. Thank you.
So for those that are not familiar, Gatekeeper is a customizable Kubernetes admission webhook that helps enforce policies and strengthen governance. Here I'm going to talk briefly about what Gatekeeper does, and then we're going to talk about the major updates since the last KubeCon and the 3.13.x releases. You might already be using Gatekeeper without knowing it: we have official integrations with managed services like Google Kubernetes Engine and Azure Policy for Kubernetes, and we have other integrations with some other companies. If you have a managed service or an integration that you would like to feature on our website, you're welcome to open a PR and add your service or integration there. So let's talk about Gatekeeper's motivations. If your organization has been operating Kubernetes, you have probably been looking for ways to control what end users can do on the cluster. These policies may be there for governance or legal requirements, or for best practices or organizational conventions. Often the need for a policy comes into play after a cluster or a workload is already in production, and it is very dangerous to introduce something into a production environment that can bring your workloads down, or bring your entire service down. So how do we help ensure conformance without sacrificing agility and autonomy? Gatekeeper solves this in a few ways. First is policy as code: in Gatekeeper, policies are defined in Rego, or CEL in the future; we'll talk about that a bit later. Gatekeeper provides a validating admission webhook to make sure that Kubernetes admission requests are denied, or warned on. Gatekeeper also provides a mutating admission webhook, so you can set defaults for security reasons, for organizational conventions, or however you want. And then there's previewing the effects of a policy change.
Gatekeeper provides an audit functionality, so you can preview the effects of your policies before rolling them out in deny mode; you can see what would happen if you were to roll these policies out, and Gatekeeper does this recurringly in the background. Gatekeeper also provides a CLI for shift-left validation, and the CLI is what we call Gator. With it, you can validate your policies in your CI, even without a Kubernetes API server. Gatekeeper provides an external data functionality, which is used for communicating with external services. An example would be a container registry; you can think of something like container image signing, where you verify that every image is signed. Last but not least, Gatekeeper provides a community policy library where you can get started right away with policies contributed by the community. These are things like pod security policies and many more. For the community policy library, here are our URLs: you can find it either on our website or on Artifact Hub. And this is an example of a policy from the library, a mitigation for a CVE. You should be able to get started right away using the policy library, and if you are interested in modifying it in any way or contributing, you're welcome to do so. Let's switch gears to project updates. Since the last KubeCon we've had two releases, 3.13 and 3.14. Some of the notable updates: we improved our multi-engine support with an experimental Validating Admission Policy driver using the Common Expression Language, CEL. We added pub/sub support for audit, which eliminates the etcd size limitation for larger numbers of violations; you would hit this limit because you can only store so much in a Kubernetes object. And the expansion template feature, which validates resources generated by workload resources such as StatefulSets, has graduated.
We also added support for external data provider caching for audit and the validating webhook, and observability statistics for admission and audit in the Gator CLI are now available. Last but not least, we added support for OPA 0.57.1 in our last release. I mentioned multi-engine support as part of the notable updates, so let's dig a little deeper into this one. As of Kubernetes 1.28, Validating Admission Policy, which is based on CEL, is a beta feature of Kubernetes. It is a declarative, in-process alternative to validating admission webhooks. So, Gatekeeper versus Validating Admission Policy: when do we use what? Validating Admission Policy, or VAP for short, is in-tree and in-process; it's native to Kubernetes. It reduces admission request latency because it removes the hop to the webhook. It improves reliability and availability for the same reason: because it is in-tree, it is able to fail closed without impacting availability. Coming back to removing the hop: it also reduces the operational burden of webhooks, because you don't have an extra webhook service to worry about. And it uses the Common Expression Language, CEL. Now let's talk about Gatekeeper. Gatekeeper provides the audit functionality we just talked about, which previews the effects of policies. Gatekeeper provides the ability to use referential policies; you can think of that as checking for uniqueness, for example unique ingress names, against objects that already exist in the cluster that you want your policies to reference. We talked about the external data functionality for calling external data sources such as container registries, plus mutation, and shift-left validation with our CLI, so you can validate your policies early. And Gatekeeper can express complex rules in Rego that CEL may not be able to handle today. We also just talked about the community policy library.
Since its beginning, Gatekeeper has been designed with multi-engine and multi-language support in mind, so it can support OPA and more, or Rego and more. We just talked about these different options, but is there a way to get the best of both worlds? As I mentioned, Gatekeeper has been designed with multi-engine support in mind from the start. This is why we created the constraint framework project. The constraint framework is multi-language and multi-target: an enforcement language could be Rego, CEL, or others in the future, and targets could be Kubernetes admission, Terraform, or others. This is what provides the core ConstraintTemplate and Constraint functionality for Gatekeeper today. Recently we added CEL and Validating Admission Policy support to the constraint framework, and we're continuing to improve on this. Together with Gatekeeper and the Gator CLI, users can get audit and shift-left validation for Validating Admission Policy for free. This is available behind an experimental flag in Gatekeeper 3.14 today, and any feedback is welcome. We've talked about all this, so let's look at it graphically and see how it might work. If an admission request comes into the Kubernetes API server, it can go to the Validating Admission Policy admission controller, and then, depending on the binding and the policy, it can go on to validating admission webhooks such as Gatekeeper. Inside Gatekeeper, Gatekeeper will query OPA, and based on the Constraint and the ConstraintTemplate, the policy will be executed. For the future, we're thinking of Gatekeeper as a frontend for Kubernetes policies: you still have the ConstraintTemplates and the Constraints, but one difference is the engine. Notice that the engine can be K8s native validation or the OPA/Rego engine.
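As a hedged sketch of what a multi-engine ConstraintTemplate can look like under the experimental 3.14 support: the field names below follow my reading of the experimental schema (the `code` list with `K8sNativeValidation` and `Rego` engines) and may change, so treat this as illustrative rather than a definitive manifest. The label requirement itself is a made-up example.

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
  targets:
    - target: admission.k8s.gatekeeper.sh
      code:
        # CEL source, usable by the Kubernetes-native Validating Admission Policy path.
        - engine: K8sNativeValidation
          source:
            validations:
              - expression: "has(object.metadata.labels) && 'owner' in object.metadata.labels"
                message: "all objects must carry an owner label"
        # Equivalent Rego source, evaluated by the OPA engine.
        - engine: Rego
          source:
            rego: |
              package k8srequiredlabels

              violation[{"msg": "all objects must carry an owner label"}] {
                not input.review.object.metadata.labels.owner
              }
```

The idea is that the same template carries both sources, and the controller picks whichever engine the cluster and configuration support.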
Depending on the engine type, the Gatekeeper controller will decide whether it is K8s native validation or OPA validation. Then, depending on the Kubernetes version, it will either create the Validating Admission Policy, so the policy is executed with the in-tree capabilities, or, if the Kubernetes version is older and does not support Validating Admission Policy, it will go through Gatekeeper, and Gatekeeper will execute the CEL itself, because Gatekeeper imports the Kubernetes libraries that can interpret and execute this policy. This works similarly for audit: if you have a ConstraintTemplate with OPA/Rego, it will use the OPA driver, and if you have the K8s native validation engine, it will use the K8s native validation driver inside Gatekeeper. So let's look at a demo. We're going to show a quick demo where we use the Dapr runtime and a pub/sub broker. In this case we're going to use Redis, but Dapr supports many other brokers, so you're not limited to Redis. We're going to have two ConstraintTemplates: one of them is going to use K8s native validation, and the other one is going to use the Rego engine. Then we're going to get the Gatekeeper audit results and publish them to our pub/sub broker. Let me move so I can see it over there.
Alright. If you're familiar with ConstraintTemplates, this looks similar, but the difference is that this one is using CEL expressions instead of Rego. We're going to deploy our CEL-based ConstraintTemplate, and this is a regular Constraint for it. And this is another policy; this one uses Rego. So in this demo we're going to see CEL and Rego policies side by side, executed by Gatekeeper in an older Kubernetes version that does not support Validating Admission Policy by default; this is a kind 1.27 cluster. This is how you would get constraint violations today for a CEL policy. There are no changes as far as the user-facing side goes; you'll still get the audit results. And this is the new feature we added for pub/sub: you'll be able to subscribe to the audit logs and retrieve them, and in this way you don't get limited by the etcd object size. So if you have a lot of violations, say 1,000 violations or whatever, you will be able to see all of them, and you will get both CEL and Rego violations, obviously. Okay, I'm going to go back up. A final shout-out to Conftest, which is an OPA project for running Rego on structured configuration data; they added support for Azure DevOps in their latest update. Thank you, and if you are interested in joining the community, here are some of the URLs and a Slack sign-up QR code. Please reach out anytime, get connected, and feedback is welcome, so let us know. Thanks, I'm just going to pop it over to you. I think we have a very short amount of time for questions. We are going to be in the Project Pavilion, or at least I will be, from 1 until 2:30 when it closes today. If you have any burning OPA questions, please come by and say hi and ask them then. Otherwise, the OPA Slack is the best way to get in touch with us; that's the QR code again if you did miss it. We are here at KubeCon, keen to chat to new people, new OPA users in particular. Any questions and queries, please do come by the kiosk
and say hi. I am going to grab my lunch and then I will be back there at about 1 o'clock, so maybe see you there. Thanks very much.