OK. So welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm your host, Shahrir Al-Musaike Mital, and you can call me Mital Shahrir. I'm a CNCF ambassador, and I'll be hosting tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. In today's session, I'm stoked to introduce Andy, who will be presenting on open source policy at scale.

This is an official live stream of the CNCF, and as such, it's subject to the CNCF Code of Conduct. So please don't add anything to the chat or the questions that would be in violation of the Code of Conduct. Basically, please be respectful to all of the fellow participants and presenters. With that, I will hand it over to Andy to kick off today's presentation. So let me add Andy. Hey, Andy, how are you?

I'm doing well. How are you?

Yeah, I'm fine too. So I guess we can start the session, right?

OK, let's get started. Let's get going. So I'm Andy. I've been working in Kubernetes for about seven years now. I'm the CTO at Fairwinds. We build software on top of open source, doing policy and governance in Kubernetes. Today I'm here to talk about policy at scale, but also about the many different ways we can implement policy in our clusters, some of the trade-offs involved, how some of them scale and some of them don't, and to share some strategies for how we've seen our customers deploy policy in their environments. Oh, I should also mention, because I keep forgetting to add this to my intro: I'm now a co-chair of the Kubernetes Policy Working Group, which meets every couple of weeks. So please come talk to us about policy in the working group.

So let's dive into the demo here. Oh, and please ask questions; I believe you'll interrupt me at any point, so let's keep the questions coming. That keeps it interesting.

Yeah, awesome. So let me add those slides. Yeah. OK.

All right, so I always love to do things as a live demo, because I love to live dangerously and have the opportunity to break stuff in real time. More is learned from breaking things and fixing things than from just doing a shiny demo. So here I have three k3d clusters; I'm just running a multi-cluster setup locally. I actually borrowed this setup from the KubeCrash demo that we did earlier this year, so thank you to Flynn for setting up the k3d script. The repo is under my GitHub: it's github.com/sudermanjr/demos, and we're in the multi-cluster-scale folder here. There's a create-clusters script, which will just go ahead and create these three k3d clusters and then set up our contexts. So if we do a list-contexts (well, it's not list-contexts, it's get-contexts) we'll see all three of our contexts listed here.
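For anyone recreating the setup, a k3d cluster like these can also be declared in a config file rather than CLI flags. This is a hedged sketch, not the actual script from the repo: the cluster name is made up, and the API-server flags anticipate the validating admission policy feature gates discussed later in the session (k3d v5.x config schema assumed).

```yaml
# Hypothetical k3d config for one of the three demo clusters.
# The repo's create-clusters script may do this differently.
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: us-east            # assumed cluster name
servers: 1
agents: 1
options:
  k3s:
    extraArgs:
      # Enable the beta ValidatingAdmissionPolicy API on a 1.28 API
      # server, as described later in the demo.
      - arg: --kube-apiserver-arg=feature-gates=ValidatingAdmissionPolicy=true
        nodeFilters:
          - server:*
      - arg: --kube-apiserver-arg=runtime-config=admissionregistration.k8s.io/v1beta1=true
        nodeFilters:
          - server:*
```

Creating a cluster from this would be `k3d cluster create --config us-east.yaml`.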
And I have a handy script here that will run kubectl commands against all three clusters, so we can see we have just the base kube-system stuff running here. Now I'm gonna kick off a script called install-argocd. If we watch that fly by, what we're gonna see is that we're bootstrapping an Argo CD installation in each one of these clusters. So we're creating the namespace, we're applying the manifests/argocd files, which have been templated from the Argo CD Helm chart, and we're gonna wait for those pods to get ready. Then we're gonna add some repo credentials from a local GitHub token that we have. And then we're going to apply one manifest, which is called app-of-apps. It's an Argo CD app of apps; you may be familiar with the pattern. Basically, we create a folder that has a bunch more Argo CD apps in it, and Argo CD uses that to expand out and install all the things that we're gonna use as part of this demo.

So if we go take a look at the repository (again, sudermanjr/demos, in the multi-cluster-scale folder), we have a manifests directory. We see the Argo CD apps, with the app of apps pointing at this folder here, and we have a series of other apps. So we have a cert-manager app, we're gonna install Polaris, we're gonna install Kyverno, we're going to install the Policy Reporter, and then we're gonna keep maintaining our installation of Argo CD. So we're installing a lot of things all at once, and I'm not gonna go through the details of every single one of them. cert-manager is really there for managing the certs for the admission controllers that we're going to stand up. The ones we're gonna focus on today are Polaris, Kyverno, and then some built-in Kubernetes stuff.
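The app-of-apps pattern boils down to a single Argo CD Application whose source path contains more Application manifests. A minimal sketch, assuming the repo URL from the demo; the path and sync options are illustrative guesses at what's in the repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/sudermanjr/demos
    targetRevision: HEAD
    # Assumed path: the folder of Argo CD app manifests toured above.
    path: multi-cluster-scale/manifests/argocd-apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Argo CD syncs this one Application, discovers the child Applications in that folder, and then syncs each of those in turn.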
And so to introduce all of that, I have this table here that I created, and it is mostly complete. What I realized coming into this was that at scale, we have five, six, maybe seven or eight options for doing policy these days. If you go back a few years, Rego and OPA were probably the only ones out there. Then we added Polaris to the mix, and then Kyverno came along. And now we have two more things: we have validating admission policies, which are coming in the latest versions of Kubernetes (not GA yet, but it's coming), and we also have pod security admission, which replaced pod security policies. That isn't really custom policy, necessarily, but it is a way to enforce policies, so I added it to the mix as well, and created this table of what all of these things can do, what they can't do, and some pros and cons of each one, because they all have different ways of using them.

I think Kyverno is probably the most popular one that I hear about these days. Polaris, not super popular. OPA Gatekeeper: we're all familiar with OPA and Rego, and people are very happy to complain about writing Rego; I understand that the learning curve can be steep. If we look across all of these, they all do different things well. Most of them do policy at admission time. Not all of them allow you to view or audit those policies. They all use different languages, so you may be comfortable writing in a specific language or already familiar with one of these, and may want to pick that for your policy. Some of them, namely pod security admission, have no custom policy at all: you pick one of three levels and you get a set of policies out of the gate, and that may be enough for what you need to do. And then there's this new thing called the Policy Report CRD, which the Policy Working Group has been working on and released. This is now a standardized way for these tools to report policy violations in your cluster, and not everybody supports it yet either. I believe OPA Gatekeeper does; I think I missed a check mark in there. I think there's a plugin for Gatekeeper to support the policy report now. Please correct me on that if I am wrong.

But anyway, I think the takeaway here is that there are six or seven different ways to do policy, and a layered approach might be the right way for a large-scale policy implementation, because some of these are good at some things and some of them are not. So in order to demonstrate that, we're gonna use this demo environment, and I'm gonna show the capabilities of several of these tools.

Let me jump over here. We've installed Argo CD, so if we run our k.sh script here, we should see that each of these clusters has Polaris and Kyverno installed. Additionally, we have enabled, via API server feature flags, the ValidatingAdmissionPolicy feature and the admissionregistration API on each of these clusters. This is how we enable the new validating admission policy support on a 1.28 cluster, because it is in beta.

So let's talk about what that looks like first. If we look in our manifests policy directory, we have an HA policy, and there are two objects we have to create in order to use validating admission policy. The first is the ValidatingAdmissionPolicy itself, which looks something like this; it's pulled almost directly from the docs as an example of how to use CEL in a validating admission policy. What this says is: for any deployment, upon creation or update of that deployment, we want to validate this expression, namely that the replica count must be greater than or equal to three.

Now, if you are super familiar with Kubernetes, deployments, and HPAs, you may go, well, that's not necessarily a great policy to have everywhere, because what if I have an HPA? If I'm using a horizontal pod autoscaler, I don't need the replica count to be set on this deployment at all. Actually, it's a bad thing to set, because it will cause flapping of the replica count. And this is where we start to highlight some of the limitations of CEL: it can't be context-aware, because it's evaluated at admission time against a single object, and there's no way to say, OK, only do this if the deployment doesn't have an HPA. So that's one of the limitations of validating admission policy right there.

So we create that object, and it's being applied by Argo CD in our cluster. If we get validatingadmissionpolicy (that's fun to type, and I typed it wrong anyway; there we go, validatingadmissionpolicies, I'll just grab that), we'll see that it has been applied to the cluster. It's called ha-policy. And then we have a second object in our policy folder for the HA policy, which is the ValidatingAdmissionPolicyBinding. This basically says what to do with that policy: we've created the policy, and now we need to say what we're gonna do with it. In this case, we're only gonna warn, and we're gonna audit. So this will add context to audit events for this policy, and it will also warn when a violating object is applied. From here you can also restrict where this gets applied; you could say, don't apply this policy in specific namespaces, or don't apply it for specific named workloads based on different patterns. There are a lot of options there, and they're all in the documentation, but we stuck with a very basic one for this demo.
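For reference, here's roughly what that pair of objects looks like, staying close to the upstream docs example the demo is based on (the 1.28 beta API). The object names and message text approximate what's described, not the repo's exact files:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicy
metadata:
  name: ha-policy
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas >= 3"
      message: "replica count must be greater than or equal to 3"
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: ha-policy-binding
spec:
  policyName: ha-policy
  # Warn on apply and annotate audit events rather than hard-blocking.
  validationActions: ["Warn", "Audit"]
```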
So what I'm gonna try to do is apply a manifest. We have a folder called demo here, and I'm going to apply that deploy-good.yaml. I'm just gonna do it in the polaris namespace, because I know that namespace is available and it works. And what we're going to see (oops, can't do that namespace, that's all right) is this: we're going to create the namespace demo and then apply that manifest into the demo namespace, and we should see a warning come up from our admission controller, hopefully. There we go: validation failed for the policy ha-policy with its binding; failed expression, replica count must be greater than or equal to three.

So you can start to see that with native Kubernetes resources, which will exist in later versions of Kubernetes, we can very easily and quickly deploy policies that prevent us from doing whatever it is we don't want. Perhaps we want to enforce that everything has a label for a specific team, something that's very common for cost allocation and for understanding who owns what in your large-scale Kubernetes deployment. So that's CEL, and that's how you get started with it. I think it's gonna have a place in a larger policy strategy, where there are some things that just shouldn't happen anywhere and we need to get that policy in place as soon as we build our clusters. This is just the base level. Now the downsides: it can only apply to single objects, and we really have no way of auditing existing objects in the cluster with this policy. It's admission-time only; we can see violations in the audit logs if you have those going somewhere, but that's really the extent of what it's capable of. So that's the first strategy, validating admission policy with CEL.

The next one we're gonna talk about is pod security admission. This is a very straightforward and easy way, even easier than validating admission policy, to restrict certain things in your cluster, but it's very inflexible. If we look at the manifests, for example our cert-manager namespace here, we've added a couple of labels to the namespace: a pod-security.kubernetes.io/enforce label and a pod-security.kubernetes.io/warn label. So we're going to enforce the baseline pod security level and warn on the restricted one. There are three levels (privileged, baseline, and restricted, and I think those are the only three), and they're increasingly restrictive. Baseline is what it sounds like, a baseline of security requirements, and restricted goes a little bit further than that.
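Since pod security admission is configured entirely through namespace labels, the whole setup fits in a few lines. A minimal sketch matching what's described, enforcing baseline and warning on restricted:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    # Reject pods that violate the baseline profile...
    pod-security.kubernetes.io/enforce: baseline
    # ...and emit client-side warnings for anything that would
    # fail the stricter restricted profile.
    pod-security.kubernetes.io/warn: restricted
```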
And we can see that in action if we apply a namespace YAML for the demo namespace. We have a similar setup here, but we're going to enforce the restricted policy. Actually, let's go ahead and show it with what we were just talking about: we're going to enforce the baseline policy and warn on the restricted policy, and we're going to apply this demo-privileged manifest. This is a deployment that is privileged; I think it has CAP_SYS_ADMIN, and I think it has basically all the permissions I could possibly give this pod. So we're going to get some warnings that we would violate the restricted policy, since we're warning on that. And then we also see our CEL warning here as well. But look at the rest: host network is true, the container is privileged, allowPrivilegeEscalation is not equal to false. There's a whole laundry list of security problems with this that are now just automatically being warned about in the CLI. And because we're actually enforcing baseline security, we will see that the deployment for demo-privileged isn't running any pods.

So if we look at the replica set that was created by that deployment and we describe it, what we're going to see is a bunch of events where it tries to launch a pod, and the pod security admission controller fails the creation of that pod. So this is a very straightforward way to very quickly get some security requirements into your namespaces. The downside is that there are no exceptions. Once you've labeled that namespace, you can't write an exception that allows a deployment to be created that violates these rules, and you can't change the rules; all you can do is move between the different tiers. So this is, again, a very easy way to start down the policy path, maybe to get some baseline policies into your app namespaces. But it's not a full, holistic strategy for deploying policy, because I can't do exceptions, I can't see this across multiple clusters, and I can't audit with it. I can check which namespaces are labeled with this set of labels, but I can't audit the objects already in the cluster. So it's really, again, part of a larger strategy for deploying policy.

I'm gonna have a quick drink. Please keep questions coming if you have them, post them wherever.

Yeah. No questions, all right? Still no? Yeah.

Okay, I will keep moving on then. So we've talked about these bottom two: we talked about pod security admission, we talked about validating admission policy. The next thing I'm gonna talk about is Polaris. Polaris is fairly straightforward to deploy; I've called it simple here. It's a single deployment in your cluster that has access to audit the cluster. There's a CLI associated with it, and it uses a single centralized config. I think there are pros and cons to that, so I put it in both columns here. It does support custom policy, it has a dashboard, which I will show you, and all of its policies are written in JSON Schema. Some of us may be familiar with that, and it's a relatively straightforward syntax to learn. As of this moment, it does not support the Policy Report API, but I believe we will be looking at adding that in the future.

So let's talk about Polaris a little bit. We have, in our manifests directory, the Polaris deployment templated out via its Helm chart. It includes a validating webhook configuration (it can do mutating webhooks as well), a deployment for the dashboard, and a deployment for the controller and the webhook itself. When we deploy Polaris into a cluster, again, we do that via a centralized config. So we can take a look at our Polaris configuration here, and I'm gonna open it as a YAML file so we get some syntax highlighting. We have a whole lot of checks built into Polaris, and you can control in this configuration whether each one is ignored, warns, or is danger-level and blocks. And then we can add our custom policies. As an example, I've added a custom check for image registry that says your images have to come from specifically allowed registries. We're targeting containers with that, so it will apply to any controller that creates a pod that has containers in it; you don't have to write anything fancy for that. We just include it in the JSON Schema (or sort of extended JSON Schema) here, and we say that the image has to match any one of these regexes. So this says they've got to come from several Quay repositories, us-docker.pkg.dev, or GCR.
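A sketch of what that custom check can look like in the Polaris config. The shape follows Polaris's custom-check format (a JSON Schema applied to each container); the specific registry patterns are illustrative stand-ins for the ones in the demo repo:

```yaml
checks:
  # Custom checks are enabled like built-in ones: ignore, warning, or danger.
  imageRegistry: danger
customChecks:
  imageRegistry:
    successMessage: Image comes from an allowed registry
    failureMessage: Image must come from an allowed registry
    category: Security
    # Targeting Container means this applies to any controller
    # that creates pods with containers; nothing fancy needed.
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          # Illustrative allow-list; the demo's list includes several
          # Quay repositories, us-docker.pkg.dev, and GCR.
          anyOf:
            - pattern: ^quay\.io/
            - pattern: ^us-docker\.pkg\.dev/
            - pattern: ^gcr\.io/
```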
So, a fairly straightforward policy there. We have additional ones for resource limits and constraints, where we can also target the container. And then we kind of give you-

We have a question, I guess.

Awesome. So the question is: how can we map the policies to CIS benchmarks? That is a great question, and there are a few different ways. As I get through Polaris here, I will talk about it a little bit more, but a lot of the different policy engines have policy groups that they map for you. They will say: this set of policies maps to the CIS benchmark, and this is how you do that. One of the things we discussed just this morning in the Policy Working Group is potentially putting together a sort of CNCF-blessed version of the policy packages for mapping to those benchmarks, or publishing specific mappings to specific policies. It's a big topic right now, because you go from policy to compliance, and they're two very different things, but they work together in tandem. I believe Polaris maps the CIS benchmark specifically in our checks; I don't know exactly how we state that, but we do say we cover the CIS benchmark. We've also got the NSA checks as well, but we don't necessarily have a way to map those just yet. We do in our commercial product, and I know several other commercial products do as well, but open source hasn't quite gotten to that point yet. A great question, and something to think about as you're building your policy strategy: how are you doing that mapping, and where do you keep it?

So let's keep going down the road with Polaris. We've got our custom checks; we've got another one here that says you can't have host path mounts, because those are dangerous. And then we have exemptions. This is where we start to get into a more complex policy strategy, because there are things in your clusters that are going to need host path mounts, and there are things that are going to need host networking, particularly CNIs and various storage controllers and the like. So we have the ability to say, centrally from this config, which I'm deploying across all of my clusters via GitOps, that these things are allowed to violate these rules. For example, the local path provisioner, which is common in my demo clusters, is allowed to have host path mounts, because it needs to; that's its entire job, creating host path mounts for the storage provisioner.

This is kind of the next level up: we need to look across all of our clusters, see what needs exceptions, and figure out how we control those exceptions. Polaris allows you to create exceptions via annotation, but that creates a security problem, where someone could escalate their own privileges by using that annotation. So we have to restrict the use of that annotation, or we have to use a centralized config instead. When we're talking about deploying this at scale across multiple clusters, not only do we have to have a valid list of exceptions, we have to think about how we're letting people use those exceptions and how they're approved. In this case, we have our Polaris config in our manifests here, so if we go to Polaris and look at the Polaris config map, the settings are all right here.
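And the exemptions block in that config looks something like this: a sketch based on the local-path-provisioner example just mentioned, with the namespace and rule ID as assumptions:

```yaml
exemptions:
  # Allow the local path provisioner to use host path mounts;
  # creating host path mounts is its entire job.
  - namespace: kube-system          # assumed namespace
    controllerNames:
      - local-path-provisioner
    rules:
      - hostPathSet                 # assumed ID of the host-path check
```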
So, we could have a process by which people request exceptions by making a pull request into this repository. That could be our exception process: we can document it, it's tracked in Git, and we can have approval processes. That's one way to have a workflow for adding exceptions. And Polaris does allow us to say: only take exceptions from the centralized config.

We can also take a look at the Polaris dashboard. Hopefully this is working. We'll see. All right. Right now we're filtered down to a single namespace. So if I am the owner of the demo namespace (which, I think we jumped clusters here; which cluster am I in? I'm in the EU cluster, so that's this one), we can filter down to the demo namespace, where I'm the owner, maybe, and look at just the findings for that namespace. So you could provide this to your developers, or the folks deploying to the cluster, and they can see these warning policies in a nicer dashboard.

And then there's the last way you could implement this policy across all of your clusters. In this case, we have a single repository controlling all the YAML for this cluster, right? It's this manifests directory; all of the YAML is here. So we could use the Polaris CLI to audit the YAML in this repository. We could put that in our CI/CD pipeline and warn or block on these checks before they even get into the cluster, shifting the responsibility for responding to these policy violations to the left. So if we run an audit with the audit path set to this manifests directory, we can use the same config we were just looking at; the same one we're running in-cluster works on the CLI. And if we format the output a little nicer for the purposes of the demo, you can see all of these warnings directly in CI, and we can control whether that fails or passes the pipeline. So we've now run the same policies in CI, in-cluster, and at admission time, all three places where we're checking these policies, using one centralized config to manage all of them. When you are looking at scaling policy up across multiple clusters, multiple teams, and multiple deployments, having multiple ways to interface with that policy is super important.

So that's Polaris. We've covered CEL, we've covered pod security admission, and we've covered custom Polaris config and the CLI. Again, the polaris audit command gives us different ways to control exit codes; we could say you have to get a certain score, or you fail if there are any danger items, and customize how we enforce these policies across our clusters. A sketch of what that might look like in a CI pipeline is below.
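To make the shift-left piece concrete, here's a hypothetical CI job running the Polaris CLI against the manifests directory. This is not from the demo repo, and the download URL and paths are assumptions, though `--set-exit-code-on-danger` and `--set-exit-code-below-score` are real audit flags for controlling pass/fail behavior:

```yaml
# Hypothetical GitHub Actions workflow; adapt paths to your repo layout.
name: policy-audit
on: [pull_request]
jobs:
  polaris:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install the Polaris CLI
        # Assumed release asset name; check the Polaris releases page.
        run: |
          curl -sL https://github.com/FairwindsOps/polaris/releases/latest/download/polaris_linux_amd64.tar.gz | tar xz
          sudo mv polaris /usr/local/bin/
      - name: Audit manifests with the same config used in-cluster
        run: |
          polaris audit \
            --audit-path ./manifests \
            --config ./manifests/polaris/config.yaml \
            --set-exit-code-on-danger \
            --set-exit-code-below-score 85
```

The same centralized config drives CI, the in-cluster dashboard, and the admission webhook, which is the point: one source of truth, three enforcement points.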
And then the last one we'll talk about, which I know lots of folks are fans of, and I'm a big fan of as well, is Kyverno. In this cluster, in our manifests directory, we have a policy app, which we looked at earlier; that's how we deployed that validating admission policy. There are a few other policies there as well. So we've got our Kyverno policies in code here: we're managing policy as code and deploying it to the cluster with GitOps. In fact, we're deploying to all three clusters via GitOps. We have this Kyverno policy that says we don't want to deploy stuff to the default namespace. Pretty common thing to not want to do: the default namespace can't be deleted, so it's a little bit of a difficult thing to work with, and you don't necessarily want to deploy stuff there. We have a bunch of metadata about this policy, and Kyverno gives us this match-and-exclude rule set; they've got their own syntax for writing these rules. I'm not going to go in depth on how to write Kyverno policies. They have a massive library of policies available in their GitHub repo, some of which map to the benchmarks, to our earlier point. Actually, if we go look at the Kyverno documentation and go to their policies section, they have the specific CIS benchmark in there, and they have groups of policies mapping to different things. So you can take a look at all those policies and see how they map, and they've even recreated the pod security standards that we talked about earlier, right there in their policies. So if you did want to use just one policy engine, you could, and replicate the things that pod security admission does.

We apply these policies as Kubernetes objects. So if we get clusterpolicy across all of our clusters, we'll see that we've used our GitOps controller, Argo CD, to apply the same policies to every single cluster. That gives us a very consistent experience across the clusters, but given the abilities of GitOps, we could also tweak which policies go into which cluster based on how we organize our deployment here. Kyverno also has the ability to just audit or enforce, same as Polaris, where we had warnings versus danger items that block.

And then Kyverno also outputs what's called a policy report, and this is where we start to get the ability to audit across all of our clusters. We get a summary of all the objects that are passing all of our policies, failing them, or that have other issues. It's kind of a verbose thing, but if we take a look at Policy Reporter, which is another open source project in the Kyverno ecosystem, it allows us to view these policies and what's failing. Nothing's actually failing right now; that's cool. If we look at our policy reports, we can start to see all the things that are passing: all of our namespaces are passing these checks, and we can take a look at what policies we have. For example, I talked about requiring a team label, so we have a require-labels policy. It's in audit mode, which means it's only going to show up in these policy reports; it's only going to warn us that the label is missing, and we can go through and make sure everybody's using it. We can look at the details: what's passing that check and what's failing it. In this case, we have nothing failing that particular check at this time. Although, if we go look at the EU cluster and go back to the dashboard, we might have something failing. I'm not sure if we do or not... we do. I deployed those things into the demo namespace earlier, and now, in the EU cluster that I've been deploying into, we see our failing checks for requiring the team label. This deployment does not have the team label, so we have our policy violation there. We can filter down to different namespaces and just look at that demo namespace that we care about.
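Here's a sketch of the two Kyverno ClusterPolicies discussed, modeled on the versions in Kyverno's public policy library: one enforcing at admission time, one in audit mode so it only feeds policy reports. The demo repo's exact match rules may differ:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-default-namespace
spec:
  validationFailureAction: Enforce    # block at admission time
  rules:
    - name: validate-namespace
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Using the default namespace is not allowed."
        pattern:
          metadata:
            namespace: "!default"
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Audit      # report only, never block
  rules:
    - name: require-team-label
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "The label `team` is required."
        pattern:
          metadata:
            labels:
              team: "?*"              # any non-empty value
```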
And so that's another way to provide access to these policies to your developers, who might be using them, or running into being blocked by them, as we saw earlier. For example, if I edit this deployment and say, put it in the default namespace, we will see that we get blocked, because Kyverno has disallowed the default namespace and we can't deploy there. So you have to decide when the right time is to expose this, because this isn't a great experience for me as a developer: I go to deploy to a cluster and all of a sudden I can't. Well, that's too bad, right? Maybe it would have been better if we had something running in our CI/CD, or a pattern for deploying these applications that says up front: here's your standard deployment, it needs to go in a namespace, and here's the namespace you have.

All right. Do we have any questions, folks? Bring the questions in; that's what keeps these things interesting.

Yeah, we need questions now. Folks, some questions? Do you have any questions for me? Not yet, let's go.

All right, we will keep going. So, what else have I not talked about? Again, like I said earlier, we're using Argo CD to deploy all of these, and the nice thing about this (I'll have to go get my password again) is that if we take a look at Argo CD, it's a great way to deploy across all of your clusters. It's pulling in all of those directories that I showed earlier in the manifests directory. So if we look here at the policy app, we've got the validating admission policy that we looked at earlier, we've got the validating admission policy binding, and we also have our Kyverno cluster policies. And we're actually running Argo CD in the same way across all three clusters, pulling from the same Git repository. So now I have a way to control all of my many policies and make sure they're deployed consistently across every single cluster that I'm running. And if I go look at the Argo CD dashboard for the next cluster, the US West cluster, I'm gonna see the exact same policies being deployed. Which means that when I go to the Polaris config and I wanna create an exemption for (where are my exemptions? there it is) something in the cert-manager namespace, that's now going to apply to every cluster. And we're gonna deploy cert-manager in the same way across every cluster, using these templates that are all in this one single directory.

For infrastructure-level things, such as policy or cert-manager, this is a great pattern. It allows me to be super consistent across all of my different environments. There are probably ten different ways to organize this; you could run a centralized Argo CD server that pulls from this repo and pushes out to all of your clusters, and there's a sketch of that variant below. It's really up to you how you wanna do it, but the key is: be consistent, manage your exceptions centrally if possible, and enable your developers to modify those exemptions as needed, with an approval process, as you roll out these policies.

I was hoping to get more questions, but folks are quiet. Everybody's doing policy perfectly; they all know exactly how to do it.
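Before moving on: for the centralized variant just mentioned, one Argo CD instance pushing the same policy directory to every registered cluster, an ApplicationSet with the cluster generator is one way to sketch it. This is hypothetical; the demo actually runs a separate Argo CD per cluster, and the path is assumed:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: policy
  namespace: argocd
spec:
  generators:
    # One Application per cluster registered with this Argo CD instance.
    - clusters: {}
  template:
    metadata:
      name: 'policy-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/sudermanjr/demos
        targetRevision: HEAD
        path: multi-cluster-scale/manifests/policy   # assumed path
      destination:
        server: '{{server}}'
      syncPolicy:
        automated:
          prune: true
```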
So the last one that I haven't talked about much today is OPA Gatekeeper. It's a popular product; I have not used it a ton personally. I think folks find it difficult to write Rego, but once you get used to it, I actually really enjoy writing Rego. One thing Rego makes fairly straightforward (and I think you can do this with Kyverno as well) is contextual policies. We support Rego in our commercial product, and similar flavors of Rego work in the open source as well. If I go take a look at some policies that we're actually using in our clusters today, we can go into this Insights configuration that I have, into the OPA directory, and talk through a couple of policies.

So we've implemented the no-default-namespace rule here in Rego, but another thing we do with Rego is RBAC-type things. Basically, RBAC says what people can do, right? You can create policies that say you are allowed to create namespaces, or you are allowed to delete namespaces, but you can't say via RBAC that this specific namespace isn't allowed to be deleted. In a case like this, we might not want a specific namespace to ever be deleted; we might wanna protect it. So we can write a Rego policy that says you can't delete a namespace, or a specific namespace. This is something you can do with Rego that's a nice way to enhance RBAC if you want to be a little more explicit about things. And we can add conditions in here, like input.metadata.name equals "infrastructure": you can't delete the infrastructure namespace, or you can't delete kube-system. Not that you would try, I hope, but you can add a lot of different conditions. Rego feels a little bit funky because you're basically saying all of these statements have to be true in order to trigger a violation; it's a different way of writing your policy. And this particular one is an Insights-specific policy, so we include this action item that lets you attach contextual information to the violation. So that's what Rego looks like, at least in the context of Fairwinds Insights; it's fairly similar to how Rego would be written for OPA Gatekeeper.

And there's another one here, in the resources directory. One thing you can do as well is comparisons. In this case, we have a cpu-limits-too-high check, a .rego file, where we start to get a little more complex. What I'm calculating is the percentage difference between the request and the limit of a particular resource, and we're gonna say your CPU limit can't be more than 100% higher than your request. Sometimes you'll run into trouble, especially with memory and CPU limits, where you set a burstable amount such that if all of your pods bursted to that top amount, you would overwhelm your nodes. So we can set a threshold via Rego: we compute the percentage difference, we have all these useful functions for parsing CPU requests and limits, we round them, we do a little bit of math, and we're able to write more complex validations. That's probably another benefit of Rego: it allows a level of complexity that you might not get with JSON Schema, with Kyverno policies, or with CEL. So it's always good to evaluate the different policies you might want, the level of complexity, and where you can write them.
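In Gatekeeper terms, a namespace-protection rule like that would live in a ConstraintTemplate with the Rego embedded in YAML. A sketch under two loudly-labeled assumptions: the template and kind names are made up, and Gatekeeper's webhook has to be configured to intercept DELETE operations for this to fire at all:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sprotectednamespace          # hypothetical name
spec:
  crd:
    spec:
      names:
        kind: K8sProtectedNamespace
      validation:
        openAPIV3Schema:
          type: object
          properties:
            namespaces:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sprotectednamespace

        # Every statement in the body must be true for the violation
        # to fire; the "funky" all-conditions-true style noted above.
        violation[{"msg": msg}] {
          input.review.operation == "DELETE"
          input.review.kind.kind == "Namespace"
          input.review.name == input.parameters.namespaces[_]
          msg := sprintf("namespace %v is protected and cannot be deleted", [input.review.name])
        }
```

You'd then create a K8sProtectedNamespace constraint listing, say, kube-system and infrastructure as the protected namespaces.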
All right. One thing we haven't talked about yet is multi-cluster. We had our diagram here, and all of these tools, out of the box, are really designed to run in a single cluster, right? You've got your admission controller, which needs to be local to the cluster, and your dashboards (in this case, the Polaris dashboard and the Policy Reporter dashboard for Kyverno) only report on a single cluster. When I was looking at Polaris here, I moved between clusters by just changing which dashboard I was going to; this one takes me to the US West cluster in my demo. But it doesn't aggregate across clusters. And this is an interesting intersection of policy at scale and all of these open source policy engines: a lot of them reserve multi-cluster functionality for the commercial offering, which makes sense in a lot of ways. Fairwinds Insights, I know, does multi-cluster; I believe Nirmata's product does as well.

One thing you can do is use Policy Reporter's Prometheus metrics. If we had a multi-cluster Prometheus configuration, where we were aggregating certain metrics from all the clusters into a centralized Prometheus, we could recreate a multi-cluster dashboard in Grafana on top of that, using purely open source. But it does require a significant amount of effort to build that multi-cluster metrics pipeline and pull those metrics into a centralized place. So you have to make the trade-off: are we gonna pay for something that does this for us, or are we going to build it out ourselves with open source? I think multi-cluster Prometheus aggregation is probably not in scope for our demo today; I haven't built that yet. But maybe in a future demo we could talk about using pure open source to pull all these different things together across multiple clusters. Again: being consistent across your clusters is first and foremost, and having visibility across them is second, I think.

All right. So yeah, we have just one question today, I guess. Quiet today; it's December, and folks are staying out of the cold, at least in the Northern Hemisphere.

I'll give it another couple of minutes. I don't have anything else really to show as far as the demo. You can go to the repo and run the create-clusters script and the install-argocd script if you wanna take a look at all of this side by side; it automatically deploys all of those various things via Argo CD. I'll add a quick README to it later for folks who wanna try the demo themselves, maybe tinker with Polaris and Kyverno and CEL and validating admission policy and pod security admission, all side by side, because we do have a ton of different ways of doing policy, and I do think a layered approach is gonna be appropriate going forward. We'll probably expand this demo to include OPA Gatekeeper as well, and start to explore how we layer all these different things together and what's appropriate for different strategies, different levels of security you might need in your cluster, and different benchmarks or compliance standards you need to work with. Lots of opportunity here. And again, please come to the Policy Working Group meetings. We always love to have new folks joining, talking about policy, and working on the next big thing.

Yes. So yeah, that's awesome, I guess.
Let's wait a minute or two in case folks pop up with any questions. Yeah, all right. Great. In the meanwhile, if there's any link or GitHub repository that you would like to share with the folks, you can share it with us, I guess.

Yeah, let me grab the link to the specific demo. Do I just send it? I'll send it to you in the private chat and then you can post it?

Yeah, sure, cool.

I don't know if I can access the regular chat; I've never done that in one of these.

No issues, I can share that for you.

All right, I appreciate it. And then we can definitely share the link to our open source policy engine comparison; put that in there as well for folks to take a look at. I'll continue to expand that demo going forward, so keep an eye on it. And I can always be reached in the CNCF or Kubernetes Slack; I am sudermanjr, my GitHub username, pretty much everywhere.

Okay, so folks have no questions this time, but it was really awesome watching how policy at scale works and how you were applying these policies. So I think, as folks have no questions, we can end the session, right?

Yeah, I think we can call it. Thanks for having me; always a pleasure.

And thank you so much, Andy, for being here. It was nice having you in the session.

Thank you. Okay, thank you. Bye.

Okay, so thanks everyone for joining the latest episode of Cloud Native Live. We enjoy the interaction and questions from the audience. Thanks for joining us today, and we hope to see you again soon.