Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm your host today. My name is Whitney Lee. I'm a CNCF Ambassador and a Developer Advocate at VMware Tanzu. Every week, we bring you new presenters to showcase how to work with cloud native technologies. We will build things, we will break things, and we'll answer your questions. Today, we have Ritesh Patel and Dallas Sharma here with us to talk about policy-based governance for internal developer platforms. I'm so excited for this one. As usual, this is an official live stream of the CNCF, and as such, it is subject to the CNCF Code of Conduct. Please don't add anything to the chat that would be in violation of the Code of Conduct. Basically, be respectful and be kind to your fellow chat members and to the presenters, please. Friends who are joining us live, please do use the chat. It's so much fun to interact with you all. Even now, please chime in and tell us where you're tuning in from. I enjoy how global the community is here. As always, if you have questions during the presentation as we go, please put those into chat too, and we're actually going to try to have an interactive chat this time and answer your questions as we go. With that, I'm going to hand it over to Ritesh Patel to kick off today's presentation. Welcome, Ritesh. Thanks, Whitney. Super excited to be here. I'm Ritesh Patel. I am a co-founder at Nirmata. Nirmata is the company behind Kyverno, a CNCF incubating project, and we're super excited to share some of the use cases and other things we are seeing in the community. Before we get started, I'd just like Dallas to quickly introduce themselves and then we can get going. Sure. Thanks, Ritesh. Hello, everyone. Thanks for joining. My name is Dallas Sharma, and I work in Senior Customer Success at Nirmata. Wonderful. Awesome. Let's get started. I'm going to quickly share some slides, just to set the context, and then we'll jump into some demos.
Dallas will demonstrate some of the scenarios and use cases that we see, and how community members and users of Kyverno are enabling security and governance for internal developer platforms. Just to set the stage, what we're seeing today is the move towards internal developer platforms. A few years ago, different teams were using CI/CD tools like Jenkins to build their code and then deploy it to cloud infrastructure. What's happened in the last few years, with the emergence of Kubernetes as that core orchestration layer for containerized applications, is that mid-to-large enterprises have started building internal developer platforms. Essentially, they've created platform engineering teams that can deliver these platforms, and that enables several application and product teams. So that's the emergence of the developer platforms that we are seeing today in the cloud native space. Now, if we start double-clicking into this platform and seeing what these platforms consist of, you can actually map a lot of the components that are required or essential to operate and manage these platforms to deliver the capability that these teams need. Obviously, the foundational layer tends to be an orchestration engine, which in most cases is Kubernetes, and then there are different areas like observability, security, and policy and governance across all of these different applications. Now, GitOps is an area which is increasingly becoming popular for deploying applications to Kubernetes. There are projects like Argo CD and Flux that are super popular amongst GitOps adopters, and then there are projects like Backstage with its service catalog. So this is not necessarily meant to be a comprehensive list, but just an idea of what a typical internal developer platform consists of. A lot of these components are replaceable, and in several areas there are multiple projects that can address those requirements.
But today, we'll focus on an open-source policy engine called Kyverno, a project that came out of Nirmata. Before we get there, as you start thinking about platforms, what does it mean as far as governance is concerned? When we start talking about governance, these platforms are typically used by several teams with different types of applications, and that results in various challenges. How do you ensure compliance across these various teams? How do you prevent misconfigurations and reduce that back and forth between the operations team, the security team, and the development team? And you always want to maintain a clean security posture for your platform. So that's where having tools for governance and security comes in. It allows you to reduce risk in terms of failures and security misconfigurations, and it also provides a consistent development environment to the various teams, so they're not dealing with disparate tools and technologies. So that's where the governance layer comes in, and Kyverno is an open-source policy engine that is part of the CNCF as an incubating project. This was a project that we at Nirmata donated to the CNCF about three years ago, and since then it has seen tremendous adoption. You can see some of the numbers on the bottom right: lots of image pulls, increasing GitHub stars, and active contributors and Slack members. I think the Slack members are over 2,000 at this point. It's a very active community. The idea behind Kyverno is that for Kubernetes, given the declarative way in which applications are deployed, something Kubernetes-native was required. You see that across the board with some of the projects I listed before, like Crossplane and Argo CD; all of these are Kubernetes-native, not projects that were retrofitted or adapted to Kubernetes. So Kyverno is a Kubernetes-native policy engine.
It enables policy as code, where policies are written in Kubernetes-native YAML. What it allows users to do is shift security left, detect issues early, maybe in your CI/CD pipeline, and even prevent misconfigurations or insecure configurations from entering your clusters through admission control, which is very important to maintain that security posture of your platform. It also has the ability to do background scanning. Obviously, a Kubernetes environment is very dynamic, and there are always new issues and new challenges emerging, so background scanning is very important. And one thing that Kyverno does, which is quite unique, is it goes beyond just validation. It has capabilities to mutate, which is to modify incoming configuration or even existing configuration in Kubernetes. It can generate resources. It can verify images, ensuring images are signed and attested and so on. And it can even clean up resources. So there's a lot of capability around security, automation, and even automating some of the security aspects, which makes it very flexible and an ideal choice for internal developer platforms as you try to bring together all of these CNCF components. So with that, unless there are any questions, we can jump into the demo. That's great. I love an internal developer platform. I really liked maybe the second slide you put up that showed the whole stack, because it really shows, I think, the idea behind the internal developer platform: before, every single product team, every single app team had to deal with all of these components on their own. Now those are being outsourced to a platform team, and the developers can focus on the more value-driven tasks. So I think that's super cool. And we do have a question. Well, we have two questions; one's just a logistical one, but we'll get to it. Is this session available offline? So this is streaming right now to LinkedIn and YouTube.
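To make the mutate capability mentioned above concrete, here is a minimal sketch of a Kyverno mutate policy (not from the presentation; the policy and label names are made up for illustration) that adds a default label to incoming Pods only when one isn't already set:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-team-label      # hypothetical policy name
spec:
  rules:
    - name: add-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # the +(key) anchor means "add this only if it is not already present"
              +(team): unassigned
```

Validate, generate, verifyImages, and cleanup rules follow the same ClusterPolicy shape, each with its own rule block.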
So if you find the YouTube link when it's over, YouTube, I believe, has a feature where you can download videos and then you can have it offline. And then our other question is for both of you: how does Kyverno compare to Open Policy Agent, which is another CNCF policy tool? Yeah, absolutely. Great question; this is a question we get every time, and it's not something we can go deep into today, but there are a few differences between Kyverno and Open Policy Agent. Both of them have their strengths and weaknesses. One of the biggest differences to understand, and I'll just use a slide on Kyverno to highlight it, is that Kyverno is Kubernetes-native, right? Whereas Open Policy Agent was conceived as a general-purpose policy engine; it predated Kubernetes. So with Kyverno, what you get is that Kyverno understands a lot of Kubernetes constructs, like labels, annotations, and pods being part of different controllers like Deployments and StatefulSets, which makes policy writing very, very easy. Now, when you're writing policies for Kyverno, the other main difference, and the other reason why Kyverno is so widely adopted, is you don't need to be a Rego expert to write policies. With OPA, you need to learn a language called Rego to write policies. There are obviously pros and cons. The benefit of using a language like Rego is that it's very, very powerful, but a lot of times it can get complex, and it has a learning curve. With Kyverno, policies are Kubernetes-native YAMLs. So the policies themselves can be stored in Kubernetes, and you can use any Kubernetes tools, GitOps tools, Kustomize, or any of those tools to manipulate or manage these policies. From that perspective, the learning curve is really short, and that's one of the reasons why Kyverno is so widely adopted. Apart from that, there are differences in some of the capabilities, things like mutate, generate, verify images, and cleanup.
Some of these, I believe mutate is available in OPA, but generate and other capabilities may not be directly available. They can be added and created, but those are just a few of the differences. And I'll share a blog link in the chat that goes deeper into these differences. Excellent, thank you so much. So we're ready to share Dallas's screen. Excellent, wonderful. Cool, okay. So yeah, as Ritesh suggested, you can have an internal platform where you use multiple tools to build your own internal developer platform. And this can be a pretty clean interface between a platform team and a developer team, where you have your own internal developer platform mixing different kinds of tools, and the platform you've built becomes the handoff point between the two teams, right? So this can be a combination of different tools. Something works for someone and something doesn't work for someone else, right? But today we're gonna talk about something similar using Crossplane and Kyverno. Ritesh has already introduced what Kyverno does, so I'll take a moment and talk about what Crossplane is. Crossplane is basically an IaC (infrastructure as code) tool which helps you deploy all your infrastructure in your cloud using code. So you don't have to go through the complexity of deploying infrastructure through a UI, navigating around and trying to understand where to click and how to click, right? You can just have the entire infrastructure as YAML. That's the good thing about IaC: you can codify your infrastructure, and you get all the advantages of codifying something and using YAMLs, right? You can store it in GitHub. You can audit it. You can enable version control for it.
So you can use it when you wanna troubleshoot, or even roll back for that matter. And think of the situation where you wanna destroy it in one region and just bring it up in a different region; it's just a matter of applying those YAMLs, right? So it's a pretty clean, good way of doing things. It's a convenient and smart way of doing things, rather than clicking ten different buttons or navigating through different providers, right? Like maybe you're in AWS, trying to figure out where to click in the first place. I go through that a lot. I'm an AWS fan, so when I went to Azure, I was like, okay, a new cloud. So yeah, those are challenges that Crossplane helps with. You don't have to learn new cloud providers all the time. So this is a standard example where there can be different kinds of tools you can build with, and then there's a workflow, but I'll show you what the internal developer platform we are talking about today looks like. I'm gonna demo what the self-service model looks like. You can provision self-service clusters, you can provision self-service namespaces. I'll talk more about it, but taking a step back, let's try to understand how it works. In this diagram, you're seeing that a developer will have a simple YAML file. This will be a Crossplane YAML file which will have the bucket, or the instance, or something as complicated as an EKS cluster, right? So they can just use a simple YAML, and just like a Deployment or a Pod today, they can apply that in the Kubernetes cluster. The moment that happens, Crossplane gets triggered, and Crossplane will deploy all the resources that are in the YAML in the cloud, right? So it's a pretty straightforward, simple configuration. Now, the thing that we need to set up over here is Crossplane.
So the platform team, once, will set up everything that is required to trigger Crossplane: all the providers, all the Compositions, all the definition files, which is like a one-time activity. Once all of that is set, the developers can actually use YAMLs and just apply them one after another to deploy all the resources. So yeah, that's enough of the slides. I'm gonna go back to the terminal. I think we all love terminals. So let's talk about what that YAML looks like, because I was saying that you take this YAML, you deploy it, and boom, that's done, but what does that YAML look like? What do those configurations look like? This is a simple bucket.yaml, and just let me know if you want me to increase the font; if you don't see it well, feel free to interrupt me and I can do that. So going back, this is a simple bucket YAML in Crossplane. You will see this is a simple YAML file, just like a Deployment or a Pod. Reading it, writing it, updating it is pretty simple for the developers. They can add the resources, they can add their bucket name over here. They can add the provider. Today I'm using the AWS provider, but you can use Azure or GCP. You can add the location, you can add your encryption, you can decide to make it private or public, and all the configuration that you do today to create an S3 bucket is something that you add in the YAML, and then you use this YAML again and again to deploy as many S3 buckets as you want, right? We do have a request to please increase the font. Yeah, sure. Is it better? It's better for me. There's a bit of a lag to get feedback, but I'll let you know if people still want a bigger font. Thank you. No problem. Cool, so yeah, this is the YAML and that's how it looks. Now let's deploy this YAML and see what the workflow looks like, right?
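A bucket manifest along the lines described above might look like this. This is a sketch, not the exact demo file: the API group shown is from the Upbound AWS provider, and the exact group/version and fields depend on which Crossplane provider the platform team installed; the bucket name and ProviderConfig name are hypothetical.

```yaml
apiVersion: s3.aws.upbound.io/v1beta1   # depends on the installed AWS provider
kind: Bucket
metadata:
  name: demo-team-bucket                # hypothetical bucket name
spec:
  forProvider:
    region: us-west-1                   # the North California region
  providerConfigRef:
    name: default                       # ProviderConfig set up once by the platform team
```

A developer applies it with `kubectl apply -f bucket.yaml` and can check status with `kubectl get bucket`, the same workflow they already use for Deployments and Pods.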
So this is my terminal over here. One thing I'd love to say is that with Crossplane, you have a simplified YAML that you can show your developers, so that they only see the knobs that they need to know about and need to turn, but it's backed by another type of YAML in Crossplane, called a Composite Resource Definition, where your ops people can add a lot of extra information. Right, right. Yeah. So I'm going to just start with an S3 bucket, and then I have a Composition as well, so we are going to dig deeper into the complicated resources too. Love it. So yeah, you can deploy this using kubectl apply, and you can simply use the same commands that you use today, like kubectl get pod. And if you have some kind of issue, or if you want to troubleshoot, you can use the same kubectl commands, right? So you don't have to understand CloudTrail, CloudWatch, ten different logging functionalities. Everything is something that you can do using kubectl itself. It's Kubernetes-native, it's pretty simple and easy. Now, once this bucket is deployed, you'll see over here, you'll see the name. It takes some time to get ready sometimes, and all the details will be shown over here. So yeah, it was that simple to deploy. One thing, Dallas, just to interrupt, I want to quickly add one thing. Obviously, just like we had a question about Kyverno versus OPA, there are going to be questions about Crossplane versus Terraform. And again, it's a similar answer: Kubernetes-native versus non-Kubernetes-native. So that's one aspect of it. But the other aspect is also the state management, the state aspect, right? Like Dallas mentioned over here, there are controllers in the background that will continue to check the state of that resource, in this case the S3 bucket, and let you know if it was created, right? And oftentimes it's not fire-and-forget when you're dealing with infrastructure.
You actually want to know if something you triggered actually happened and has the right configuration and settings and so on, right? And Crossplane, the way it's modeled and developed as a set of controllers and providers, enables that and improves on some of the earlier IaC-type solutions. Great point. And to add to that, in Terraform you have to actually manage the state file. That's something you don't have to do in Crossplane. Crossplane provides automatic reconciliation, so you don't have to take the state files and reconcile them again and again; Crossplane, using Kubernetes APIs, does that automatically for you, right? So these are some of the advantages you get when you go Kubernetes-native and use a Kubernetes-native tool, right? So yeah, all of this looks pretty fascinating and cool on paper, but we have seen with our customers that there are some challenges when they're using just Crossplane to provision their infrastructure, right? And if we talk about what these challenges are: today, we know that every cloud out there has some kind of best practices. For example, one simple best practice: S3 buckets should not be public, right? Or your security group should not have allow-all inbound rules. Or we have one best practice in Nirmata itself, that all our resources should be in just the North California region. It becomes much simpler to optimize your cost, manage your resources, and clean them up. And one day you don't have to figure out, oh, who created that instance in North Virginia? Who's that? You don't have to go through all of that, right? So to avoid all those struggles, every customer out there has some kind of best practices that they want to make sure are incorporated. And then they have security and compliance requirements for all their cloud providers.
So how do they make sure that they enable developers first, using the self-service model, while the platform team still has some control and is actually checking all those security boxes? That's where Kyverno comes in very handy. Using Kyverno policies, you can make sure that the configurations are compliant with your cloud governance. So all those security boxes are checked, all the governance is in place, and all your best practices are being followed, while you're still allowing developers to create all these resources in a self-service fashion, right? So what does that look like? This is one simple Kyverno validate policy that I'm sharing on my screen, and this policy basically validates that no S3 bucket should be public, right? What it's doing is: it's a simple ClusterPolicy, it's a validate policy, there are a bunch of annotations, and as Ritesh was saying, this is Kubernetes-native and you don't have to learn a language. That's what it looks like: simple YAML. So you can create it, you can update it, you can play around with it, you can develop new policies, right? There's a whole lot of possibility and customization you can do when you don't have to learn a language, and it's very simple to create policies. So yeah, these are some of the rules in the policy. You can see this is getting applied to the Bucket API, and there is a validate message saying that public S3 buckets are being blocked, and this is what it is checking, right? It's checking the public ACL parameter to make sure that no S3 bucket is public. Similarly... I was just gonna say, what I really like about it is that it's human-readable. Like, I've never really looked at a Kyverno policy before, and I can figure out what's happening. Yeah. So that's super cool. Yeah. And this is one more policy, a region policy, and you'll see that this is, again, pretty simple.
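A policy along the lines of the one described above could look like the following sketch. This is not the exact demo policy: the matched `Bucket` kind is the Crossplane custom resource, and the `acl` field path under `forProvider` is an assumption about the provider's bucket schema, so adjust both to the CRDs actually installed.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-public-buckets
spec:
  validationFailureAction: Enforce     # block at admission rather than just audit
  rules:
    - name: check-bucket-acl
      match:
        any:
          - resources:
              kinds:
                - Bucket               # the Crossplane bucket custom resource
      validate:
        message: "Public S3 buckets are blocked."
        pattern:
          spec:
            forProvider:
              acl: "!public*"          # hypothetical field path; rejects public-read etc.
```

Because the policy is itself a Kubernetes resource, it can live in Git and be deployed through the same GitOps pipeline as everything else.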
You have a lot of messages and annotations that can help you navigate what this policy is doing. So it tells you that using any location other than this region is not allowed. It's checking this region, right? It's pretty straightforward. You can create your own policies and just make sure that your best practices are being followed across your environment. Now, do we have a question? Does Kyverno allow mutating policies or just validating policies? It allows mutating policies, validating policies, and generating policies as well. The third one is generate policies. Mm-hmm, cool. Yeah. Okay, so now I'm gonna just apply that region policy, because that's one of my favorites. So we're gonna apply that policy and see how it works. I have applied the region policy, which means that you basically cannot create an S3 bucket in any region other than the one in my policy. Now let's see: let me delete the bucket that I created, and then I'm gonna show you that my bucket YAML has this region, whereas my policy allows only this other region. So basically it should just block this bucket, because it's not in the right region. Let's try applying it here. And voila, you see Kyverno is actually blocking it, and it not only blocked the resource, it actually tells you why it's blocking it, right? So the developers are actually learning what is not working and why that resource is getting blocked, and they can go back, change the region, come back, and apply it again, right? We have seen this in our customer environments a lot, where there's some firewall and we're banging our heads trying to troubleshoot what exactly is going wrong, right?
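A sketch of a region policy like the one being applied, assuming `us-west-1` (the North California region mentioned earlier) as the allowed region and the Crossplane `Bucket` kind as the target; the demo's actual policy name and region may differ:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-bucket-region
spec:
  validationFailureAction: Enforce
  rules:
    - name: allowed-region-only
      match:
        any:
          - resources:
              kinds:
                - Bucket
      validate:
        message: "Using any location other than us-west-1 is not allowed."
        pattern:
          spec:
            forProvider:
              region: us-west-1        # the only region this policy permits
```

On a violating apply, the admission webhook rejects the request and kubectl prints the `message` text back to the developer, which is the descriptive feedback shown in the demo.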
So Kyverno helps you with that. It provides descriptive messages, and this is customizable, so you can make it more descriptive or precise, help your developers understand what's going wrong, and make them self-reliant and self-sufficient: they can change the YAML themselves and apply it again. That's amazing. We have a question. Can we use Kyverno in OpenShift? Yeah, of course. You can use Kyverno on all cloud providers, on OpenShift clusters, kind clusters, every flavor of Kubernetes cluster. Excellent. Yeah, and I just shared a link that can be handy in terms of how to do it. We probably don't have time to get into it in this live stream; maybe in the next live stream we can do that. But yeah, it's definitely supported, and in fact some of the Red Hat products, like Red Hat Advanced Cluster Manager, also support Kyverno as a policy engine. Cool. So yeah, we saw how you can create a simple S3 bucket, right? Now let's look at how you can create some complicated resources, like an EKS cluster. For example, let me bring up the cluster claim; let me pull it up. Perfect. Yeah, so we have seen that when you want to create an EKS cluster, it's not just a matter of creating the one EKS cluster; you actually have to deploy ten different prerequisites. You have to deploy a VPC, subnets, route tables, security groups, and so on, many resources, to make sure that the EKS cluster is up and ready, right? So how do you make sure that when you're deploying these complicated resources, you're keeping developers self-reliant? There is something called a Composition in Crossplane, using which you can do that. I talked about what Crossplane is and how the platform team can actually deploy it. But before that, let me talk about how it looks from a developer's standpoint. What is the YAML, what kind of resources, and how will they create an EKS cluster?
How easy or difficult is it, right? So this is a cluster claim, basically, and this is what an EKS cluster creation will look like: a simple YAML with a bunch of variables that are exposed by the platform team. So let's think about what variables the developer team will actually need. They'll need the name, they'll need the node size, they'll need the node count; that's it. They don't have to worry about the VPC, the security group, and all that complexity, right? So you can use Compositions to hard-code all of that complexity, while parameterizing only the variables the developers will use, and they can just change these parameters and create an EKS cluster. So again, this is a simple YAML: I'm setting a node size, which is medium, a node count of two, which is pretty good, and I have a cluster name. And I do this very often in the first meeting. Okay, so I have a namespace that I've created for the developers; I think it's called team-a. So you can create a namespace called team-a and provide your developer team access to that particular namespace. Now, using this simple cluster claim, they can provision their EKS cluster in that particular namespace. It's just a matter of applying this YAML, and you'll see all of it get kicked off. Initially, the VPC will start coming up; you see the VPC came up already, then your subnets will come up, and automatically you'll see roles and everything getting provisioned. It will take like 10 or 15 minutes for your cluster to come up. So using a simple cluster claim file, you'll be able to trigger your entire EKS cluster. The developer will only choose three or four parameters, and that's for the platform team to decide. The platform team can decide, okay, do I wanna expose this parameter or not?
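The claim described above might be sketched like this. Everything here is illustrative: the API group, the `ClusterClaim` kind, and the parameter names are all defined by the platform team's Composite Resource Definition, so the real file in the demo could use different names.

```yaml
apiVersion: example.org/v1alpha1       # hypothetical API group from the platform team's XRD
kind: ClusterClaim                     # hypothetical claim kind
metadata:
  name: team-a-cluster
  namespace: team-a                    # the namespace the developers have access to
spec:
  parameters:
    nodeSize: medium                   # small | medium | large, mapped to instance types in the Composition
    nodeCount: 2
```

Everything else, the VPC, subnets, roles, and the cluster itself, is filled in by the Composition, so the developer never sees that complexity.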
So they have the entire control, while they're also enabling developers to create EKS clusters in a self-sufficient manner, right? So we have a question, and I think I'll just restate what you're doing: can we create infrastructure with Kyverno? So to be clear, in this demo we're creating the infrastructure with Crossplane, and you could create it with other tools, like Cluster API. And the role that Kyverno is playing is making sure that the cluster that's automatically created follows company policy and company best practices. Is that correct? Yeah. That is correct as far as this demo is concerned, but Kyverno, as we talked about, has the ability to generate YAML through policy. So for Crossplane, if you wanted to, you could trigger the creation of infrastructure through a policy. It's certainly possible, and you could do it based on some event or some setting; it's all possible as long as it's done in a Kubernetes-native manner. Interesting. Like, essentially creating a custom resource, because ultimately Kyverno can mutate or generate any custom resource in the cluster, and that custom resource could be a Crossplane S3 bucket or something like that. Is that a common use case for generate? What's a common use case for generate? The most common use case, and that's what we are getting into in the next part of the demo, is around namespace as a service. Right now it's mainly generating for the Kubernetes-native use cases, right? But again, a lot of things are possible because of the flexibility that Kyverno has. And as for the generate use cases, we will talk about them in what we call the multi-tenancy use case, or the namespace-as-a-service use case, that's coming up. But before that, maybe Dallas can wrap up this one. Okay, sounds good.
Yeah, so I was just gonna say, we have another question before you move on to the generate thing, but wrap this up first and then we'll get to the questions, yeah. So yeah, this is the Composition that I'm showing, in case you're wondering where the resources, the VPC configurations and everything, are coming from. These are Compositions, which are a Crossplane configuration that you can set up beforehand. Here you will see that all of that is something I've hard-coded. I have hard-coded the node group. I have hard-coded the region, and what instance types small, medium, and large mean, right? And all the policy and setup you do as a prerequisite today when you wanna create an EKS cluster, you can take all of those settings, put them in a Composition file, and the platform team can deploy it once and keep the setup ready; that's it. They choose which variables they wanna expose, provide that YAML to developers, and boom, the developers can use that YAML again and again to create EKS clusters, right? So yeah, we see a lot of interaction, developers looking for clusters to test their applications, a lot of back and forth between developers and the platform team, and just creating an EKS cluster sometimes becomes a challenge, or getting an EKS cluster to deploy an application sometimes takes time, right? All of this can make developers self-sufficient. They don't have to wait for some team to create a cluster and provide it. And that helps them be more productive: within 15 minutes they can start deploying their application, test it, and done, right? One thing I really like about your demo: when I think of Kyverno, or policy in general, I think of security and I think of governance, but you're also showing an automation use case, which is pretty cool. And you also talked about cost optimization, which of course makes sense now that you've said it, but it's not something I had thought of before. Yeah, exactly.
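The hard-coding described above lives in the Composition. Here is a heavily trimmed sketch of what such a file could look like; the composite type (`XCluster`), API groups, and values are assumptions for illustration, and a real EKS Composition would list many more composed resources:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: eks-cluster
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1   # hypothetical; must match the platform team's XRD
    kind: XCluster
  resources:
    - name: vpc
      base:
        apiVersion: ec2.aws.upbound.io/v1beta1
        kind: VPC
        spec:
          forProvider:
            region: us-west-1          # hard-coded by the platform team
            cidrBlock: 10.0.0.0/16
    # ...subnets, route tables, security groups, IAM roles, and the
    # EKS cluster and node group resources follow the same shape,
    # with patches mapping the claim's nodeSize/nodeCount parameters in.
```

The platform team applies this once; after that, every claim the developers submit is expanded against it.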
And one example of cost optimization on this cluster: we can provide this to developers, but again, a pretty common challenge we see in real life, in practice, is how do you make sure your developer doesn't spin up nine nodes, right? It's gonna drive up so much cost, right? And the next thing, I'm gonna have to cut down this whole model, saying, no, no, this is not working for us, our cost is spiking high, right? So that's where policy comes into the picture; it gives you all the control you want, right? This is one simple policy that I have on the screen, and this is again a validate policy, folks. So just bear with me, we are coming to mutate and generate policies soon, but this is a simple validate policy. I'm not gonna talk through all of it, but this one actually checks that developers should use small nodes, so nothing should be created with large, and this is another policy that says the node count should be at most three, not like nine or ten; don't go crazy on node count, just keep it at two or three, right? And this is one policy that deals with cluster names. We have seen that enterprise customers have naming conventions, like dev clusters should start with dev- and prod clusters with prod-, right? So anywhere you see some possibility of human error, where someone might break the convention or not follow the best practices, you just create as many policies as you need and make sure they're being followed, right? So pretty quickly, I'm gonna deploy one policy. And can I interrupt with a question? Sure. Will this support OCI? Just the Open Container Initiative. Yeah, so Kyverno does support OCI. One of the capabilities Kyverno has is the ability to verify images and configure policies on images, and that's where the support for OCI comes in.
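The node-count and node-size guardrails described above could be sketched as a single policy with two rules. This assumes the hypothetical `ClusterClaim` kind and `spec.parameters` layout from the claim example; the demo's real policies may be split into separate files with different names.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: limit-cluster-claims
spec:
  validationFailureAction: Enforce
  rules:
    - name: limit-node-count
      match:
        any:
          - resources:
              kinds:
                - ClusterClaim         # hypothetical claim kind from the XRD
      validate:
        message: "A cluster with more than three nodes is not allowed."
        pattern:
          spec:
            parameters:
              nodeCount: "<=3"         # Kyverno pattern operator for numeric comparison
    - name: disallow-large-nodes
      match:
        any:
          - resources:
              kinds:
                - ClusterClaim
      validate:
        message: "Node size 'large' is not allowed."
        pattern:
          spec:
            parameters:
              nodeSize: "!large"       # negation: any value except large
```

A claim asking for nine large nodes would then be rejected with both messages at once, which matches the two violations shown in the demo.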
The other capability Kyverno has is ensuring that images are built from specific base images — again, more of a security use case. A lot of those use cases require Kyverno to interact with OCI registries and so on. In this case, I guess OCI means Oracle Cloud Infrastructure. Oh, okay — a different question. Which is absolutely fine, yeah. Within Oracle Cloud Infrastructure, Kyverno would be used with OKE, Oracle Kubernetes Engine, and it's completely compatible with that. In fact, we have customers who are using it there, right? There's nothing cloud-provider-specific in Kyverno. There are some nuances in terms of how it's deployed on individual clouds, but apart from that, it should work on any CNCF-conformant Kubernetes distribution. Excellent. Well, I just — there, I think I have all the acronyms straight. Yeah, that one stumped me a little bit. So meanwhile, I deployed the policy, and I've also made sure my cluster claim is not so good: I've set my node size to large and asked for nine nodes. Let's deploy this cluster claim and see how that looks now. I'm going to just apply it, and you'll see that Kyverno will actually block it. Kyverno says this is not allowed because of two problems — I see two policy violations. First, a cluster with more than three nodes is not allowed: you have nine, go back, change it, apply it again. And second, the node size is large, which isn't allowed, so go back and come again. Yeah, that's a pretty cool feature to have, and it lets you optimize your costs. We have FinOps policies that work perfectly well with your cloud providers, we have validate policies for your best practices, we have mutate policies to modify resources, and we have generate policies for automation.
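The node-size checks demonstrated above might be sketched as a single Kyverno validate policy like this — the `ClusterClaim` kind and its `spec` fields are assumptions that depend on the Crossplane setup, but the policy structure and pattern operators are standard Kyverno:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-cluster-size
spec:
  validationFailureAction: Enforce   # block the request, don't just audit it
  rules:
    - name: max-three-nodes
      match:
        any:
          - resources:
              kinds:
                - ClusterClaim       # hypothetical claim kind from the composition
      validate:
        message: "Clusters with more than 3 nodes are not allowed."
        pattern:
          spec:
            nodeCount: "<=3"         # Kyverno pattern operator for numeric checks
    - name: small-nodes-only
      match:
        any:
          - resources:
              kinds:
                - ClusterClaim
      validate:
        message: "Only small node sizes are allowed for dev clusters."
        pattern:
          spec:
            size: "small"
```

Applying the nine-node, large-size claim against this policy would be rejected at admission with both `message` strings, which is the two-violation error shown in the demo.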
So there are a lot of use cases, but I'm going to stop the cluster-as-a-service use case here and go to the namespace-as-a-service use case. But before that, do we have any questions on this use case? I guess there's some confusion. We have this link to the Kyverno policies, and our Oracle Cloud friend doesn't see Oracle Cloud listed in the policies. Yeah, so a lot of the policies in that list are cloud-agnostic, right? And most of them are contributed by the community. There are some policies that are cloud-provider-specific, and maybe Oracle-specific policies aren't in that list yet — we'll work with the community to add more — but a lot of the policies in there can be used on Oracle Cloud Infrastructure: all the pod security policies, and any policies related to Kubernetes resources, can be used on Oracle Kubernetes Engine, right? The vast majority of the policies are independent of any cloud provider. Excellent, thank you for that clarification. All right, Dallas, let's see the next part of your demo, please. Sure. So this is a standard namespace-as-a-service model. Now we've seen how you can provide cluster as a service to your developers and make them self-sufficient, but what about namespaces? We've seen customers with big clusters — 10 or 15 nodes, with 20 or 30 namespaces — where developers work in a shared model and each team is confined to a particular namespace, with access to only that one namespace, right? In these scenarios, there are multi-tenancy best practices that Kubernetes tells you to follow. For example, every namespace should have a network policy, just to make sure that your pod traffic is confined to that particular namespace.
So no application talks to an application in a different namespace — that's a best practice. Another best practice is to enable quotas, requests, and limits, just to make sure that someone doesn't bring in a big application and use up all your cluster resources, right? That way, one namespace can't impact the whole cluster. These are the multi-tenancy best practices Kubernetes suggests for shared clusters. So how do you make sure this happens in an automated fashion? Because think of this scenario: you have a big cluster with 20 namespaces, and you have 10 different clusters like it. How are you going to manage all of that, right? How will you make sure that this namespace belongs to team A, and that namespace was created by Dallas? You could go crazy with that, right? This is how Kyverno helps. Now, let's talk about how. Basically, I have a use case where Kyverno blocks namespaces without labels. So you need to make sure your namespace has a label — the label can be small, medium, or large — and depending on that label, Kyverno will automatically generate a quota for that namespace, right? So a developer can say: I have an application that needs a small quota, so I create a namespace with the small label, and it automatically creates the quota for me — requests and limits, network policies, all the things required for multi-tenancy. It will also mutate the namespace to add a label with my name, so the platform team understands: okay, Dallas created this, so if anything's wrong, reach out to Dallas, right? So, let's start from the very basics: if you don't have a label, the namespace should not be created. To start there, let's create this policy.
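A validate policy for that first step could be sketched like this — the label key `size` is illustrative, and Kyverno's `|` pattern operator expresses the allowed alternatives:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-namespace-label
spec:
  validationFailureAction: Enforce   # reject namespaces that fail the check
  rules:
    - name: require-size-label
      match:
        any:
          - resources:
              kinds:
                - Namespace
      validate:
        message: "Namespaces must carry a size label: small, medium, or large."
        pattern:
          metadata:
            labels:
              size: "small | medium | large"   # any one of the three values
```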
Now I have this — again, a simple validate policy — checking that a label should be present, and it tells you the label should be small, medium, or large. I'm going to quickly apply this policy, validate-namespace-label, and the moment I apply it, you'll see that no namespace without a label can be created. So now that this policy is applied, if I create a namespace without a label, it throws an error saying, okay, you need to choose a label or it's not going to happen. I have a demo namespace over here, this one, and it has a small label. So I'm going to apply this namespace, and it goes through — it says it's configured. That means it goes through and applies all the labels and everything, right? So now it's just a matter of the developers creating their namespaces with a proper label. After that, everything we were talking about — the network policy is automatically there in the namespace, and the quota is automatically there, right? With the small label, a small quota was automatically created in that namespace, along with requests and limits. And if you get this namespace with -o yaml, you'll see there's a label that got created saying it was created by dallas, right? So it becomes very easy for the platform team to understand that this was created by Dallas. The moment I apply, Kyverno creates the quota and the network policies for me using a generate policy, and it mutates the existing namespace by applying these labels. So let's go back and see what this policy looks like. What does a mutate policy look like? I have something called mutate-add-namespace-labels — this is what a mutate policy looks like.
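Concretely, such a mutate policy might be sketched like this — assuming the community-style pattern of stamping the requesting username onto the namespace (the label key is illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: mutate-add-namespace-labels
spec:
  rules:
    - name: add-created-by
      match:
        any:
          - resources:
              kinds:
                - Namespace
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # JMESPath variable resolved from the admission request's userInfo
              created-by: "{{ request.userInfo.username }}"
```

In practice the username may need sanitizing — for example with Kyverno's `replace_all` JMESPath filter — since Kubernetes label values can't contain characters like `:` or `@`.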
Let's understand line by line what it does. It looks very simple again — very similar to what a validate policy looks like. Basically, it has a name, it matches all the namespaces, and it mutates in this userInfo: it takes the username using JMESPath and appends that label to the namespace, right? So this is a simple mutate policy you can create to mutate your existing resource. You can add your trigger — here, the trigger is that the rule matches any time a namespace is created. But you can add triggers like: if a ConfigMap gets created, mutate this; if a Secret gets created, mutate this — any kind of mutation with any kind of trigger. You can play around with this; you can do many things. So that's a mutate policy; now let's understand the generate policy. We generated our network policy, quotas, requests, and limits. This is one generate policy we have here — again, simple and straightforward, similar to the validate and mutate policies. It has a name, it matches the namespaces, and then it uses a selector. If the selector is small, it creates a small quota — one CPU, 8 GB of memory, and so on. If the namespace selector is medium, it creates a medium quota — maybe four cores, 16 GB of memory. If it's large, it creates a large quota for you, right? These kinds of policies generate those YAMLs for you, so you don't have to — Kyverno creates the generated resources automatically. And this can be used for automation: if you have use cases where one trigger should generate ten YAMLs, or generate some resources, or mutate some resources, you can create a Kyverno policy and it will generate and mutate those resources. It's brilliant.
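The generate behavior described above might be sketched as one rule keyed off the `size: small` label — the quota numbers follow the demo, and medium and large would be parallel rules or separate policies:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-small-quota
spec:
  rules:
    - name: small-resourcequota
      match:
        any:
          - resources:
              kinds:
                - Namespace
              selector:
                matchLabels:
                  size: small        # only fires for small-labeled namespaces
      generate:
        apiVersion: v1
        kind: ResourceQuota
        name: small-quota
        namespace: "{{ request.object.metadata.name }}"  # the new namespace
        synchronize: true   # recreate if deleted; remove when the trigger goes away
        data:
          spec:
            hard:
              requests.cpu: "1"
              requests.memory: 8Gi
```

A second generate rule with `kind: NetworkPolicy` and a default-deny spec would cover the network-isolation piece the same way.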
It's like you have a super easy developer experience where they don't have to think about company policy — it's almost invisible to them. And if they go wrong, then, if it's designed well, they get a good message explaining why they're stepping outside policy. That's cool. Yeah, and one thing to add to that: there's also a configuration option to keep the resources in sync. So for example, if a developer goes and deletes that network policy, Kyverno will make sure it gets recreated, right? They can't just bypass it. Obviously you can also control that through roles and role bindings, things like that, but Kyverno essentially follows the same principle as Kubernetes: it treats the policy as the source of truth and makes sure that if the namespace label matches, that resource always exists, right? And on the cleanup side, if the trigger resource is deleted, the generated resources are automatically deleted too. That's how it helps manage the lifecycle. These are small things, but in large environments they add up and create a lot of challenges, and Kyverno addresses them pretty elegantly. What if you wanted to go outside of policy in a way that you have to — is it a matter of just finding whoever wrote the policy in the first place? Like, what if you need an exception to a policy? Yeah, that's actually a great point. When you say exception to a policy, you mean there's an application that wants to bypass that policy? One that needs nine large nodes, like in Dallas's demo, that couldn't get through. Say I have a really data-heavy application that needs that — how would I get around it? Yeah, so there is a construct within Kyverno called policy exceptions.
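As a sketch, a PolicyException for the data-heavy-cluster case might look like this — the policy and resource names are illustrative, and note that the PolicyException API has moved through `v2alpha1`/`v2beta1`/`v2` across Kyverno releases, so check the version your cluster runs:

```yaml
apiVersion: kyverno.io/v2beta1
kind: PolicyException
metadata:
  name: allow-data-heavy-cluster
  namespace: team-data              # exceptions are namespaced
spec:
  exceptions:
    - policyName: restrict-cluster-size   # illustrative name for the node-size policy
      ruleNames:
        - max-three-nodes
  match:
    any:
      - resources:
          kinds:
            - ClusterClaim          # hypothetical claim kind
          names:
            - data-heavy-cluster    # only this one resource is exempted
```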
And you can actually create these policy exceptions on a per-namespace, per-resource basis, and allow certain applications or resources to bypass that policy. So that is certainly available, and we see it very often, especially with security policies. A lot of times, if you're dealing with third-party applications and don't have the ability to fix those security issues immediately, you may want to request an exception and then get the issues fixed while the exception is in place. Wonderful. Dallas, I have a question for you. It sounds like if people want to do this demo on their own, you have a link for them — I put it up on the screen right now. Do you want to say more about that? Yes, of course. Let me just pull it up. This is a GitHub repo; all my policies are over here. If you want to check them out: all the policies for cluster as a service are in this folder, and namespace as a service, S3 buckets, all the configurations are over here. Maybe I'll also put in an installation document to help you install Crossplane, and using all these instructions you'll be ready to get it done in your environment. Cool. Get your own hands dirty — it's fun. Awesome. Cool. So yeah, these are some of the benefits. And while Ritesh was talking through policy exceptions, I also realized we have something called cleanup policies. We've all struggled with cleaning up cloud resources — again, for cost optimization; that's a problem for everyone today. So we have cleanup policies where you can actually clean up these resources. You can use them in a cron-job fashion, so that, say, your EKS cluster automatically gets deleted after two days if anybody forgets to delete it, right? That helps you clean everything up on a schedule. That's one thing that's available — just realized that.
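A sketch of such a cleanup policy — again assuming a hypothetical claim kind, and noting that the CleanupPolicy API group version also varies by Kyverno release:

```yaml
apiVersion: kyverno.io/v2
kind: ClusterCleanupPolicy
metadata:
  name: cleanup-stale-dev-clusters
spec:
  schedule: "0 2 * * *"             # evaluated on a cron schedule
  match:
    any:
      - resources:
          kinds:
            - ClusterClaim          # hypothetical claim kind
          selector:
            matchLabels:
              env: dev              # illustrative: only reap dev clusters
```

Matching resources are deleted each time the schedule fires; preconditions on resource age can narrow it further to, say, clusters older than two days.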
But yeah, apart from that, these are some of the benefits I'm showing on the screen. It helps you automate. It gives you consistency. Not to forget, the self-service style is fantastic to adopt. It helps you with security, best practices, and compliance, and it helps you scale well. And then we have community support out there. Kyverno is an open source project, we have a Slack channel, and if you have any issues, you can just drop your messages or questions — our entire team is quite active on Slack, so we're there to help. Is it in the CNCF Slack workspace? It's in the Kubernetes Slack workspace. OK, excellent. Cool. Is that the end? Yeah, I think so. If you have any questions, we can take them. Yeah, absolutely. So far, we've been staying on top of the questions, but I've really, really enjoyed your presentation, and you have just really amazing, relatable use cases that I think are really valuable. It's worth seeing — with internal developer platforms, there's no one-size-fits-all solution. You're building out custom APIs and such for your own internal development teams, and when you do that, you also need to build out your own policy. And what you said, Ritesh — you used the word elegant, and I like that word — these are elegant solutions. They're common sense; they make a lot of sense. Yeah, and quickly, to add to that: in the past, a lot of teams were struggling with these challenges, and one way to address them was creating operators, right? And when you start doing that, one issue is obviously creating the operators, then managing and maintaining them. That creates overhead — additional resources, time, and complexity.
So with Kyverno, you can certainly eliminate quite a few of these operators, especially the ones that manage the lifecycle of certain resources. Maybe for super complex things you still need to write an operator, but for basic lifecycle and resource management, Kyverno can handle it, right? So that's an advantage. Excellent. We have a question from Alan: I really like the multi-tenancy topic. Is there more material that speaks on best practices or learnings around this concept for Kubernetes? There is, in fact, and I can find a link and share it. CNCF has guidance on that topic, and we worked with CNCF on it as well. I think Dallas beat me to it — she's shared that link, so put that out there. Right, very speedy. And for multi-tenancy, there are a lot of strategies in terms of other technologies, like virtual clusters — KCP is one that I honestly don't completely understand, but I understand how it does multi-tenancy. And I'm going to put up another multi-tenancy link. Tell us about this link I'm sharing right now. Yeah, so the first one is the Kubernetes best practices for multi-tenancy — you can see what's there. Maybe I'll take a minute and talk through it, because these are pretty interesting things to talk about. So in Kubernetes, if you're using spot instances, or if you want to dedicate your nodes to a particular application, there's also something called a priority class, right? The list is long for multi-tenancy. Can we zoom in, please? Zoom in? Yeah. Okay — it's still a bit small, but we're talking about it at a high level, so it's fine; people can go there. So there's a tenancy model you can deploy where you stick all the pods to a particular node.
So you can add tolerations and taints to do that, and there's a Kyverno policy that automatically applies the tolerations to every pod based on labels. There's also something called priority class, for isolation. There are a bunch of recommendations you can dig deeper into to make things more and more tight and secure, and we have a Kyverno policy for each of these use cases — whether it's quotas, requests and limits, priority classes, isolation of nodes, things like that. So one link talks about what the best practices are all about, and the other link talks about what kinds of policies we have today. This is something you can start with. And not to forget, these are just YAML policies, right? So you can create and develop your own — you can play around and have fun with these policies. These are the seven policies we have, but feel free to explore more and create your own. That's super fun. Yeah. This has been a really wonderful presentation. Thank you so much, you two, for sharing your time and your expertise with us. We're getting a lot of nice "thank you" and "great content and conversation" messages and a lot of nice comments in chat. So really, really appreciate your time today. Is there anything you'd like to say in closing? No, I think just: if you're interested in learning more about Kyverno, like Dallas mentioned, join the Slack channel, join the community. We're always happy to hear about your use case and learn how we can extend and expand Kyverno to address some of your challenges. And at Nirmata, we are fully committed to ensuring that Kyverno continues to be the number one policy engine for Kubernetes, right? And we'd love to continue working with the community. Excellent, wonderful. And to our friend who's interested in using Kyverno on Oracle Cloud: if you have any trouble, it sounds like Slack's the place to go with more questions. So thank you, everyone.
Thank you for sharing your expertise, Dallas and Ritesh. Oh, I have a closing thing I'm supposed to say. Thanks, everyone, for joining today's episode of Cloud Native Live. It was great to have Ritesh Patel and Dallas Sharma here teaching us about policy-based governance for internal developer platforms. As always, I really love the interaction and questions from chat — y'all are the best. And thank you also to everyone who watches the recording later. Here at Cloud Native Live, we bring you the latest in cloud native code on Tuesdays and Wednesdays at noon US Eastern. So thanks for joining us today, and we'll see you again next week. Thank you so much.