Thank you so much. Welcome, everyone. It's nice to have my very first conference be an international one, so I'm quite excited about that. I'm really looking forward to sharing some of the knowledge I've gained over the past year or two about the cloud and Kubernetes itself, and also the development knowledge I've gained so far through my internship at Nirmata. A little bit about me: I'm a software engineering intern at Nirmata, and I'm also a CSE senior, currently in the final year of my computer science degree at K.R. Mangalam University. I have also been a Linux Foundation LFX mentee on the Cluster API Provider AWS project, which falls under Kubernetes SIG Cluster Lifecycle. I'm also the creator and maintainer of the Kyverno AWS Adapter, which was released just around the time of AWS re:Invent. And I've been the former secretary of the Computer Society of India, KRMU chapter. So let's get started. When we talk about pod security, we all know what security is, but with pods there are different levels of complexity when it comes to containers. For example, if a container needs access to specific host services — say, Linux services — we might not want to allow that, because it's risky to grant any unintentional access to an image. Please let me know if anybody has questions or if any terminology gets complex, since this is a beginner's talk; I'll be very happy to address your questions midway. So in Kubernetes, pod security ensures that your pods are safe and secure from security-specific misconfigurations. These issues might arise exactly as in the example I just gave: you might not want host processes to be accessible from Kubernetes pods. That's just one example, and it led to the creation of Pod Security Standards.
Pod Security Standards were derived from the experience gained by implementing Pod Security Policies. Pod Security Policies were introduced in Kubernetes v1.3, deprecated in v1.21, and finally removed entirely in v1.25. But this removal did not come on its own: there is also the Pod Security Admission controller, which became stable in Kubernetes v1.25. It enforces the standards we have now — the Pod Security Standards — turning those standards into the policies it applies as enforcements within your Kubernetes clusters. But what are the Pod Security Standards? As I just mentioned, they are divided into three generalized profiles. There is the privileged profile, which is strongly not recommended, because in this mode you essentially grant full access to your resources. Then there is the baseline profile, which captures the kind of standards enforced whenever you create any generic Kubernetes pod: if you just use the `kubectl run` command, you are creating pods that satisfy the baseline profile of the Pod Security Standards. And then there is the restricted profile, which is heavily restricted. We'll see what these look like in an actual Kubernetes implementation later on. Now, what is the Pod Security Admission controller? An admission controller in Kubernetes means that whenever any resource is admitted into the cluster, there may be webhooks looking at that resource.
These webhooks can validate the payload of a resource as it enters the cluster, and decisions can be made based on what that payload contains. Before getting to the modes themselves: Pod Security Admission is applied at the namespace scope. The way to implement it is by adding a label to the namespace, in the format `pod-security.kubernetes.io/<MODE>: <LEVEL>`. The mode has to be one of `enforce`, `audit`, or `warn`; those are the only acceptable values. The level specifies which Pod Security Standards profile we want to apply to that entire namespace. You cannot apply these standards to specific pods; you have to apply them to the whole namespace, so every pod and container in that namespace is affected by the standard. Be very careful about which standard you use and which pods it is going to affect. Again, the level will be `privileged`, `baseline`, or `restricted`. Now, since the title itself mentions Kyverno — and this is entirely voluntary — how many of you have heard about Kyverno before? Anybody? OK, I can see four or five hands. That's great; it makes for an even better session. Kyverno in Greek means "to govern". What we mean by governance in Kubernetes is that we have control over what resources make their way into our clusters, and we can make decisions based on those resource specifications. In implementation terms, that means Kyverno is a policy engine.
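As a concrete illustration of the label format just described, here is a minimal namespace manifest. The namespace name `psa-demo` is hypothetical; the label keys and acceptable values are the standard Pod Security Admission ones:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: psa-demo                 # hypothetical name, for illustration only
  labels:
    # <MODE> is one of: enforce | audit | warn
    # the value is the level: privileged | baseline | restricted
    pod-security.kubernetes.io/enforce: baseline
    # optionally pin which version of the standards to check against
    pod-security.kubernetes.io/enforce-version: latest
```

Because the labels live on the namespace, every pod created in `psa-demo` is evaluated against the baseline profile; there is no per-pod opt-out.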
So we can write policies for Kubernetes in Kyverno, and these are entirely Kubernetes-native resources, which means that if you want to deploy a Kyverno policy in a Kubernetes cluster, you just have to write YAML; no other language is required. That is one of the coolest things about Kyverno. There are other implementations for policy enforcement in Kubernetes, such as OPA Gatekeeper, but as you may have heard, it requires the Rego language, which is specific to OPA. OPA definitely has many use cases beyond Kubernetes, but when it comes to ease of use, Kyverno is a step ahead because it only requires YAML; there's nothing else. You can either audit or enforce policies. By audit, we mean keeping a log of every resource that has tried to make its way into the cluster. Enforce — which I forgot to mention — outright rejects the admission of a resource if it does not comply with the Kyverno policy we have written. So enforce is very restrictive: if even one specification is misconfigured in, for example, a Kubernetes pod, the creation of that pod is simply not allowed. Now, there are actually four types of rules. There is the validate rule, where we check whether a resource trying to enter the cluster is valid, as the name implies. If it's not valid, you have the option of either just auditing the event in the form of cluster policy reports or policy reports inside the cluster, or enforcing the policy and instantly deciding whether the resource gets in or not. Then there is the mutate rule, which is pretty cool, actually.
If, let's say, there is a resource you want to modify on the spot as it makes its way into the cluster, you can simply change whatever part of its specification you want; based on the payload coming from the Kubernetes API server, Kyverno will apply the change exactly as your policy defines. Then there is the generate rule: you can also generate resources. For example, if a specific resource requires the creation of a Kubernetes Secret, you can detect that resource and generate a new resource — a new Secret — based on it. And then the fourth rule is the image verification rule, which is still in progress, but pretty much on its way to standardized use across the community. It can verify OCI images and the attestations associated with a Kubernetes pod, so you can check whether an image should be associated with a pod or even, say, a deployment. One of the other cool things about Kyverno is that you don't need to worry about whether to write a rule for a Pod or for a Deployment: Kyverno's auto-gen feature, which is specific to Kyverno itself, creates auto-generated rules for the pod controllers. So if you create a rule for a Pod, it is automatically applied to, for example, ReplicaSets or Deployments as well. So let's try to understand the architecture of Kyverno. At a very high level, as I've already said, an admission controller keeps an eye on every resource entering the cluster.
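Before moving to the architecture, the mutate rule just described can be sketched like this — a minimal example under the Kyverno v1 schema, where the policy name and the `team` label are hypothetical:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-label        # hypothetical policy name
spec:
  rules:
    - name: add-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # the +( ) add-if-absent anchor sets the label only when missing
              +(team): unassigned
```

Any pod admitted without a `team` label would be patched in flight; pods that already carry the label pass through unchanged.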
Based on that payload, Kyverno's admission webhook acts — actually, there are multiple webhooks in Kyverno, but essentially Kubernetes has two kinds of webhooks: the mutating webhook and the validating webhook. The validating webhook takes care of any validation to be performed on resources, and the mutating webhook takes care of modifications to a resource. The Kyverno admission controller utilizes these Kubernetes webhooks to make such decisions, and on that basis it can create report change requests (RCRs), policy reports, generate requests, and so on. This is a very high-level architecture; you don't need to understand all of it just to implement a Kyverno policy. You can start right away, today, without any of this knowledge. All you need to know is the kind of specification we use when writing a Kyverno policy. So here we have a really standard sample policy — I hope everybody is able to see the content. Let's go through it step by step. The API version for a Kyverno policy is just `kyverno.io/v1`, and the kind in this case is ClusterPolicy, although we have two kinds of policies in Kyverno: ClusterPolicy, and plain Policy, which refers to a namespaced policy. So we can create policies that apply throughout the entire cluster, or policies that apply only to specific namespaces; in the latter case we use Policy instead of ClusterPolicy, with the same API version. Next we have the metadata. You can see a lot of annotations here; these are not at all required if you just want to try Kyverno for the first time.
They just provide additional information about that specific policy; for example, you can categorize what the policy relates to. In this case it's a best-practices policy from the Kyverno website itself, and we can also provide a description if needed. Then let's get to the real deal: the spec of a Kyverno policy. Here is where the real action happens. We have a validationFailureAction, which we have set to audit; in audit mode, it only creates events and policy reports for anything unintended that happens. Then we have background mode, which ensures that resources that already existed in the cluster are also checked, even when the policy is applied at a later stage. Then we have the rules we define for our policy. A policy need not contain just one rule; it can contain many, which is why there is a list of rules here. In this case we have one rule, check-for-labels, and it simply checks for the existence of the label `app.kubernetes.io/name`: as long as this label is present in the metadata of any pod, the pod is valid. You can also see that we are currently matching against all resources of kind Pod. You can mention multiple resources here; it doesn't need to be just Pod. It can also be Namespace, or Deployment: you can list as many resources as you want under `kinds`, and the rule will validate all of those resources. That's basically the purpose of this policy. Deploying your first Kyverno policy is as simple as writing plain YAML. Now here's a policy to restrict privileged pods, which is very specific to pod security itself.
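For readers without the slides, the sample policy being walked through looks roughly like this — a sketch following the structure described above, with the optional metadata annotations omitted:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: audit   # audit: report only; enforce: reject admission
  background: true                 # also scan resources that already exist
  rules:
    - name: check-for-labels
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `app.kubernetes.io/name` is required."
        pattern:
          metadata:
            labels:
              # "?*" means the label must exist with a non-empty value
              app.kubernetes.io/name: "?*"
```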
Looking at the pattern here, we can see specific fields of the pod spec. This part may be new to you: it's a piece of syntax specific to Kyverno, though it's really the only Kyverno-specific syntax you need to learn. It's called an anchor. There are only about five or six anchors, and they are used for comparing or validating values within fields and their child fields in a specification. In this case, the pattern says: if any init container exists, proceed to its child elements; if a child named securityContext exists, look at its children as well; and if the privileged field exists, it must be false. That is the idea behind this entire syntax. This particular one is the equality anchor; there are four or five others, and they're all easy to use. Based on such logical conditions, you decide what pattern any Kubernetes pod — since here we are matching against all pods — must satisfy. Now, there is a new rule type implemented in Kyverno since the 1.8 release, known as the pod security validation rule. Earlier, there was no rule dedicated to Pod Security Standards or Pod Security Policies; with this rule, we can implement Pod Security Standards in a much more fine-grained, granular way. That matters because the standards by themselves do not provide any way to modify what is already defined. This is where Kyverno comes in: it gives you the ability to make exemptions, or exclusions, for different control names.
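The restrict-privileged-pods pattern being described can be sketched as follows; the `=( )` equality anchor means "if this field exists, its value must match":

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: enforce
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged mode is not allowed."
        pattern:
          spec:
            # only if initContainers exist, descend into each one's securityContext
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"
            containers:
              - =(securityContext):
                  =(privileged): "false"
```

A pod that never sets `securityContext.privileged` passes untouched; one that sets it to `true` is rejected at admission.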
So what is a control name, and what are the restricted fields? I will walk you through that as well, but this is essentially the syntax of any pod security rule in Kyverno. The parent field here is the validate field, which falls under the Kyverno ClusterPolicy spec itself. Inside the podSecurity field, we have `level`, which specifies which Pod Security Standards profile to enforce on the matched resources — pods, in our case — and `version`, which specifies which Kubernetes version of the standards to validate against. So you can create different pod security policies for different Kubernetes versions. Then you can make exclusions for different control names, and for specific restricted fields within those control names. There's one more example of this. In this example, with the same syntax as before, we have a baseline profile, applied against the latest Kubernetes version, and we are excluding the control name Capabilities. Capabilities are specific to Linux: they describe which host capabilities you want the Kubernetes pod to have. In this case, we are modifying the Capabilities control for the baseline profile, making an exclusion on the restricted field for any container — the asterisk is a wildcard symbol. Any container whose securityContext lists capabilities under `add` is allowed to have the values KILL, SETGID, and SETUID. Those are the values we are making exceptions for; every other value will still be validated against. And now I guess it's time for a demo.
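Putting the pieces together, the pod security rule with the Capabilities exclusion described above looks roughly like this — a sketch against the Kyverno 1.8+ `podSecurity` schema:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: pod-security-baseline
spec:
  validationFailureAction: enforce
  rules:
    - name: baseline-with-exclusions
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        podSecurity:
          level: baseline          # which Pod Security Standards profile
          version: latest          # which version of the standards to apply
          exclude:
            - controlName: Capabilities
              images:
                - "*"              # wildcard: applies to any container image
              restrictedField: spec.containers[*].securityContext.capabilities.add
              values:
                - KILL
                - SETGID
                - SETUID
```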
I know this might have been a lot of jargon, so let's see if we can make some sense of it now. OK, this might be very small; please let me know if you're able to see this. I think that might be good. In fact, let me just use the terminal, which might be more appropriate. For the demo, I'm not using a kind cluster or a minikube cluster; I'm just using an EKS cluster. There's no specific reason for that, except that my system might cry a lot for resources it doesn't have right now — it happens a lot. These first steps aren't something you need to worry about; they just give me access to the Kubernetes cluster. All right, so there is no other resource on the system. If you want to install Kyverno, there's very good documentation on the Kyverno website itself: just go to the docs and follow the installation steps. In my case, I'll be specifically using the Helm chart for Kyverno, so you just have to add the repository for the Kyverno Helm chart and you're good to install it. And it seems like my system has already started crying for some reason. OK, so let me first install Kyverno. I'm using the repository I've already added, so I don't need to add it again, and I'm using version 2.6.3, the latest stable Kyverno release right now. I'll just use this command, and it will deal with everything needed to install Kyverno. It's taking some time... all right, it's done. So let's look into the kyverno namespace itself; we will certainly have some pods there. Yeah, it's currently initializing... and it's running now. So now that we have Kyverno installed, let me first demonstrate how pod security actually works with the native Pod Security Admission controller itself.
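The install steps just performed amount to the following commands. This is a sketch: the release name and version pin follow the talk, and you should check the Kyverno docs for current values:

```shell
# add the Kyverno Helm chart repository and refresh the local index
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update

# install Kyverno into its own namespace, pinned to the version used in the demo
helm install kyverno kyverno/kyverno \
  --namespace kyverno --create-namespace \
  --version 2.6.3

# confirm the Kyverno pods are up
kubectl get pods -n kyverno
```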
We are not dealing with Kyverno right now; we are just using the native Kubernetes Pod Security Admission controller. If we want to enforce it, all we need to do is create a namespace. In this case, let me create a namespace called `baseline`, and to enforce pod security admission on it, I simply label the namespace with the baseline level. This is the baseline profile, which means whatever the default pod security standards are for the `kubectl run` command will be enforced here. So now our namespace has been labeled. If I just run the nginx image in the baseline namespace, it won't show any policy violations. But let's say I create this resource: a pod with privileged mode enabled. Privileged mode means the pod will be able to access all the host's resources. So if I just try to apply this pod — I'm pretty sure... OK, once again, I'll just copy this... yep — it tells me that the sctx-demo container must not set `securityContext.privileged=true`. What we have just seen is the Pod Security Admission controller at work. Now, if I set privileged to false, it should work as before. Let's see what happens... it works fine, and the pod is created. The problem with this approach is that it's quite rigid in terms of which capabilities, which standards, are enforced. If I show you the Pod Security Standards, they are already well defined; you cannot customize them. If you enforce these standards on a namespace, the pods have to follow these exact practices. For example, in the case of privileged pods, nothing is checked at all; it's basically handing the key to your door to any thief or stranger.
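The pod used to trigger the violation can be sketched like this; the pod and container names are approximations of what appears in the demo:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sctx-demo
  namespace: baseline              # the namespace labeled with the baseline level
spec:
  containers:
    - name: sctx-demo-container
      image: nginx
      securityContext:
        privileged: true           # rejected by baseline; set to false to admit the pod
```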
That's how privileged it makes your pods and resources. The baseline profile, by contrast, has specific practices and values defined for the different control names. We have already seen control names in the Kyverno policy specification earlier; these are the control names we were talking about. You can see that there are some restricted fields — and it has stopped scrolling... yeah, all right. So each control has restricted fields, the same restricted fields we saw earlier in the Kyverno policy, and these are the allowed values. Essentially, within a control name, these fields are restricted to the specified values. For the host process control, for instance, only these values are allowed for these fields in any pod specification in Kubernetes; if you try to set it to true, it fails immediately. And that's just one control name in the baseline profile; there are multiple control names across different restricted fields. That's why it's so rigid: you cannot make any changes to the existing standards, and that's how it's implemented in Kubernetes as well. But with Kyverno, as we mentioned, we can make it much more fine-grained. So let's have a look at what a Kyverno policy looks like in practice. Here's the policy for checking labels that we've already seen, and let me demonstrate creating a pod that does not have the specified label. All right. If I create this policy — and let me also change the label name here — we can see the cluster policy is created. And if I try to run an nginx container, let's see what it says. It's created. Let me check whether this is an audit policy or an enforce policy... it's an audit policy, so I must have got an event for this. It's getting stuck — the system is starting to cry for resources again. `kubectl get cpol`.
If I get the cluster policies now, it says we have these cluster policies. And if I get the cluster policy report resources — sorry — we can see that there are two failed resources here. And if I output this as YAML, we can also see that these two failing resources reference the failed policy. So that's an indication that our policy has worked, but it's only in audit mode right now, which is why it has taken no action. Let's also see what would happen if it were enforced. Let me create another nginx pod... it has simply stopped the creation of this pod, and it shows the error that the admission webhook has denied the request, which means your pod could not make its way into the cluster. Now let's look at the pod security rule we talked about earlier. In its current state, if we leave out the commented section, it works pretty much the same as the baseline profile does. Here we have the podSecurity rule, in which we have specified the level of pod security, baseline, and the version, latest. If we apply this cluster policy — oh, it's cpol-2, my bad — we have created it, and it's applied only to the baseline namespace, sorry. And if I try to create the pod as before — let me check if it has privileged... let's set it to true... and OK, let me first delete the existing pod. I believe I'm taking a bit more than my expected time. But now, as you can see, the pod gets restricted by the baseline profile. And if I remove the label itself and then try to apply the policy, Kyverno is now the one making the decision that this pod is not allowed in the cluster. So that's basically the demonstration of the pod security rule in Kyverno. There are a lot more detailed policies out there on the Kyverno website itself.
If you want to have a look at the policies, just go to the policies section on the Kyverno website. We have tons of different policies uploaded there, all created by the Kyverno community — 237 different policies at the moment, a number that is always increasing, by the way. Now let's see what we have remaining... I'm pretty sure there's nothing else left in the demonstration. So this was pretty much the final point of my presentation. Thank you, arigato, and thank you, everybody.

Okay, thank you very much. Are there any questions?

Thank you for a really interesting presentation. Hi, my name is Kentaro; I'm from Japan. Actually, pod security policy is, I think, a really big problem. At my previous company, where I was a developer, pod security policies were deprecated and then removed. So my main question is: which is the best migration path — Kyverno, Open Policy Agent, or one of the many cloud-native pod security services? Today I got really interested in Kyverno; I'm not hands-on yet, but I would like to try it. So what do you think — Kyverno, OPA, or other open source tools — which is the good one, and how do you weigh the benefits, if you know?

That's actually a really great question, and there have been a lot of debates in the past — even ongoing debates — about the most appropriate policy engine to use. Kyverno is definitely a lot easier to use compared to OPA, because nobody wants to learn something new just to apply what they already know; they just want to implement it in a way that Kubernetes itself accepts. Kyverno just utilizes the webhooks we talked about. As a developer, I'm not entirely sure how OPA is implemented.
So I cannot speak to the internal details or the features present there. But in the case of Kyverno, I'm pretty sure that, compared to other policy engines, it's the only one we have right now that lets us create policies in a purely Kubernetes-native fashion. And there are also increasing use cases for Kyverno as the community demands them. As I mentioned, the image verification rule is growing too, supported with the help of Cosign, from the Sigstore project. We use Cosign in Kyverno to perform the image verification as well. As for the pod security part — I believe that was your question — we are definitely still working on making it more production-ready in Kyverno. It was only recently released, and I believe it may need some community contributions as well; I'm not entirely sure, because I'm not on the Kyverno development team right now. But as for Pod Security Standards and Pod Security Policies themselves: if you want to get started implementing pod security policies in your clusters without learning redundant new things — you just want to apply what you already know — then Kyverno is definitely the option to go with. OPA and other engines have their own features, pros and cons. On this debate, we already have an article — this article was actually written by me — and it references a specific talk by one of our co-maintainers, Chip Zoller, together with Shuting Zhao, who is one of the co-creators of Kyverno and a current maintainer. That talk is actually mentioned in the slide itself.
In that talk, they cover pretty much all the differences between Kyverno and the Pod Security Policies, and the Pod Security Admission controller as well. There are also various comparisons on the Nirmata website itself, and on Chip's website, Neonmirrors. So you can find all the different comparisons made by people more qualified than I am to make them. Yeah.

Thank you very much. Any other comments? No? Okay, one from me: could you say anything about how much performance degradation there is when using Kyverno? For example, if there are many policies in the table, it seems it could take a long time to look them up.

Yeah, that's again a really good question, and there was one recent change in Kyverno made by one of our maintainers. Earlier, we were keeping track of policy results in just one custom resource — the PolicyReport, or the ClusterPolicyReport — and until recently we were not splitting them in any fashion. Now we have changed how Kyverno policy reports are implemented: we split them, giving people the ability to have policy reports per namespace. So you can have different policy reports for different namespaces, instead of one single report containing the details of all the admissions from previous requests. That's one optimization we have made. As for performance more generally, there were some issues, but most of them have been resolved in recent pull requests. I cannot give an exact benchmark, as I don't have any graphs to show you right now.
But yes, there have been improvements in the recent pull requests made on the Kyverno project, at least from what I've observed. Yeah.

Thank you very much. Any other questions? Okay, so we have covered everything on the agenda; let's end it here. I would like to express my appreciation to all the speakers and participants. Thank you very much.

Thank you so much for having me.