All right, cool. Thank you everyone for joining me today. As you know, the pod is the smallest unit of execution in Kubernetes, and pod security is probably the first thing you want to address during your deployment. So today we're going to be talking about pod security and how to use Kyverno to manage it. Just a quick introduction on myself: my name is Shuting Zhao. I'm a Kyverno maintainer, I've been contributing to the Kubernetes Policy working group as well as the Multi-Tenancy working group, and I'm currently working at Nirmata. All right, so pod security. It is important, right? Kubernetes has introduced the Pod Security Standards to help you configure the security context in your pod manifests. The Pod Security Standards are defined at three different levels. The first one is the privileged level, which is the most unrestricted policy; it provides the widest possible level of permissions. Then there is the baseline, or default, policy, which is minimally restrictive while preventing known privilege escalations. And we also have the restricted policy, which follows current pod hardening best practices. So what are the existing built-in solutions in Kubernetes? As of now, Kubernetes has introduced Pod Security Admission in Kubernetes 1.22, offered as a built-in admission controller, and it is currently an alpha feature. The other built-in solution is Pod Security Policy. As you know, it was in beta for a long time, it has been deprecated, and I believe it will be removed in Kubernetes 1.25. There are a lot of blog posts and discussions around PSP and why it's been deprecated; if you're interested in learning more about that, I have attached a few links here for you to visit later. So now, let's take a close look at Pod Security Admission.
PSA is good. It is a built-in Kubernetes validating admission controller, and it will also help you migrate from your existing Pod Security Policies. But there are always trade-offs, right? Since it's currently an alpha feature, enabling any alpha feature in Kubernetes requires access to the API server, which is not available in most managed Kubernetes services like AKS, EKS, or GKE. PSA can be enabled in three different modes: audit, warn, and enforce. If you have PSA configured in audit mode, the reporting ability is limited to audit logs, and in order to enable that, you have to configure the audit policy on the API server, which, again, requires access to the Kubernetes control plane components. If you have PSA enabled in enforce mode, the enforcement is at the pod level, not at the pod controller level. Let's say someone is trying to create a deployment in the Kubernetes cluster: the creation of that deployment will actually be allowed, but the pods will not be scheduled into your cluster, and it may take a few extra steps for developers to troubleshoot why the pods aren't being scheduled. All three modes are enabled at the namespace level, which means there is no support for fine-grained exemptions. For example, there is no exception you can make to allow a particular pod to be scheduled into a namespace if you have PSA configured on that namespace. And since PSA runs as a validating admission controller, there is no ability to mutate resources to set default values for the security context of your workloads and pods. PSA is good; it is encouraged to start with PSA for secure defaults. And it is also recommended to use a policy engine, like Kyverno, for fine-grained security configurations and enforcement of additional workload and configuration security.
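For reference, PSA's three modes are driven by per-namespace labels; here is a minimal sketch (the label keys are as documented upstream, the namespace name and level values are examples):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    # Reject pods that violate the baseline level
    pod-security.kubernetes.io/enforce: baseline
    # Warn the client and audit-log violations of the stricter restricted level
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Because these are just namespace labels, each mode can be pinned to a different level, but note that they still apply to the whole namespace, not to individual workloads.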
That is, for centralized reporting, enforcing best practices, and automating defaults. So why use Kyverno? There are plenty of policy engines out there; why would you choose Kyverno? To answer that question, we first have to understand the goals of the Kyverno project and what it currently offers. Kyverno is a Kubernetes-native policy engine which makes Kubernetes policies easy to write and manage, and makes policy results easy to process. In Kyverno, we support validate policies in audit and enforce mode. We have mutate policies to add default values for your pod security context. And then we have generate policies to create default resources upon triggers. Kyverno supports all types of Kubernetes resources, including custom resources, and it uses familiar Kubernetes patterns and practices. With all of that supported, and by being a Kubernetes-native policy engine, Kyverno really simplifies Kubernetes policy management. All right, if you look at a Kyverno policy: within a policy, you can define one or multiple rules, and within a rule, you can have match and exclude blocks to select the target resources, based on labels, namespaces, namespace selectors, user info, even the pod images or any attribute inside the pod. In the body of a Kyverno rule, you can specify any of these four types to mutate, validate, or generate resources, as well as verify the images used in your workloads and pods. Kyverno provides a sample policy library out of the box which covers the Pod Security Standards from the baseline to the restricted level. So if you look at this screenshot, it is a Kyverno validate policy which enforces that the security context of your pod must have runAsNonRoot set to true, at the pod level, in containers, as well as in init containers.
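The policy on the slide looks roughly like this. This is a simplified sketch in the spirit of the library's require-run-as-non-root policy, checking only regular containers; the published sample also covers the pod-level and init-container security contexts:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: enforce
  rules:
    - name: check-containers
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Containers must set runAsNonRoot to true."
        pattern:
          spec:
            containers:
              # =() is Kyverno's conditional anchor: if securityContext is
              # present, its runAsNonRoot field must be true
              - =(securityContext):
                  =(runAsNonRoot): true
```

The match block selects the target kind, and the validate pattern describes the shape the resource must conform to.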
And these PSS policies can be easily installed into your Kubernetes cluster with a single command using kustomize and kubectl. So next I'm going to jump directly to the demo to show you how to install those PSS policies. All right, let's look at the command first. It is a kustomize command: it pulls from the remote GitHub repo, and then all the resource manifests are applied using kubectl apply, and the policies are installed. So let me quickly switch to my terminal. I hope the font size looks good; I can make it larger real quick. All right, before we do that, I'm going to show you the setup for today. I have a simple Docker Desktop cluster running; if I do kubectl get node, you can see there is a single-node cluster there. And I already have Kyverno running in my cluster. If you haven't installed it, that can easily be done with a single kubectl command, or you can use a Helm chart to install Kyverno. With all that set up in the cluster, let's run the command: kustomize build, pulling from the GitHub repo. All right, here's the command. Once I run it, it pulls from the Kyverno sample policy library repo, and you can see a bunch of cluster policies being created in the Kubernetes cluster. Since a Kyverno policy is just a Kubernetes custom resource, you can easily list them using kubectl get clusterpolicy, and you'll see all the policies are installed and currently in the Ready state. So let me go back to the slides, and let's talk about how we can use those PSS policies to manage the pod security context. A Kyverno validate policy can be configured in enforce or audit mode. When you have a validate policy running in enforce mode, violations will cause the resource to be rejected. You can also configure a validate policy in audit mode, which is more of a soft limit.
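The install step from the demo looks roughly like this; the exact repository path is an assumption on my part, so check the Kyverno policies repo for the current layout:

```
# Build the PSS policy manifests from the remote repo and apply them
kustomize build https://github.com/kyverno/policies/pod-security | kubectl apply -f -

# Verify the ClusterPolicies were created and report Ready
kubectl get clusterpolicy
```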
That means when you have any policy violation, the violations will be logged in the policy reports and recorded as events, but otherwise the resource creation is allowed. An awesome thing supported by Kyverno is the auto-gen ability: Kyverno automatically generates policy rules for pod controllers when you have rules written for pods. That means you can simply define a single rule for pods, and Kyverno will understand that and convert the pod rules to cover the pod controllers. This feature is enabled by default, and it can be configured with an annotation to match particular pod controllers, or even turned off. Right, so the next demo I'm going to show you today is how a Kyverno enforce policy works in a live cluster. What I'm going to do is create a workload resource, a deployment, and with all those PSS policies installed, I would expect Kyverno to reject the resource creation, reject the deployment creation, for me. Okay, before we do that, let's inspect one of the cluster policies. Let's do kubectl get clusterpolicy and look at this require-run-as-non-root policy. Actually, before I do that: this is the sample policy available from the GitHub repo. You can see that I have a Kyverno validate rule which matches on pods, and there I validate that the security context runAsNonRoot has to be set to true. If we install that into the Kubernetes cluster and then inspect the policy — let me just grab the manifest — you will see there are three rules in total. Remember, in this static manifest I only have a single rule, right? But once you install it, Kyverno converts the pod rule to cover the pod controllers, so you can see additional rules being generated. This one matches CronJobs, and we also have another rule matching Deployments, DaemonSets, Jobs, StatefulSets, all those pod controllers.
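The auto-gen behavior mentioned above is controlled per policy through an annotation; a minimal sketch (the controller list shown is just an example):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
  annotations:
    # Restrict rule generation to these pod controllers;
    # set the value to "none" to disable auto-gen entirely
    pod-policies.kyverno.io/autogen-controllers: "Deployment,StatefulSet"
```

Without the annotation, Kyverno generates rules for the standard set of pod controllers by default.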
So let's go ahead and create the workload, a deployment in this case. Let's do kubectl create deployment demo-nginx --image, and we're going to use the nginx image. With this command, as you can see, I have no security context, and in particular I don't have runAsNonRoot set in my deployment. So if I run that command, you can see my resource creation is rejected by the Kyverno validating webhook, and you can see these are the policies that are actually rejecting the resource creation. Right, this is good: the developers don't need to troubleshoot or dig for traces of why a pod isn't being scheduled into the cluster. All right, so this is how a Kyverno enforce policy works in the cluster and helps you enforce your pod controllers. And then, what about your existing resources that are already deployed in your cluster? With Kyverno policies, we can also scan all your existing resources and audit all the policy results in a policy report. By the way, Kyverno uses the PolicyReport CRD, which is provided by the Kubernetes Policy working group. You can use Kyverno to generate in-cluster policy reports, or, if you don't want to install Kyverno into your cluster, you can also get the reports from the Kyverno CLI. I've attached a few attributes from the policy results here: you can see we have the policy name, the resource information, the rule, and the final result recorded in the policy report. So let's go back to the cluster. Before I installed all the policies, I had a single pod running in my cluster; it's just an nginx pod. Since, again, PolicyReport is another CRD, we can get the policy reports using kubectl get policyreport, and you can see the summary of the report. This is the report generated for the default namespace, and there are 34 rules passed and two failures, right?
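A report entry carrying those attributes looks roughly like this; the field names follow the wg-policy PolicyReport CRD, while the report name and values are illustrative:

```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: polr-ns-default
  namespace: default
summary:
  pass: 34
  fail: 2
results:
  # One entry per rule evaluated against a resource
  - policy: require-run-as-non-root
    rule: check-containers
    result: fail
    resources:
      - apiVersion: v1
        kind: Pod
        name: nginx
        namespace: default
```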
So let's inspect the policy report and see what the exact failures are. Let's do kubectl describe polr — by the way, polr is just the short name for policyreport — give it the name, and grep for the failed status. If I do that, you can see there are two violations on the pod nginx inside the default namespace, and here is the policy that the resource is violating, right? Again, this is another policy that is enforcing the pod configuration. All right, that is the kubectl way to view the policy report. But we also provide Policy Reporter as another tool, which helps you view the policy results generated from Kyverno, Falco, kube-bench, or Trivy. And Policy Reporter can also be configured to send notifications to different targets like Loki, Elasticsearch, Teams, Discord, Slack, and so on. Here is the UI of Policy Reporter, which gives you a centralized view of all the policy violations. We also have a Grafana dashboard: if you configure that in your cluster, you will see the score of your cluster. Currently, you can see this cluster got a score of one hundred since there are no violations, but if there are any, they'll be reported in this Grafana dashboard. All right, so far we've seen how Kyverno enforces your incoming admission requests, and how Kyverno generates reports against your existing resources. But how do we remediate the policy violations, right? The simplest way is just to modify the pod manifests to comply with the policies you have installed and configured in the cluster. But think of it: if you have hundreds of applications you want to manage as a cluster admin, that is not an easy job, right? So in Kyverno, we provide mutate policies, which help you set default security context values, or any other values, inside the pod configuration, or any other Kubernetes resource manifest, in a flexible manner.
So you can select resources based on labels, selectors, annotations, any attribute inside the resource manifest, and inject the default values, right? This example policy is a mutate policy which operates on the resource type Pod, and I'm saying that I want to inject securityContext.privileged set to false. Okay, so let's go back to the cluster and see how the Kyverno policy injects the defaults. By the way, auto-gen is also supported for mutate policies, which means if I have my rule defined on pods, Kyverno will convert that and mutate the default values for pod controllers as well. All right, before we do that, let's look at the policy again. Here I have a single rule which applies to Kubernetes pods, and I want to set securityContext.privileged to false. What I'm going to do next is create a deployment, which is a workload in Kubernetes, and I would expect that after the mutate policy has been applied to the deployment, it will have this privileged security context set to false. Right, let me pull up the terminal and install the mutate policy: kubectl create -f mutate-privileged. All right, before I create the deployment, let's look at the policies again. Since I installed all the PSS policies in enforce mode, they would block the resource creation. So in this case, I'm going to first patch all the validationFailureAction fields to audit, to let my resource creation pass through. So let's do kustomize build on the pod-security policies and kubectl apply. All right, let's get the cluster policies again, and you can see all the failure actions are now patched to audit, which means my deployment creation will be allowed. Right, then let's go ahead and do kubectl create deployment nginx with the image nginx:latest.
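The mutate policy from this demo looks roughly like this. Treat it as a sketch: the policy and rule names are mine, and the published sample may differ in detail:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: mutate-privileged
spec:
  rules:
    - name: set-privileged-false
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              # (name): "*" anchors the patch to every container;
              # +(privileged) adds the field only if it is not already set
              - (name): "*"
                securityContext:
                  +(privileged): false
```

The add-if-not-present anchor is what makes this a safe default: a workload that explicitly sets privileged keeps its own value.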
Once my deployment is created, let's do kubectl get deployment nginx and grep for the containers object. You can see that the security context privileged has been set to false successfully. This is how a Kyverno mutate policy injects defaults into your incoming admission requests. All right, so we've talked about how to manage pod security and how to automate the defaults using Kyverno policies, but there is more beyond pod security. It's not just the pod configuration that needs to be secured; other configurations need it too, right? Some examples for today: workload identity, where you would want to guarantee that a single service account can only schedule the same application once into your cluster; fine-grained RBAC; and service configurations. There is one vulnerability that has been discovered in Kubernetes where the API server allows an attacker who has the ability to create services to set the spec.externalIPs field and intercept the traffic to that particular IP address, right? Similarly, you would also want to secure the Ingress configuration in a similar manner. The images that are used in your applications and pods also need to be verified for signatures and attestations to prevent supply chain attacks. And best-practice configurations should always be standardized in your Kubernetes cluster. All right, to summarize, a few takeaways from today's session: it's good to start with the built-in security configurations, like PSA, in Kubernetes. You will also need a policy engine for additional checks, configurations, and automation. And there, Kyverno provides a Kubernetes-native experience to help you enforce best practices on pods as well as other configurations.
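For the supply chain point, Kyverno's image verification rule type can be sketched like this; the image pattern and the key are placeholders, not real values:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-signature
spec:
  validationFailureAction: enforce
  rules:
    - name: verify-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        # Only images matching this (hypothetical) registry path are checked
        - image: "ghcr.io/myorg/*"
          # Public key corresponding to the key the images were signed with
          key: |-
            -----BEGIN PUBLIC KEY-----
            ...
            -----END PUBLIC KEY-----
```

With a rule like this in place, pods referencing matching images are admitted only if the signature verifies against the supplied key.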
If you like Kyverno, if you're interested in any of the features we've discussed today, or if you have any further discussions you want to continue with us, please feel free to visit us at the Kyverno booth. And you're welcome to join our office hour, which is scheduled for this Thursday at 4:30 p.m. If you're in the online audience, feel free to join the Kyverno community. We have the documentation and the sample policies for today available on the official Kyverno website. We also maintain Kyverno Slack channels in both the Kubernetes workspace and the CNCF workspace. We hold monthly community meetings, and we meet weekly to discuss issues and feature requests from contributors. All right, thank you everybody. I'm on Twitter, LinkedIn, Slack, anywhere; if you're interested, I'd be happy to connect. Thank you. Does anyone have any questions?