Hello everyone, welcome to our session. The session today is called "Kyverno in production: a deep dive". We will be two co-presenters on stage today, so let us introduce ourselves briefly, then we can look at the session agenda and start the talk.

My name is Charlie. I've been a Kyverno maintainer for a year and a half now. I recently joined Nirmata, the company that created and open-sourced Kyverno, so since then working on Kyverno has been my full-time job. You can contact me on GitHub or LinkedIn. Apologies, the setup is a little inconvenient; we will get more fluent as we go.

My name is Tinhong Banh-Hot. I'm the chief cloud architect and product owner at Saxo Bank. I'm also one of the organizers of the Cloud Native Copenhagen meetup group, so if you have a good talk, please contact me on LinkedIn; that's also my handle on GitHub.

For today's session, we have a packed agenda for you. In the beginning, Charlie will give you a great intro on Kyverno and what you can use it for. Afterwards, I will talk about the use cases we have at Saxo Bank and at VLOG. And we end with Charlie bringing you some of the advanced features coming to Kyverno in the coming weeks. Thank you.

So we're going to start with a Kyverno introduction. But first I'd like to recall why we need a policy engine. So why do we need a policy engine?
Fortunately, there are a lot of good answers to that question. The first good answer would be security: to ensure, for example, that containers don't run as root in a cluster, or to prevent improperly signed container images, things like this. Another good reason would be to enable collaboration and multi-tenancy inside the cluster; I think my co-presenter will talk about that in more detail later. Or it could be used to implement a sort of cost-control system: to prevent every service from allocating a new load balancer, or to validate CPU and memory requests, these kinds of things. And it can probably be used for plenty of other things too, because a growing company will want to validate and enforce different things, so a policy engine quickly becomes necessary at this point.

That's exactly what Kyverno is and does. Kyverno is an open-source, Kubernetes-native policy engine. It allows you to create, validate, and enforce policies for a cluster. The advantage of Kyverno is that it doesn't require any specific programming language to create policies, so it's usually considered very easy to use. All policies are completely declarative and created with standard YAML syntax. In this sense, they can easily be managed and version-controlled like any other code.
So this makes Kyverno a very Git-friendly tool in the end.

If we look at it a bit deeper, Kyverno currently supports five types of policies. We have validation and image verification policies, which can be used to validate and verify resource content. We have generation policies, which can be used to automatically create additional resources in a cluster when something happens. We have mutation policies, which can modify resources on the fly while they are being admitted by the API server. And since Kyverno 1.9 we have a new kind of policy, the cleanup policy, which can be used to periodically delete resources in a cluster based on certain conditions. So those are basically the five policy types Kyverno supports.

If we look a little deeper again, this is what the Kyverno architecture looks like. In the rectangle in the middle of the screen are the various Kyverno components, and at the top of the screen is the API server admission chain. We can see that the API server offers two extension points, the mutating and validating admission webhook phases. So the first job Kyverno has to do is to configure those webhooks based on the policies installed in the cluster. This is what the webhook controller does: constantly watching policies and configuring the webhooks. At that point, every admission request that matches the webhook configuration will be transmitted to the Kyverno webhook, which in turn will invoke the Kyverno engine and evaluate all relevant policies against the resource being admitted by the API server. Based on that, the webhook will be able to make a decision to accept or reject the request, and eventually return JSON patches if there is a mutation policy installed in the cluster. We can also observe two other components, the report and background controllers, but they are not directly related to the admission phase. The report controller maintains reports inside the cluster, while the background controller is responsible for generation policies and for mutating existing resources.

So before we look at a concrete demo, we can look at what a policy looks like in Kyverno. The first thing we can note is that it's very similar to any other Kubernetes resource. We have an apiVersion, which in this case is kyverno.io/v1, and a kind, ClusterPolicy, so the policy applies to the whole cluster and not just a specific namespace. After that we have a metadata section, like most Kubernetes resources, which has a name, but it could also have labels, annotations, owner references, and things like this. Then comes the spec, which is the policy itself. The validationFailureAction is set to Enforce, which means that a resource violating this policy will be rejected. We could have set it to Audit; then the resource would have been accepted, but the violation would be recorded in a report. background: true indicates that this policy applies to background scans. And finally come the rules, which are the logic of the policy: how we are going to validate, mutate, or process the incoming resource. Every rule is identified by a name; this one is require-team. It starts with a match statement, which indicates what this rule is interested in; in this case, this is a rule to validate Pods. Finally, the validate section contains the pattern that we are going to validate when admitting a Pod, and in this case we are going to require that metadata.labels.team is a non-empty string. So every Pod being submitted will need to have a team label, and this team label should contain something.

And we can illustrate that with a quick demo. So we have a cluster installed locally, and we deployed Kyverno in it. We can see that the Kyverno pods are running. If we look at the webhook configuration, currently there's no policy installed.
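The policy walked through above looks roughly like this. This is a sketch reconstructed from the spoken description; the policy name matches the rule name mentioned in the talk, and the message text is illustrative:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team
spec:
  # Reject violating resources instead of only recording them
  validationFailureAction: Enforce
  # Also evaluate existing resources during background scans
  background: true
  rules:
    - name: require-team
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label `team` is required."
        pattern:
          metadata:
            labels:
              # "?*" means: the value must be a non-empty string
              team: "?*"
```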
So Kyverno created the webhook configuration, but for now it's still empty. As soon as we create a policy in the cluster, the webhook controller we talked about earlier should pick that up and configure the webhook. That's exactly what happened right there: we are going to start receiving admission reviews for ReplicationControllers, Pods, DaemonSets, Deployments, and so on. From now on, if we try to create a Pod without the team label, the Kyverno engine should catch that and reject the request because of the policy we just created above. If I try this... okay, I can observe that Kyverno actually rejected the Pod creation. We have the error message we put in the policy, and the Pod is not created in the cluster. To get this Pod created correctly, we can do the same thing, but this time with the team label; the team label is just set to my-team. And with this one, everything should pass. Fine, okay. That's the difference, and that's the way a policy can validate a specific part of a resource, and of course it works with every resource.

So that's almost it for the introduction. The last thing I wanted to note is that Kyverno is more than just an admission webhook. We looked at the admission part, but it also generates reports in the background for existing resources, not just the resources that were created after a policy was installed. It creates events in the cluster corresponding to detected violations. It can run offline with the help of the Kyverno CLI, which makes it a perfect candidate to evaluate resources against policies in CI pipelines. You can visualize violations in real time with an optional component, Policy Reporter. And there's a large policy catalog available to accelerate Kyverno adoption, but we'll talk about that later.

Yes, later is now. Just a quick one: how many of you are running Kubernetes in production? Please keep your hands up. Oh my god, I love it.
And keeping your hands up, how many of you run Kyverno in production? Great, then you will learn something today that might be useful. At Saxo Bank we run Kyverno and Kubernetes in production; the same goes for my previous employer, VLOG.

So why did we pick Kyverno out of all the other policy engines on the market? I've had people asking me: why didn't you pick Open Policy Agent? It has been around for a longer time, and it can also work outside the cluster. I've also had people asking me: how about VAP, the new kid on the block, the validating admission policies introduced in the Kubernetes 1.26 release?

Let's take the latter one first. (It makes an echo; let me reposition myself.) VAP only covers a subset of the features we desire from Kyverno, and as the name implies, it only validates, and we use Kyverno for more than that. As for Open Policy Agent: OPA is very powerful and you can use it for many things, but the use cases we have exceed it, or at least you need to jump through a couple of hoops to implement them using OPA. I will dive deeper into that later on.

But why Kyverno? Kyverno has many great features; Charlie already outlined a few earlier. For us, three of them are super important. First of all, there's no new language required: everything looks like everything else in Kubernetes. Second of all, it comes with an extensive policy library, so we don't need to write everything from scratch. The last one is the most important: it has strong community support. Last time I checked, there were over 1.3 billion downloads of Kyverno, so there's a huge user base out there. Because of the wide adoption and the strong community, we picked Kyverno as our security policy engine for Kubernetes. So, how do we do that?
First of all, we utilize the out-of-the-box policies. There are many policies now, written not only by maintainers like Charlie but also by everyone else in the community who are users of Kyverno. Second of all, we use it for policy process automation. Many enterprise companies, or any company at a certain size with a security mindset, will have, for example, private container registries. That requires image pull secrets to be distributed into different or all namespaces, and we don't want to do that manually ourselves; we need to automate, automate, automate. Kyverno can help us with things like that. Thirdly, additional security enforcement. On top of the base security and best-practices policies that come with Kyverno, you can write your own additional ones, for example utilizing the image signature verification feature of Kyverno. That's a very low-hanging fruit for adding another layer of super important security enforcement to your tech stack, and I'm going to show you how to do that.

Now I'm going to show you some of my favorite features of Kyverno. First of all, multi-tenancy. As we adopt GitOps, there are many things we need to do when we onboard a new team or new tenant in the GitOps world: for example, RBAC access for the service accounts of each team, or the resource quota for each namespace. All of those come as a bundle. Do you want to create them one by one? Probably not. Kyverno can be a perfect candidate for doing things like that. Of course, you could also define your own Helm chart and distribute it every time there's a new namespace.

Second of all, resource management. I mentioned before that you can use Kyverno to automatically create image pull secrets, but Kyverno can do much more than that.
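A generate rule for the pull-secret distribution described above might look like this. This is a sketch, not our production policy; the secret name and source namespace are placeholders:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-registry-pull-secret
spec:
  rules:
    - name: clone-pull-secret
      match:
        any:
          - resources:
              kinds:
                # Trigger whenever a new Namespace is created
                - Namespace
      generate:
        apiVersion: v1
        kind: Secret
        # Create the copy in the newly created namespace
        name: registry-credentials
        namespace: "{{request.object.metadata.name}}"
        # Keep the copy in sync if the source secret changes
        synchronize: true
        clone:
          # Placeholder source: the namespace holding the original secret
          namespace: default
          name: registry-credentials
```

The same pattern works for the onboarding bundle mentioned above: additional rules in the same policy can generate a ResourceQuota or RoleBinding per namespace.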
Kyverno has a mutation webhook, which means you can change a resource into something that doesn't quite look like the original: you keep some of the original properties and add some additional properties to it. One of our use cases came up when we were adopting a massive chart. Due to a version change, a secret generated by one component could not be directly fed into another component; the format had changed between versions. But this is a GitOps world, so we cannot make manual changes in the environment. So instead we used Kyverno to do the trick and fill the gap: to mutate the resource generated by component A and feed it into component B.

The third thing is resource validation. You can use Kyverno to validate many different kinds of resources: ConfigMaps, Deployments, Pods, various things. But for us the highest value comes from PodDisruptionBudget validation. I don't know how many of you have had a bad experience like us, where a misconfigured PodDisruptionBudget could hold a whole node from upgrading, because, for example, it sets minAvailable to one and there's only one replica. So what's it going to do?
It holds the node and doesn't want to let go. In those situations, it's much easier to just configure Kyverno to look at the deployment and say: you shall not pass, because you are misconfigured.

The last thing is image signature verification. Kyverno, if I say it correctly, uses Cosign behind the scenes for image signature verification; that's how the parameters are configured, which match Cosign for now. This is actually something I'm going to demo, so I spent quite some time preparing the environment. But before the demo, maybe it's easier if I talk about the setup first. On the left side, I use Cosign to generate a key and put it into Azure Key Vault. It doesn't have to be Azure Key Vault; it can be any secret store that Cosign supports, and you can also keep the key locally if you want. After that, I use that key to sign an image and push it into a container registry; it happens to be Azure Container Registry, as we are very much a Microsoft house. During today's demo, Kyverno is running inside Kubernetes, so hopefully it retrieves the key from Azure Key Vault to validate whether the image has the correct signature or not. If the image has it, it should be allowed to pass; otherwise, Kyverno should stop it. Now let me try; hopefully it works.

Cool. So first of all, maybe I should delete the existing cluster policy to clean up. I'm not very good at multitasking; I can't type and speak at the same time. By the way, CPOL is short for ClusterPolicy in Kyverno. There is a Policy and a ClusterPolicy, and the ClusterPolicy affects the whole cluster.

So first, let's quickly look at the setup. I have this policy; that's the name, and that's what we want it to do: it should look at an image reference, and if there's any image coming from this path, it should use this public key down here to validate it. This public key right now sits in... great, it's in my presentation, and I need to zoom out here. This is the key I try to read from. Then I have two images: one called bad-busy, which is obviously bad because it hasn't been signed, and the other called good-busy, which has been signed; you can see the image signature indicated by the SHA hash here.

So now let's go back. All I need to do now is apply this cluster policy. One thing to remember, though: because we are reading from a private Key Vault and a private container registry, during the Kyverno deployment you need to specify which image pull secrets it can use, and which credentials it can use to fetch the Cosign key.

So let's get to it. If I now apply it... let me just double check; yes, it's still connected to the high-speed mobile internet. Now I can see the cluster policy is in place, and you can see it's not running in the background and it has the Enforce action, so if anything's bad, it will stop it. Let's run bad-busy first. You can see it's coming from the registry which is being monitored. (The performance, by the way, will be tuned in the upcoming 1.10 release. I don't want to over-promise, but there will be a flag you can set in the Kyverno arguments to speed up the process by naming which cloud provider to use.) So you can see there's no matching signature for this bad-busy. Now I really hope that when I run good-busy, it passes. You can start thinking about the questions you're going to ask later, so it's not awkward silence. Yeah, it's great. I thought there would be a clap for things like that. Thank you, you guys are so lovely. Okay, it won't be long.
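The signature-verification policy from the demo above is roughly of this shape. This is a sketch, not the exact demo policy; the registry path is a placeholder, and the elided public key is whatever Cosign generated:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signature
spec:
  # Reject unsigned images at admission time
  validationFailureAction: Enforce
  background: false
  rules:
    - name: check-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            # Placeholder: the monitored registry path
            - "myregistry.azurecr.io/demo/*"
          attestors:
            - entries:
                - keys:
                    # The Cosign public key; can also be referenced
                    # from a secret or a cloud KMS instead of inlined
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```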
I promise. Now, three learnings I want to share with you before I hand over to Charlie. First of all, take advantage of the recommended policies. Don't reinvent the wheel; use them as a base and customize them to your organization's needs. It has saved us a lot of time. Second of all, start with the validation failure action Audit. If you start with Enforce, your developers are going to hate you, and you don't want that. Take one or two audit rounds, fix the warnings, document them and share them with other teams, and then enforce. Developer experience matters. The last thing is about migration: if you have OPA rules you want to migrate one-to-one into Kyverno, it might be a tough job, because you might have a lot of complex rules in place. Try to see it from Kyverno's perspective and how you can utilize it, because there are quite a few things Kyverno can do that OPA can't. From our own learnings: see it not just as a migration, but as a transformation, using a new, Kubernetes-native way to manage your security policy engine.

Okay, here is the bit about the toothbrush, and I promise it will be fun. This comes back to the question I mentioned before: some people disqualified Kyverno because it cannot run outside of Kubernetes. And I'm just thinking: everybody has a toothbrush, and every morning when you're standing in your bathroom, you look at your toothbrush, and then you look down at your toilet brush. Technically that is also a brush; would you use it for your teeth? So: do one thing, and do it well. Thank you.

Yeah, we have a little bit more. If you want to see advanced features, we can go quickly through the last part of the presentation, which covers not-yet-released features. Some of them should come in 1.10, which should be available approximately next week. And the last feature will probably start being implemented in 1.11, so it's not really 1.10, but it's a significant change.
So it makes sense to talk about it, because it's on our roadmap. The first coming feature is support for Notary v2, in addition to Cosign. Today we only support Cosign, and with 1.10 it will be the first time we support another technology for image signature verification. The difference is that Notary uses the new OCI artifact and referrers API, so it's very different from what Cosign does, and Cosign will probably follow at some point because it offers a lot of advantages. The downside is that it's not supported everywhere yet: even though it's part of the OCI standard, each registry has to implement it. Fortunately, ECR, ACR, and Docker Hub support it. And there's no keyless signing in Notary; keyless signing is only for Cosign. You can see the oras discover call here, and we can see that there's a new reference that points to a layer containing the signature.

In practice, it looks like this: you can create a policy which has the Notary v2 type, so it looks like Cosign policies; you have an image reference that you can use to target images. Let me show that better. So yes, the Notary v2 type is here, the image reference is in the same place it was before, and you have the same set of attestors, but this time you have to provide a certificate, because there's no keyless signing. I have a cluster running the latest main locally, so if we deploy such a policy in the cluster, we can now try to create a pod based on the image above. There's one unsigned tag in the registry, and this one should be rejected. Okay, and it was rejected, because we failed to verify the signature of the image. And we can do the same with a signed tag, and this time it should work.
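The Notary policy shown in the demo has roughly this shape. Since this is an unreleased feature, the exact field names may still change; the registry path and certificate here are placeholders:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-notary-signature
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-notary-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        # Select Notary v2 instead of the default Cosign verifier
        - type: Notary
          imageReferences:
            # Placeholder registry path
            - "myregistry.azurecr.io/demo/*"
          attestors:
            - entries:
                # No keyless signing in Notary: a certificate is required
                - certificates:
                    cert: |-
                      -----BEGIN CERTIFICATE-----
                      ...
                      -----END CERTIFICATE-----
```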
Okay, so it worked. So yes, it's still initial support, and for now we are supporting only signature verification, but we are working on supporting attestations too; that will be the next step.

Another feature coming in 1.10 is that Kyverno will be able to call services inside the cluster. It looks like this: you can now have an API call in the context. It supports GET and POST, it supports the HTTP and HTTPS protocols, the payload has to be JSON, and the response will be made available in the context. In this example, we are calling a service, and the response from the service is used in the conditions of a deny rule to accept or reject the request. We often meet people who ask: why don't you support a programming language in your policies? This is potentially an answer, because you can create your own service and call that service from a policy.

And the last one, very quickly: it won't be in 1.10, potentially in 1.11. It's about the validating admission policies that are in alpha in Kubernetes now. We have implemented validating admission policies in the CLI, so you can already use the CLI with them. There are a couple of challenges, because traditionally Kyverno has always been a webhook, so migrating to validating admission policies raises questions like: how do we create events when something happens? On the reporting side, it looks like we can do the same as the API server, so that's not a problem. But what if your rule needs to call a service or things like this? That's not currently possible with validating admission policies. We are convinced it will improve over time, so we are working on it. We don't have any release date yet, but I'm sure it will come.

And that's it, I think. We're at the end, so if you have questions...