Thank you for coming, and thank you for making the effort. I appreciate that you came to what is probably one of the last sessions of KubeCon Paris. Please feel free to stay for the party at 4 p.m. I hope you won't fall asleep during the presentation, and I hope I won't either. On this stage we are going to talk about using Kyverno to control and enforce policies in cloud-native projects. Raoul will start, Mariam will talk about recent additions to Kyverno, and I will talk about the future of Kyverno: what happened during the past year and how Kyverno might change in the next version. Raoul, thank you. Hello everyone, my name is Raoul Garcia. I work for DE-CIX, one of the largest internet exchanges in Germany. Before we start, I will introduce who DE-CIX is and show you what our environment looks like, before we look at how we implemented Kyverno and why. DE-CIX is one of the operators of IXPs (Internet Exchange Points), which facilitate the interconnection of networks in order to improve the efficiency of internet services globally. We offer many different services, such as peering and other interconnection services like CloudConnect, so we play an important role in the architecture of the internet for network providers, CDNs, and large enterprise networks; all these networks exchange traffic directly in order to improve latency, bandwidth, and the overall performance of the internet. On this map you can see where we are present and where you can interconnect with us. So what does our environment look like?
We have an application platform that provides centralized services for all our teams, such as monitoring, long-term storage, a logging stack, an OCI image registry, a secret store, a code repository, and so on. We also have teams that run their own Kubernetes clusters: they get a monitoring stack and a logging environment that we take care of, and we provide them with an ingress controller, with access to the secret store, and so on. At the same time, the teams should be able to deploy their own workloads, so we want them to create their deployments using standard Kubernetes techniques, but in a guided way; we do not want to leave them without guardrails. This is exactly where Kyverno comes in. Before we look at examples, I will quickly introduce Kyverno for those who have not worked with it yet. So, what is Kyverno? Kyverno is a policy engine that is natively integrated into Kubernetes. It provides a large set of functions that you can use to validate, mutate, generate, or clean up resources. It also has the ability to do image signature verification. Next to that, it has rich reporting functionality, so you get an overview, with a user interface, of which policies or which policy validations are passing and which of them are failing.
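As a minimal illustration of the validation style described above (a generic sketch, not one of DE-CIX's actual policies; the policy name and label are made up for this example), a Kyverno validation policy could look like this:

```yaml
# Illustrative Kyverno validation policy: require every Pod to carry a "team" label.
# Names and the label key are assumptions, not taken from the talk.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce   # block violating requests; use Audit to only report
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Every Pod must have a 'team' label."
        pattern:
          metadata:
            labels:
              team: "?*"   # wildcard: any non-empty value
```

The same declarative structure (match plus validate/mutate/generate) carries through all the use cases discussed in this talk.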
You can also expose Prometheus metrics, which are very useful if you want to see over the long term how your teams are behaving. And last but not least, there is a huge community around Kyverno; the project already has over 5,000 stars on GitHub, and there is a large collection of policy examples at kyverno.io/policies, where you basically find predefined policies for every use case. All right, let's have a look at what we are using it for. At DE-CIX, as I mentioned two slides before, we provide teams with managed Kubernetes clusters. We want them to deploy their stuff by themselves. However, we don't want them, for example, to deploy privileged containers, or to deploy services of type NodePort or LoadBalancer, so that they cannot expose any traffic to the internet by themselves in a way that is not regulated by our network team. Next to this, we also prevent them from modifying managed resources. Say we deploy resources such as the ingress controller to the clusters of the specific teams: we don't want them to be able to make modifications to the ingress controller, to change its image version, and so on. So we basically have a rule in place that prevents them from making changes to the managed namespaces that are provided by us. Next to this, we also have a validation policy in place that ensures that no one can deploy any blocking PodDisruptionBudgets. We have quite often seen the situation where a team deploys a Deployment with replicas set to one, and then someone deploys a PodDisruptionBudget with minAvailable also set to one. When a cluster update comes along, draining the node would be blocked, because minAvailable and replicas are equal, so the node could not be drained.
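To make the blocking scenario concrete, here is a sketch of the problematic combination (names and image are illustrative, not from the talk): a Deployment with a single replica and a PodDisruptionBudget whose minAvailable equals that replica count, which leaves zero allowed disruptions and causes evictions during a node drain to be rejected.

```yaml
# Illustrative conflicting pair: replicas == minAvailable means zero voluntary
# disruptions are allowed, so a node drain hosting this pod hangs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo
          image: nginx      # hypothetical image
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-app-pdb
spec:
  minAvailable: 1           # equal to the Deployment's replicas: drain is blocked
  selector:
    matchLabels:
      app: demo-app
```

A validation policy can reject the second manifest at admission time, which is exactly the protection described above.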
In order to prevent this, we created a Kyverno policy, and I will show you an example of it on the next slide. What else do we have? We also have mutation policies in place: we add a node selector to every pod that a team deploys, so that it ends up on the correct node pool. Also on the mutate side, we disable the automatic mounting of the service account token (automountServiceAccountToken) for the default service account; if a team wants to make calls from their pod against the API server, we create a dedicated service account for that specific service. And last but not least, we use Kyverno to generate RoleBindings for user-created namespaces. In our environment, users only have permission to perform operations on namespaces that they have created themselves, and we use Kyverno to create the necessary RoleBindings so that only the team members of the specific cluster get access to the namespaces they create. In every other namespace, they can read, but they cannot make any modifications, and this is all handled by the RoleBindings that are automatically created by Kyverno.
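A mutation rule like the node selector one mentioned above could be sketched as follows (the pool label and values are assumptions for illustration, not DE-CIX's actual configuration):

```yaml
# Illustrative Kyverno mutate policy: inject a nodeSelector into every incoming
# Pod so it lands on a designated node pool. Label key/value are hypothetical.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-team-node-selector
spec:
  rules:
    - name: set-node-selector
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          spec:
            nodeSelector:
              pool: team-workloads   # hypothetical node pool label
```

A similar patchStrategicMerge rule could set automountServiceAccountToken to false, covering the other mutation use case described in this section.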
All right, let me quickly show you an example. What we see here on the slide is one of our rules, the one blocking PodDisruptionBudgets. Let me just make this bigger. What we see here on line 58 is that we run this rule whenever a deployment scale command is sent to the cluster. So whenever you run, for example, kubectl scale deployment with replicas set to 2, this rule checks whether a Deployment exists for this scale command, which you see for example here on line 62. If it exists, we check the replicas value defined in this Deployment, and then we make another call with this JSON match expression to obtain the minAvailable definition of the PodDisruptionBudget that is linked to this Deployment. Once we have these two numbers, we know the minAvailable value and the replicas value of the Deployment, and then we compare them to each other. If the replicas in the Deployment are equal to minAvailable, this policy blocks the operation, and the same happens if replicas would become less than minAvailable in the PodDisruptionBudget, so that no user can create a blocking PodDisruptionBudget. This is only a short overview of what this policy is doing; you can see there are some more rules defined. This is basically one of our use cases; we currently have about 30 rules in place in every cluster, which we use to prevent the users from doing anything that we do not allow on the platform. With that I'm done and hand over to Mariam. Thank you. Hi everyone, I am Mariam Fahmy, a Kyverno maintainer and a software engineer at Nirmata. Today I'm going to talk about Kubernetes validating admission
policies and how Kyverno provides a full policy management experience for validating admission policies. Before digging into Kubernetes ValidatingAdmissionPolicies, I would like to explain first what problems validating admission policies solve and why they even exist. Before validating admission policies, custom policies were enforced through admission webhooks. Typically, when you create a new Deployment, an API request is sent to the API server, and this API request gets processed through different phases of execution. The first stage, of course, is authentication and authorization, where you determine who sent the API request and whether they are allowed to perform that action. The second phase is the most interesting part here: the admission controllers. Different built-in admission controllers are configured in the API server; some of them run directly in the API server, whereas some can run as an external system. This external system is often used when you want to perform complex validations or mutations, and in this case you use Kyverno, which is responsible for receiving the API request from the API server and then responding to that request with either allow or deny. As you can see, this can cause some delay or latency in the Kubernetes cluster, and of course you also carry the responsibility of making release plans for this admission controller. For these reasons, validating admission policies come in. Validating admission policies provide an alternative to admission webhooks, specifically for the validating admission use cases. With validating admission policies, you write validation expressions in CEL (Common Expression Language), and these validation expressions are evaluated directly in the API server. Since they are evaluated directly in the API server, there is no need for external admission controllers at all, and
hence validating admission policies reduce the latency that comes from the round trip between the API server and the admission controller. They also reduce the complexity of writing and maintaining an admission controller. With that said, let's take a quick look at how to write a ValidatingAdmissionPolicy. First of all, you write a policy definition where you specify the resources that you want the policy to apply to. For the sake of this example, in the left manifest you see a policy definition that matches Deployments. You also specify validation expressions; these CEL expressions are checked against the incoming resource. In this example, the expression checks the Deployment replicas and makes sure that any incoming Deployment has at most 5 replicas. This CEL expression evaluates to either true or false. If it evaluates to true, the API server will allow the resource, because it does not violate the policy. Otherwise, if the resource violates the policy, the CEL expression evaluates to false and the API server has to take an action regarding the violation, and this is where the ValidatingAdmissionPolicyBinding becomes important. As you see in the example in the right manifest, you can use the binding to specify the validation action that the API server will take in case the resource violates the policy; in this example it is set to Deny, which means that the API server will block any resource that violates the policy. Another thing is that the binding provides scoping for the resources that are mentioned in the policy definition itself. What does that mean? As you see in the right manifest, the binding specifies a matchResources field, stating that it will be applied to all
resources whose namespace labels include environment as a key. This means that, together, the policy and the binding will be applied to Deployments, but not all Deployments: only the Deployments whose namespace has the environment label. As I said, ValidatingAdmissionPolicies are used to write policy definitions and apply them to resources, but they lack some important tools that provide a full policy management experience to users, and that's where Kyverno comes in. Kyverno is a batteries-included experience when it comes to managing ValidatingAdmissionPolicies. One of the most important tools provided by Kyverno is the Kyverno CLI. The CLI can be used to apply ValidatingAdmissionPolicies to resources locally or against clusters, which makes it easier to simulate policy enforcement without altering the cluster state. Of course, it also makes it easier for policy authors to make adjustments to a policy before deployment and to make sure the policy achieves the desired outcome. Users can also use the Kyverno CLI to validate resources in CI/CD pipelines, and there are many more use cases where you can benefit from the Kyverno CLI. I prepared a quick demo on applying ValidatingAdmissionPolicies to resources using the Kyverno Playground, so that you can see how to run ValidatingAdmissionPolicies without actually deploying them in the cluster. First of all, on the left side we have the policy definition; it is the same example as the manifest I showed in the slides. This ValidatingAdmissionPolicy matches Deployments and checks that the Deployment replicas are less than or equal to 5. We also have the ValidatingAdmissionPolicyBinding; as you see here, the binding is used to select Deployments with the app label. So again, this policy will not be applied
to all Deployments; it will only be applied to the Deployments that have the app label, and the validation action is set to Deny, which means the resource will be blocked if it violates the policy. OK, with that said, on the right side you see the resource manifests. We have two Deployments: the first one is called busybox-deployment, it has 4 replicas, and as you see it has the app label; the second one is nginx-deployment-1, it also has the app label but with a different value, and it again has 4 replicas. So both Deployments violate the policy, but the policy will only be applied to one of them; the other resource will be skipped because it doesn't match the policy in the first place. If we run it, you will see that only the nginx Deployments that match the policy fail, and the others pass. There is also a third Deployment with 2 replicas that I forgot about; anyway, here it is. I provided a link to this example in the slides, so you can click on the link and experiment with the resources further, and if you have any questions please reach out to me. It is just a basic example; if you want more examples, I am glad to help with any specific use case. OK, the last thing I want to talk about is the reporting system. In addition to the Kyverno CLI, Kyverno provides a reporting system that generates policy reports as a result of applying ValidatingAdmissionPolicies to resources. These reports provide valuable insights into the compliance state of the cluster and the state of the policies in the cluster. Teams can analyze these reports, identify policy violations, and then take appropriate
actions to rectify them. You can also have existing resources and then deploy the policy afterwards: you want to apply this new ValidatingAdmissionPolicy to the existing resources, and this is extremely beneficial when you want to audit what already exists. Here is part of the manifest of a policy report: it shows the result of applying the ValidatingAdmissionPolicy, called check-deployment-replicas, to a Deployment in the staging namespace, and the result is fail, as you see. Why does it fail? It violates the replicas constraint: the replicas should be less than or equal to 5, but it seems this resource has a higher number of replicas. That's it; if you have any questions, I will be glad to help. Next we will talk about Kyverno JSON, and Charles-Edouard will take the lead in this part. Thank you. OK, thanks Mariam. My name is Charles-Edouard Brétéché, and I'm working at Nirmata as a staff engineer, in the same team as Mariam. I started working at Nirmata two years ago. I did a lot of contributions to Kyverno during the first year, and over the past year I spent most of my time doing research and innovation, to see how we could do the same thing as Kyverno but outside of Kubernetes clusters, basically. So we are going to talk about that, and that's what the "to infinity and beyond" is about: how do we take what works well with Kyverno in Kubernetes clusters and make it generic enough to apply to technologies other than Kubernetes? To start with, we studied the strengths and weaknesses of Kyverno, because there is something important to realize: Kyverno is a Kubernetes-native project. Being a Kubernetes-native project is great, because it works well with the cloud-native ecosystem; it was built for Kubernetes, so it is extremely easy to use Kyverno in a Kubernetes cluster. But when you try to use the same tool outside of Kubernetes, what was an advantage at the beginning actually becomes a problem. So we
had to first ask the question: what are the strengths and what are the weaknesses of Kyverno? For those who know Kyverno this will be obvious, and for those who don't, these are good reasons to adopt it. Kyverno is simple: it doesn't require any coding language, so it's really easy to adopt and start with. It is Kubernetes native, which is great if you are running Kubernetes, but it makes it not portable. With those attributes, Kyverno of course grew quickly and became widely adopted. It evolved fast, but it also suffered a lot from the Kubernetes influence: we added features to Kyverno, but we were always focused on making those features work well in Kubernetes clusters. So in the end, the more features we added, the more coupled with Kubernetes we became, and the more difficult it was going to be to move outside of Kubernetes. To do that, we wanted to do something different; but different yet the same, and the next slide will say the same but different, which is kind of similar. Basically, we would like to keep what we love in our tool while making it more modern and better adapted to today's needs. For example, most of our users wanted to be able to use Kyverno policies with infrastructure as code: they wanted to write policies for their Terraform code, and they wanted to write those policies the same way they do for Kyverno, with just basic declarative YAML definitions. That's what we wanted to do. For that, we needed to escape the Kubernetes jail Kyverno was locked in, while keeping the simplicity and the no-coding requirement; we wanted to erase the Kubernetes-native attribute and make it portable. So we had to look at the anatomy of a Kyverno policy: what does a policy engine actually do? We can summarize the job of a policy engine very simply. Given an object in your hand and a set of policies, the policy engine is going to take all the rules in your
policies and, for each rule, first determine whether the rule applies to the object. If it doesn't apply, it moves on to the next rule and the next policy; if it does apply, it evaluates the content of the rule against the object you provided and produces an outcome: does the object match the expected requirements or not? At the end you have a set of outcomes, corresponding to the result of evaluating every rule against the given object, and you can make a decision: am I going to validate this object or not? For that, the structure we have here is perfectly adapted, but if we look on the right, there are a lot of things that are very Kubernetes-specific, like resource kinds, labels, namespaces, cluster roles; it is completely Kubernetes-specific. Taking all of this didn't make sense; keeping the same structure made sense, but without the Kubernetes-specific parts. So we somehow had to replace that, and the question is: what are we going to replace it with? We are going to go through a few examples of how we could do that by comparing Kyverno policies and Kyverno JSON policies. On the left we have a Kyverno policy for Deployments: the match statement says "any resource that has the kind Deployment". We can do the same in Kyverno JSON with something quite similar: we match any object that has a field called apiVersion with the value apps/v1, and another field with the name kind and the value Deployment. This time we don't have any dependency on the Kubernetes schema; instead of apiVersion apps/v1 it could be an S3 bucket type. It's just a partial definition of an object: if the object we are given matches the partial definition we have here, it is considered matching. The second point is: how can we, for example, check that the number of replicas of the Deployment is greater than or equal to 2? In a Kyverno policy we used patterns, which represent comparison operations: we go into spec, then into replicas, and we apply "greater than or equal to 2". In Kyverno JSON this was replaced with something a bit
different, but a lot more flexible. We replace validate with assert, with all and check as intermediary statements, and under check we go into spec; this time we take replicas, apply a comparison operator, replicas greater than or equal to 2, and we expect this to be true. It looks a bit different, but also close to what we had in the first place, and in the second case we could have other conditions as well, which makes it a lot more flexible. The third point is about doing array processing differently. We introduced a way to process arrays more elegantly and flexibly: we now put a tilde in front of the data, which means that Kyverno JSON will iterate over every item, applying the same assertion to each item in the array. This basically allowed us to write Kyverno policies for Terraform code. In this one, we have a match that matches on something completely different from a Kyverno resource; this one is an ECS task definition. If the payload we receive contains any ECS task definition, the rule checks that the resource's aws_ecs_task_definition network_mode is awsvpc; if it is different from that, the resource is considered in violation of the rule and will be reported. So this is the new tool, Kyverno JSON: it is simple, it requires no coding, it is portable this time, and it gives more freedom to policy authors, allowing, for example, embedding it in your own tooling. Sorry, so it can be used as a CLI tool, as a Go library, or as an API. Finally, this is mostly a technology; it is not really a complete product, because when you work on code, for example Terraform, you are probably dealing with TF files, and those TF files are not directly expressed in JSON; you probably have to convert them to JSON before you can process them with Kyverno JSON. So this is more a technology than a product, and there's probably something to build on top of it. For
that, we created two tools. The first one is Kyverno Chainsaw, a tool to run end-to-end tests; we used it internally and then open sourced it, and a lot of folks have started adopting it. We are also creating an Envoy plugin to be able to perform policy-based authorization in service meshes like Istio. This is really a project outside Kubernetes clusters this time, because we are going to use it for very different things than before. In the end, the goal is to give all of that back to Kyverno, so that Kyverno can benefit from these evolutions in the future. Oops, sorry. And yes, finally, those two tools are working well, so the next challenge is to get this back into Kyverno. That's it, and thank you. Just to recap: you can join the community, we have a very active Slack channel and weekly meetings, you can contribute to our projects and share your feedback. We hope you enjoy the conference and KubeCon this year.