Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Thomas, and I am an engineering director at Cosmonic. Every week we bring a new set of presenters to showcase how to work with different cloud native technologies. They build things, they break things, and they answer your questions along the way. So today we have three people here who work on the Kyverno project to talk to us about everything they're doing. This is an official live stream of the CNCF, and as such it is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, it boils down to: be respectful of everyone, all your fellow presenters and participants. So with that all said, let's hand it over to Charles-Edouard, Mariam, and Vishal from Kyverno. Hi folks. Hi everyone. So I'm Charles-Edouard. I joined Nirmata, the company building Kyverno, which open-sourced Kyverno and gave it to the CNCF, I think, almost two years ago now. I'm working exclusively on Kyverno, and with that I'll drop the mic to Vishal or Mariam. Yeah, I think I should go ahead now. So I'm Vishal, and I'm a final-year student studying computer science. I have been working on Kyverno for the last year, and it has been a really amazing journey. We have worked on a lot of new things over the last couple of months, and I'm really excited to share everything with you. I hope it's a wonderful experience for everyone. So I am Mariam. I am working as a software engineer at Nirmata, and I am a Kyverno engineer, so I am focusing on Kyverno. I was a previous LFX mentee working on validating admission policies and bringing support for those features to Kyverno. Yeah, that's it. Thanks for the interest, everyone.
I believe Vishal, you were going to kick things off with a little bit of an overview of Kyverno and everything that's going on. So take it away. Yeah. So it's been quite some time since we had a release in Kyverno — it's been like three months — and during the last few weeks and months we have been working on a lot of things. We have a couple of new projects and a couple of new additions that we would like to share. So in this session, Mariam, Charles-Edouard, and I will give you a sneak peek at what's to come, as well as an overview of what Kyverno is. By the end of the session, everyone should have a better idea of Kyverno and what they can expect from us in the next couple of months. Everything we're discussing today is already out as an alpha release, so if you like anything and want to try it out, you can just go to its repo and start using it. If you have any questions or run into anything, you can reach out to us via the Kubernetes Slack channel, and we would love to hear from you. So before we get started and talk about what's new, let's talk about Kyverno. I think the right place to start is to talk about Kubernetes and security in general. To kick things off, let's start with a hot take: Kubernetes is a very good project which is good at several things, but it does not do security well by default. If you're using Kubernetes, your security is your problem. Let me just give you some time to let that sink in. Yep. When I say that, what I mean is: yes, I know Kubernetes has things like RBAC and other tools that can help you improve the security of the cluster, but the problem is that it just doesn't give you fine-grained control.
It doesn't give you as much control as you would need to enforce security well in your cluster, because every customer, every product, everyone has different requirements, and they need tools that are tailored so they can enforce security and do the right thing. And it's not even just about security. Let's say you don't care much about security because it's an internal cluster; even in that case, the problem is that Kubernetes is very complex, and it is really easy to mess up because of little pitfalls you did not know about. You really need something that acts as a guard to protect you against these mistakes. That's where you need something like a policy engine: a place where you can enforce policies and have things set up properly so that you don't make mistakes like that. So let's talk about policy engines, and I hope I've convinced you enough to pick one up. Now that you want one, what can you pick? Let me share my screen. So we at Kyverno basically have a policy engine called Kyverno. It is a CNCF incubating project, and it is a general-purpose policy engine which was built and designed for Kubernetes. Anyone here who has used Kyverno before knows us as a general admission controller for Kubernetes. But now we have branched out of Kubernetes, and we work on different things, like JSON files as well, which you will learn more about in the later part of this presentation. So Kyverno is a general-purpose engine, it is rapidly growing, and it is one of the most popular policy engines for Kubernetes. It allows you to have different rules, like validation and mutation of your resources, as well as generating new resources, cleanup, and all of that.
We also do not require you to learn any new programming language, so it is very easy to get started with. You don't need to hire a new person or learn something beyond what you already know, which is very desirable, because the last thing you want is something that adds more complexity to the system. Besides that, we also have a very wide range of policies that we have created over the last few years, covering almost all the scenarios: multi-tenancy, pod security, supply chain security, all of that. So when you just want to get started, you don't even have to write a policy — you can just pick the things you want from here and start using them. So that's Kyverno and its policy library. Now let's talk about an important concept here, which is: how do you write a policy in Kyverno? Let's go into the documentation on writing policies. Let's just look at a policy, how it is structured, and how it looks. What you see here on my screen is a standard Kyverno ClusterPolicy. There is a name and a specification. Actually, you know what, I think there is a better resource for that; I think it's the "Selecting Resources" page somewhere in the docs. So you have a policy, and every policy can have one or more rules in it, and each rule has a match/exclude block. What a match/exclude block does is let you specify what the rule targets: maybe you want to apply a rule to just a Pod or a Deployment, or anything you want — something that has a specific label or is in a namespace. Once you have specified that, you have four different types of rule blocks: validate, mutate, generate, and verifyImages.
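A minimal ClusterPolicy tying these pieces together might look like this (the policy name, label, and message here are made up for illustration — one rule with a match block on Pods and a validate block):

```yaml
# Sketch of a Kyverno ClusterPolicy: one rule that matches Pods and
# validates that they carry an "app" label. Names and message are
# illustrative, not from the demo.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: Enforce   # or Audit
  rules:
    - name: check-app-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pods must carry an 'app' label."
        pattern:
          metadata:
            labels:
              app: "?*"
```

Swapping the `validate` block for `mutate`, `generate`, or `verifyImages` gives the other three rule types mentioned above.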
That block specifies what the rule does. You can validate the resource in the current request, or you can mutate the resource, or use generate, or verifyImages to verify signatures. You can do all of that using Kyverno, and to see examples you can go to the policies repo. Okay, so this is how a policy looks. Now let's talk about how Kyverno goes about applying these policies for you. Let's look at the lifecycle of how things happen. Say you have just created a new admission request and you're trying to create a Pod. What happens is that you send an API request to the Kubernetes API server, and the API server exposes two different webhooks: a mutating admission webhook and a validating admission webhook. We get callbacks from those webhooks, and those callbacks are intercepted by Kyverno. So you send a request; in the mutating stage, we get the resource and apply your mutating policies. Let's say you have a policy that adds labels to things: Kyverno will get that request, look at the policies, apply the policy to your resource, and then send it for schema validation. Once that is done, if it's successful, we move on to validating admission. This is where we apply your validation policies, and we validate the resource to make sure that, say, you have the right tags, you have the right digest, and your Pod complies with all the rules you set up. Once that is done, if it's successful, the resource is stored in etcd. So what happens if the policy fails? Let's go into reporting. Say you have applied a policy and you have a resource; what happens if the policy fails? There are two scenarios. Let's say you have set your policy to enforce.
In that case, what happens is: you send a resource request, and there's a policy it does not comply with, so Kyverno will just not let you create the resource. You will try to create it, we'll block it, and you won't be able to do it. But there are some cases where you might want to create the resource anyway, because you know it's not the right thing to do in most circumstances, but there's this one exception where you want to do it. For that case we have an audit type, so we have policies of type audit. What that does is, whenever you create a resource that fails, it's just going to tell you: okay, I don't think you should be doing that, but I'm not the one calling the shots around here — you can do whatever you want, but just letting you know you should not be doing it, and that you are violating some of the policies. The way it does that is by creating this thing called a policy report. These policy reports are actually Kubernetes custom resources, which are also stored in your cluster's etcd. A policy report basically contains the results of all the policies that have been applied to a resource. It has multiple results for multiple policies, and in the end it aggregates all of them and tells you which ones passed and which ones failed. And these are stored in your cluster's etcd. So that's that, and that's the basic structure of Kyverno. Now let's talk about the things we have worked on. There's one thing related to this reporting; that's why I was focusing on it. Let's say you have been using Kyverno for a very long time, your cluster has grown, and now it has like a thousand pods in it. And you suddenly notice that your API calls are taking way too long.
That might be happening because the cluster has so many resources that the API server cannot keep up with all of them: you're just sending too many requests to the API server, and it takes time to process each one of them, which is why every request takes so long. And that's not even the main problem. There's another problem you might run into in the future, and that is with etcd, the storage that you use. etcd has a practical limit of eight gigabytes, and the problem is that if you have a very large cluster, you might just hit that eight-gigabyte limit. Did someone say something? Sorry. No, it's just my mic — you're okay, keep going. Okay, great. I think I should change my screen share. Yeah. So there's another issue that you might run into, and we have recognized it: etcd has a practical limit of eight gigabytes, which can easily be crossed if you have a very large cluster with a lot of resources. Sure, you can raise it, but then again you have a limit of around sixteen gigabytes instead, which is still a limit. So how can you fix that? You have a couple of problems: one is performance, and the other is the size limit. To fix that, we have created a new project, which is called the reports-server. The document I'm showing is the KDP, the Kyverno Design Proposal, that covers it. The reports-server works in a similar way to the metrics server, if you're aware of that, and I think there's a diagram somewhere, in fact. What the reports-server does is take the API calls that you are making to create and update reports.
Instead of sending them along the standard path through the API server — this is the standard route, which every other resource follows — it sends them to something called an extension API server. What that does is: now, whenever you try to create a policy report, the request goes to the extension API server, and instead of storing the report in etcd, it stores it in another database which does not have that limit; in our case, a relational database. This solves two problems for you: the API server has more headroom, so it can process more requests and your requests complete faster, and etcd won't hit the eight-gigabyte limit, because the policy reports are stored in a different place. There's a very good reason why we can do this for policy reports, and maybe even for Kubernetes events: those things do not store historic data. They are created and deleted all the time, and even if someone accidentally deletes them — even if you lose them — they will get created again. They don't need the guarantees provided by etcd, so you can easily store them in some other place that is not etcd. So that's how we do it. And we have a project repo — it's called reports-server, under the Kyverno org, and it's public. You can go ahead and take a look at that. We have documentation covering it, and we have multiple ways to install it as well. I will show you a demo right now. You don't have to do much to install the reports-server: all you have to do is take the install manifest for the reports-server and apply it to your cluster. Once you have done that, you can just start working with the reports-server instead of the standard API server path.
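The install step described above boils down to a couple of commands. A sketch, with the caveat that the manifest URL below is illustrative — check the reports-server documentation for the current install manifest or Helm chart:

```yaml
# Illustrative install flow for the reports-server (URL is a guess at
# the layout; see the kyverno/reports-server docs for the real one):
#
#   kubectl apply -f https://raw.githubusercontent.com/kyverno/reports-server/main/config/install.yaml
#
# Then verify the extension API server registered itself: the reports
# APIServices should point at a Service rather than showing Local:
#
#   kubectl get apiservices | grep -E 'wgpolicyk8s.io|reports.kyverno.io'
```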
Let me just show a quick demo of that. Let me stop sharing my screen and just make sure... For those interested, I'm dropping the link down in the live stream, so you should be able to see that if you want to check out the reports-server. Yep, thanks. Great. And by the way, this was something a couple of users had been asking for for a long time. Yeah, it's always really great when you get to deliver those things. For those who've been using Kyverno before, you might have run into this need already, so thanks for calling that out. All right, so here I have... Could you increase the size of your terminal font, Vishal, just so people can read a little better? Is that better? Do it like two or three more times. Okay, just give me one second. We all like to pretend we can read the tiny font, but we all eventually have to go increase our font size anyway. There we go, that'll be perfect. Okay, that's nice. Let me just resize that. So here I have a standard cluster. Actually, I already have it pre-installed — I forgot to uninstall it before the event — but I'll just pretend that I did not install it. So, kubectl apply: what you have to do is just apply this manifest, and it will create a couple of things. Unfortunately, I forgot to uninstall things, so you'll have to deal with that. What it does is create a couple of things for you; the most important are some APIServices. We have the APIService for the policy working group's reports API, as well as the Kyverno reports API. These serve the ephemeral reports, which we use for aggregation, and the upstream policy reports, which are managed by the Kubernetes Policy working group. So those are the two things. We get APIServices — so you see there's every other APIService, and then there are these two.
And there are these two, for the reports-server — let me just do that again; it's just this conflicting-update thing. So we get the APIServices, perfect. There are these two reports APIServices, and they are not set to Local. What Local means is that your reports go into etcd through the local API server path. When you have it set to something else — in this case the reports-server — what it tells Kubernetes is: instead of processing that request in your local API server, you should send it to my service, which in this case is the report-server service in the report-server namespace. Then that server processes the request and does whatever it wants with it. That's how it works. So now let me just try to create something — a test policy report: test slash test-report. Basically, I just have a standard policy report here, which contains everything, and let me just try to apply this file. Oh, what is that? It already exists — I think I have already created it. Let me get the policy reports... oh, I already have one, which was created a while ago; I'm just going to delete it first. For the audience, this is the tell-tale sign that this is not a canned demo. This is live — great job, Vishal. Yeah, so finally I got my PC to work; it's finally listening to me. Okay, so I deleted the old one and created a new one. You can see with get policyreports that I just created a new policy report, and if I fetch it as YAML — all right, it's the same policy report that was just created. So it might look like the exact same thing is happening as previously, as if you did not have this installed, but there's one difference.
And that is: there is a new pod in the report-server namespace, which is a Postgres database. I can exec into that pod and provide the password — perfect — and this is the Postgres database where everything is stored instead of etcd. Let me just connect to the reports database. If I query the database — this pod, which is a Postgres instance, has a couple of tables in it: two for the upstream policy reports and two for our ephemeral reports, which we use in Kyverno for other things. And if we try to get the report we just created — selecting from the policy reports table where the name equals test — you can see that this is the policy report that was just created; its name is test and it's in the default namespace. So whenever you query the API server to get policy reports, the API server proxies the request to the service, which is connected to this Postgres instance, and it fetches those reports from there. The advantage of having them in a Postgres instance instead of etcd is that now you do not have a size limit. And you don't even need the guarantees that a store like etcd provides, because even if some data gets lost, Kyverno will eventually re-create those policy reports for you anyway — it's data that gets updated all the time. So you don't have to worry about those things either. Even if you are running a cluster that already has Kyverno installed, you just have to apply the YAML file, and all the existing policy reports you have will get created again for you; you don't have to worry about anything. So that's how the reports-server works, and you can try it on your local cluster.
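The Local-versus-service routing shown in the demo is controlled by an APIService object. A sketch of what a service-backed one looks like (the group, version, and service names follow the demo, but check the reports-server install manifest for the exact definition):

```yaml
# Sketch of an APIService that routes an API group to an extension
# API server. When "service" is set, the API server proxies requests
# for this group to that Service; a built-in group has no "service"
# and shows up as Local in `kubectl get apiservices`.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha2.wgpolicyk8s.io
spec:
  group: wgpolicyk8s.io
  version: v1alpha2
  groupPriorityMinimum: 100
  versionPriority: 100
  service:
    name: reports-server        # assumed service name
    namespace: reports-server   # assumed namespace
```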
All of the installation instructions are present in the reports-server documentation. You can follow the guide; we have a migration guide, as well as everything you need for configuring the database. All of the information is there, so go ahead and try it out. If you have any questions or queries, you can reach out to us via Slack. I think I've taken quite a lot of time covering all of this — I might have bored you a lot — so we should start covering other things as well. After this, let's discuss something about VAP, which is Kubernetes ValidatingAdmissionPolicies, and I would like to invite Mariam to cover this. So, Mariam. Okay. So we are going to talk about Kubernetes validating admission policies and how Kyverno can be used to manage and generate these policies. First of all, let me check my screen. Is it clear now? Yes, it is, you're good. Okay. So first of all, Kubernetes validating admission policies are used to declare validation checks that will be evaluated against incoming resources. If these checks evaluate to true, then the API server will allow the creation of such resources; otherwise, they will be blocked. These validation checks are written as CEL expressions, and they are evaluated directly in the API server, which means that validating admission policies don't require an admission webhook. That, of course, reduces the complexity of implementing and maintaining admission controllers, and it also reduces the latency that results from the round trip between the API server and the admission webhook server. So it's pretty straightforward, and it doesn't require any third-party product to execute. So let's see how to write a validating admission policy.
First of all, we write a policy definition, and then we write a binding for that definition. Let's start with the policy definition. The ValidatingAdmissionPolicy is used, as I said before, to declare validation checks, written as CEL expressions. The second thing is that the policy definition must declare which resources the policy is going to apply to. In this example, the validating admission policy matches Deployments and makes sure that the Deployment replica count is less than or equal to five. The second important resource that needs to be created along with the policy definition is the policy binding. As you see here, the ValidatingAdmissionPolicyBinding is used to declare the validation actions that the API server will take in case a resource violates the policy. There are other values for validation actions, like Warn and Audit, and in those cases the API server will not block the creation of the resource even if it violates the policy. For the sake of this example, the validation action is set to Deny. The most important thing about the ValidatingAdmissionPolicyBinding is that it provides scoping to the policy definition. Together, the binding and the policy are applied to resources; in this example, both will be applied to Deployments, but not all Deployments — only Deployments whose namespace has a label with environment as the key and test as the value. That is what it means for the binding to provide scoping to the policy definition. And there are alternative options — you can use the objectSelector, and there are more. With that said, validating admission policies let you define policies and apply them to resources, but they lack many useful tools for providing a full policy-management experience to users.
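The definition-plus-binding pair described above can be sketched as follows (names are illustrative; older clusters may need the `admissionregistration.k8s.io/v1beta1` API version instead of `v1`):

```yaml
# Sketch of a ValidatingAdmissionPolicy and its binding: the policy
# declares the CEL check on Deployment replicas; the binding scopes it
# to namespaces labeled environment=test and sets the action to Deny.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: check-deployment-replicas
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "object.spec.replicas <= 5"
      message: "Deployment replicas must be less than or equal to 5."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: check-deployment-replicas-binding
spec:
  policyName: check-deployment-replicas
  validationActions: ["Deny"]
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test
```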
And of course, this is where Kyverno comes in. Kyverno offers a batteries-included experience for managing validating admission policies. One of the most useful tools — okay, you see my screen now? Yeah, we can still see everything, you're good. Okay. One of the most important tools that Kyverno provides is the CLI. The Kyverno CLI has been extended to apply and test validating admission policies locally or against clusters. This allows users to simulate policy enforcement without altering the cluster state, without making any changes to the cluster. It allows policy authors to make whatever adjustments they want in the policies and make sure that those policies align with the desired outcome before deploying them to the cluster. And of course, users can apply these policies to resources in CI/CD pipelines, and there are many more use cases for the CLI. I prepared a quick demo on how to use it. Okay, so first of all, we have this validating admission policy that matches Deployments and makes sure that the Deployment replica count is less than or equal to five. We have, of course, the ValidatingAdmissionPolicyBinding, with the validation action set to Deny. The binding says that the policy will not apply to Deployments in all namespaces; it will be applied only to Deployments whose namespace has a label with environment as the key and a value of either staging or production. So this policy will be applied only to Deployments that exist in a namespace with such a label. And of course, we have some Deployments to test this policy against. We have six Deployments. The first two are deployed in the testing namespace; one of them has four replicas and the other has two.
The second two Deployments are deployed in the staging namespace; one of them has four replicas and the other has two. As expected, since the first two Deployments are in the testing namespace, the policy will not be applied to them at all. The second two Deployments match the policy's criteria, so the policy will be applied to both: one of them will fail and the other will pass. Finally, the last two Deployments are deployed in the production namespace, again one with four replicas and the other with two, and again, as expected, the policy will be applied to these resources as well. So as I said before, we can test policies locally without even communicating with the cluster. For the sake of this example, we are providing a values manifest where we specify which namespaces we have and which labels those namespaces have. Here we are specifying that we have a namespace called staging with this label, and the same for production and testing. The last thing — the main resource for testing policies against resources — is writing a kyverno-test.yaml file, where we specify the policy and all six Deployments (each file contains two), and then the expected results. Here we are saying that the expected result of applying this policy to this resource will be a failure, and the same for the other one with four replicas. Since the staging and production Deployments with two replicas comply, we expect that the result will be a pass, and the same here. And of course, we are passing the values manifest file so that the CLI knows which namespaces we have and which labels they have. Okay, so let's run this command.
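A rough sketch of what such a kyverno-test.yaml can look like (the file names, resource names, and the exact schema version here are illustrative — check the Kyverno CLI documentation for the current test format):

```yaml
# Sketch of a Kyverno CLI test manifest: it references the policy and
# resource files, the values file describing namespace labels, and
# the expected result per resource.
apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: check-deployment-replicas-test
policies:
  - policy.yaml
resources:
  - deployments.yaml
variables: values.yaml
results:
  - policy: check-deployment-replicas
    resources:
      - staging-deployment-1      # four replicas, should fail
    kind: Deployment
    result: fail
  - policy: check-deployment-replicas
    resources:
      - staging-deployment-2      # two replicas, should pass
    kind: Deployment
    result: pass
```

Running `kyverno test .` in the directory containing these files then compares the expected results against the actual outcomes.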
Here we are using the kyverno test command, and as you see, the result for all of them is pass, because the Kyverno CLI test command compares the expected result that we wrote in the test manifest against the actual result of applying the policy to the resource. If they are equal, then the overall result that appears to the user is pass. As you see here, testing Deployments one and two have a result of pass and are marked as skipped, because we expect that policies will not be applied to Deployments that exist in the testing namespace. So this was a quick demo of the Kyverno CLI; there is much more to discover regarding this part. Another important thing is the reporting system. Kyverno provides a reporting system that generates policy reports as a result of applying validating admission policies to resources. These policy reports provide valuable information and insights about the compliance status of these policies in the cluster. Users can analyze the reports, see which resource violates which policy, and, according to that, take appropriate actions and rectify those resources. So it's really important to generate such reports for validating admission policies. This reporting system already exists for Kyverno policies, and it has been extended to provide the same kind of system for validating admission policies as well. I prepared a quick demo for this too. Okay, so we are using the same example as before in the CLI: we have the check-deployment-replicas policy, and again the binding provides scoping. Together, both resources will be applied to Deployments in case they have the required namespace labels. And again, we have the same six Deployments.
Okay, I'm going to try first to create these Deployments. Let's check first that we don't have any reports. Okay, now let's apply the Deployments. The idea behind applying the Deployments first is that we want to see actual policy reports generated as a result of applying the policy to existing resources in the cluster. So we first apply the Deployments, as you see here. At this point, there is no policy, so all of them will be applied as expected; they will be successfully created. Then we apply the policy. As in the previous example, we expect that Deployments whose namespace is testing will not be checked against the policy. So if we check whether there are policy reports in the testing namespace, we see that there are none generated for these resources, because there's no policy actually applied to them. Okay, and we can try to get policy reports from the staging namespace. As you see here, there are two policy reports. The first is for staging deployment one, whose replica count is four, so it got a failure result. The second is for staging deployment two, which has a replica count of two, so we expect it to pass, and that is what the policy report tells us. We can have a quick look at how the policy reports look. First of all, we have this report, and it says that it was generated for this Deployment, and the result is a failure; this is the result of applying the policy and its binding to this resource. The second report is similar. We can also check whether there are reports in the production namespace, and here we have the same — well, not the same policy reports, of course, but again, for each Deployment we have a policy report.
Since production deployment one has a replica count of four, it got a result of fail. The other deployment has a replica count of two, so it passes; it doesn't violate the policy at all, so it gets a pass as a result. Okay, so again there is a lot more about policy reports to discover: there are many cases in which these reports can be generated and many ways to benefit from the information they provide. Feel free to explore it, and if you have any questions, feel free to ask on our Slack channel and we are glad to help. The last thing is about Kyverno policies themselves and how Kyverno can generate and manage the full lifecycle of a validating admission policy. First of all, a Kyverno policy now has a validate.cel sub-rule. This is a new field that was introduced, and here you can write the CEL expressions that you want to evaluate against resources. There are flags in the admission controller that give you the freedom to either generate a validating admission policy from the Kyverno policy or let the Kyverno engine handle the resource validation itself. If you choose to have a validating admission policy generated from the Kyverno policy, you will see a field in the status of the Kyverno policy saying that it was successfully generated. You can then see the validating admission policy that was actually generated, and it has an owner reference pointing back to the Kyverno policy. And of course, if you update or delete the Kyverno policy itself, any operation you do on the original Kyverno policy will be directly reflected in the generated validating admission policies. That's it.
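The validate.cel sub-rule described above can be sketched as follows. This is a minimal illustration consistent with the demo's replica check; the policy name, replica limit, and message are assumptions:

```yaml
# A sketch of a Kyverno policy using the validate.cel sub-rule,
# from which Kyverno can generate a ValidatingAdmissionPolicy.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-deployment-replicas
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-replicas
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        cel:
          expressions:
            # CEL expression evaluated against the incoming object.
            - expression: "object.spec.replicas <= 3"
              message: "Deployments must have at most 3 replicas."
```

When the admission controller is configured to generate validating admission policies, a ValidatingAdmissionPolicy and its binding are created from this rule, carrying an owner reference back to the ClusterPolicy so that updates and deletions propagate.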
I hope this was beneficial for you. Given the importance of policies beyond Kubernetes, it's really important to have policies that check and validate other resources like Terraform files, Dockerfiles, and many more. This is what Kyverno JSON does, and it will be introduced by Charles. Thanks, Mariam. So, hi everyone. I'm going to close out this live stream by presenting two new tools that are gaining popularity and adoption. The first tool is Kyverno JSON, which is something users had been asking for for quite a long time. This new tool allows applying a sort of Kyverno policy outside the Kubernetes world. So, let me share my screen. Okay, select a tab. I need to select a tab. No, I guess I can share the entire screen. Yeah, this one. Okay, and the second tool will be Chainsaw, which leverages what we did in Kyverno JSON and applies that technology to end-to-end testing for Kubernetes operators and more. So, do you see my screen? Yes, it's visible, good to go. Okay, perfect. So, this is the GitHub repository for Kyverno JSON. Basically, Kyverno JSON is Kyverno for any JSON-compatible payload. JSON-compatible payloads include Terraform files. Terraform files are not written in JSON out of the box, but there are tools to convert those Terraform files, plans, or resources into JSON payloads, and those JSON payloads can be evaluated against Kyverno JSON policies. The same thing applies to Dockerfiles: we can take a Dockerfile, and we will look at an example of how a Kyverno JSON policy can validate or reject a Dockerfile. It can be applied to cloud configurations, it can be used to perform service authorization requests, and we have an LFX mentorship coming up to create an Envoy plugin based on Kyverno JSON, to allow using Kyverno JSON as an authorization framework. And that's it. So, we have all the documentation available here.
We have a small policy catalog, but it is expected to grow in the coming weeks and months as Kyverno JSON is adopted by more and more users. We have good documentation. Basically, Kyverno JSON comes in three forms: you can use it as a Go library, we have a command line interface (the kyverno-json tool), and we also have a web application with a REST API where you can send your payloads. The policies are registered in the Kubernetes cluster as CRDs, and then you can create your custom resources for your policies. So, the next step is to look at how to write policies. It looks like a Kyverno policy, but not exactly the same. The first thing to note is that the API version is json.kyverno.io, not just kyverno.io. The structure of a Kyverno JSON policy is a little bit different; it shares a lot of similarities with Kyverno policies, but there are subtle differences, of course. The kind is ValidatingPolicy. Then it's like any other Kubernetes object: you have metadata, you have a name. It's a cluster-level resource, so there's no namespace, but you can have labels, annotations, and everything else in metadata. Then you have a spec. The spec is made of rules, and all rules have a name and validate a couple of things. On top of that, we have the match and exclude statements that we are used to manipulating in Kyverno policies. The only difference is that Kyverno policies live in the Kubernetes world, so it was easy for a Kyverno policy to match on a kind or a namespace, because those concepts are always present in a Kubernetes cluster. Since Kyverno JSON wants to address more than Kubernetes clusters, we needed more flexibility: basically, what you put in a match and in an exclude can be anything. In this example, the policy is going to validate S3 bucket tags. We are going to make sure that every S3 bucket has a team tag, and that this team tag has a value. But an S3 bucket doesn't have any kind.
It doesn't live in a namespace; it doesn't mean anything for an S3 bucket to talk about a namespace or anything like that. But in this case, we know it's an S3 bucket, and an S3 bucket will have a type, and this type will be AWS S3 bucket. So, everything you put in a match or an exclude statement will be evaluated against the incoming resource; it either matches or it doesn't, and the resource will be accepted or not. And of course, we still support the context entries we are used to finding in Kyverno policies, but we don't have, and we don't need, foreach statements. We don't need pattern operators, anchors, wildcards, or things like that, because in Kyverno JSON every validation or check is an assertion tree. That's something we created just for Kyverno JSON and that we reused later in Chainsaw. Basically, an assertion tree supports complex expressions in the keys of the YAML structure. In this case, for example, we are going to check a pod, because Kyverno JSON can also work with Kubernetes resources. It can do more, but a Kubernetes resource is just YAML or JSON, so there's no reason why Kyverno JSON could not work with Kubernetes resources. So in this case it's a pod, and we are going to check that "spec.serviceAccountName equals default" is false. We don't need patterns like "not default" or comparison operator prefixes. So that's really the difference with Kyverno policies: we made a syntax that's more flexible than what we had in Kyverno policies, and it allowed us to remove foreach, patterns, and everything like that; they are not necessary anymore. So I guess we have only five minutes left, so I can make a quick demo. I talked about Dockerfiles, and I have a quick demo that does just that: I have two tests there and the policy. This policy is going to be evaluated against a Dockerfile.
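The S3 bucket example described above could be sketched roughly like this. The payload shape (a `type` field and `values.tags`, as produced by a Terraform-plan-to-JSON conversion) is an assumption for illustration, as is the assertion-tree expression:

```yaml
# A sketch of a Kyverno JSON ValidatingPolicy; the payload field names
# (type, values.tags) are assumed, not taken from a real Terraform export.
apiVersion: json.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: require-team-tag
spec:
  rules:
    - name: require-team-tag
      # match is free-form: here we match on a "type" field in the payload,
      # since an S3 bucket has no Kubernetes kind or namespace.
      match:
        any:
          - type: aws_s3_bucket
      assert:
        all:
          - message: every S3 bucket must carry a non-empty team tag
            check:
              values:
                tags:
                  # Parenthesized keys are assertion-tree expressions
                  # evaluated against the current node.
                  (team != null && team != ''): true
```

The parenthesized key is the assertion-tree mechanism the speaker describes: an expression computed over the payload, with its result compared to the stated value, which is why foreach blocks, anchors, and pattern operators are no longer needed.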
This is a very simple Dockerfile, and everyone can see that it's just calling wget with a number of options and a URL. The idea is that we first need to translate this Dockerfile into a JSON payload, and there are tools that do that perfectly well; you can look them up and you will find tools that can do it. Then we have the corresponding JSON payload with the stages, the FROM statement, and all the commands that we can find here in our Dockerfile. Maybe I should make it bigger. So yes, this is the Dockerfile, and this is our policy: it looks at the stages, at the commands whose name is RUN, and at the command lines of those commands. If something starts with or contains wget, and the --no-check-certificate flag is present, we expect this to be false. We don't want any command starting with or containing wget together with this flag, because it's considered insecure. And if I run this first test, against the first Dockerfile, which is supposed to fail, I get the error detected by Kyverno JSON, and we see that we have an invalid value, true, where we expected false. False is the expected value; if we get true, it means this flag is present on a wget command in the Dockerfile. And of course, I have the equivalent good test, which also uses wget but without the forbidden flag. In this case, if I run this second test, it passes, because nothing violates the conditions of the policy. So that's it. I see we have only two minutes left. Taylor, are there questions or anything else? Currently, there aren't any questions from the chat, and given the time right now, I just want to wrap this up and say thank you to you all. These were some really great demos. Everyone can go check the project out.
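The Dockerfile check described above could look roughly like the sketch below. The JSON field names (`Stages`, `Commands`, `Name`, `CmdLine`) follow one common Dockerfile-to-JSON converter and are assumptions here, as is the exact iteration syntax:

```yaml
# A sketch of the wget/--no-check-certificate policy; the payload field
# names and the iteration expression are illustrative assumptions.
apiVersion: json.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-dockerfile
spec:
  rules:
    - name: deny-insecure-wget
      assert:
        all:
          - message: wget with --no-check-certificate is not allowed
            check:
              # Iterate over every command line of every RUN command,
              # in every build stage of the converted Dockerfile.
              ~.(Stages[].Commands[?Name == 'RUN'][].CmdLine[]):
                # True would mean the insecure flag is present,
                # so we assert it must be false.
                (contains(@, 'wget') && contains(@, '--no-check-certificate')): false
```

Against the "bad" Dockerfile the expression evaluates to true where false was expected, which is exactly the "invalid value: true, expected: false" error shown in the demo; against the "good" Dockerfile every iteration yields false and the test passes.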
I believe the website is kyverno.io, correct? You can go check out the project there and see what they're up to, all the things they've been demoing, and what you can do and use them for. So thank you so much for coming. We're so thankful for all the effort you put into this, and we really hope everyone enjoyed it. We'll be having the next live stream in a little bit, and we hope to see you all there. So thank you all for coming, and thank you once again for presenting. Thanks everyone, see you all. Thank you.