Hello, everyone. Wow, this is loud. Welcome to Valencia, welcome to the conference, and welcome to hopefully the last session of the day. It's been an exhausting five days, so let's wrap it up with a bang. Today we will be talking about the policy report CRD and how you can manage your admission control, runtime, and scan reports. Before we get into the technical part of the adapters and Policy Reporter, let's go through a quick round of introductions. We have four panelists today. One of us, Stephen, wasn't able to join us because of some visa issues; we'll come to his introduction with a video from him in a bit. I am Anushka Mittal. I come from India, where I'm in my pre-final year pursuing a bachelor's in engineering. I have worked with Falco in the past, and I'll be talking about the Falco adapter soon. I currently work with Kyverno. Over to you, Frank. Yeah, I'm Frank, from Germany. I'm a senior software developer working at LOVOO, and I'm also a contributor to different open source projects like Falco and Kyverno. I'm also the maintainer of Policy Reporter, which I will present in this talk. Mritunjay? Hi, everyone, and welcome to our talk. My name is Mritunjay, and I'm also from India. I'm a final-year computer science engineering student pursuing a bachelor's in technology, and I'm currently an intern with Nirmata, contributing to Kyverno. Before that, last year, I worked with Jim Bugwadia as part of an LFX mentorship to build the kube-bench adapter, which we are going to talk about in today's session. But before we move on to the adapters, let's discuss the foundation and motivation behind the formation of the Policy Working Group. Anushka, would you like to tell us about that?
Yeah, definitely. As you know, policies are an important feature, and in the past we have seen a lot of scattered support for policies at different levels of maturity. The purpose, the motivation behind the Policy Working Group was to provide an overall architecture: one platform to discuss current implementations of policy as well as future implementations and proposals. With this, we were able to provide a universal view of policy architecture in Kubernetes. Let's also talk about the policy report CRD, because this is what the adapters we'll discuss in a while are based on. To give you the motivation: imagine having a huge cluster with multiple policy engines and getting a lot of outputs. If they aren't unified, it can be a big hassle. So the motivation behind the policy report CRD was to unify the outputs provided by multiple policy engines like Falco, Trivy, kube-bench, and so on. This was aimed at helping cluster admins manage their huge clusters by treating policy results just like normal Kubernetes resources, usable with any Kubernetes management tool. So how does this new CRD fit into our ecosystem?
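To make that unified format concrete before we go on: the policy report CRD from wg-policy-prototypes defines reports as ordinary Kubernetes objects with a list of results and a summary. This is only a rough Python sketch of that shape, built as a plain dict; the policy names and resources are made up for illustration, and the summary helper is our own, not part of the CRD tooling.

```python
# A rough sketch of the shape of a PolicyReport object, built as a plain
# Python dict. The top-level fields (apiVersion/kind/results/summary)
# follow the wg-policy-prototypes schema; the concrete policy and
# resource names below are hypothetical examples.

def summarize(results):
    """Recompute the summary block from a list of result entries."""
    summary = {"pass": 0, "fail": 0, "warn": 0, "error": 0, "skip": 0}
    for r in results:
        summary[r["result"]] += 1
    return summary

report = {
    "apiVersion": "wgpolicyk8s.io/v1alpha2",
    "kind": "PolicyReport",
    "metadata": {"name": "polr-ns-default", "namespace": "default"},
    "results": [
        {"policy": "require-labels", "result": "fail",
         "resources": [{"kind": "Pod", "name": "nginx", "namespace": "default"}]},
        {"policy": "disallow-privileged", "result": "pass",
         "resources": [{"kind": "Pod", "name": "nginx", "namespace": "default"}]},
    ],
}
report["summary"] = summarize(report["results"])
print(report["summary"])  # {'pass': 1, 'fail': 1, 'warn': 0, 'error': 0, 'skip': 0}
```

The point is that every engine, whatever its native output, ends up contributing entries to the same `results` list, so one tool can consume them all.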
That's a lovely question. Here you can see an overview of how this fits into the Kubernetes architecture. There's a Policy Information Point: that's where you get the information from your policy engines. That's followed by the Policy Decision Point, where your policy engines evaluate their policies. You can also see the interaction between the Policy Administration Point and the Policy Decision Point; this is where you decide what happens next, where the administrator sits and reviews these unified policy results. After that, the decisions taken are imposed at the last stage, the Policy Enforcement Point. That was a very brief introduction to the policy report CRD, the Policy Working Group, and the overall architecture. We will now move on to the queue of adapters that were built, with Mritunjay presenting the first one, the kube-bench adapter. Now that we know what the policy report custom resource definition is, let's talk about one of the first adapters built on top of it. Before we talk about what the kube-bench adapter is, let's talk about what kube-bench is. kube-bench is a tool built by Aqua Security that helps us run CIS security benchmark checks against the Kubernetes clusters we are running, whether they run on AKS, EKS, or anywhere else. Those checks are performed by kube-bench, and what it gives us is just results. But how to take these results and move forward with them is something we are going to discuss: how we solve it with the kube-bench adapter. So what are we solving? What was the situation before the kube-bench adapter?
The kube-bench installation is a simple, well-defined process: you have your Kubernetes cluster running, you apply the YAMLs provided in the kube-bench documentation, and you get the results. But that's a very manual process, and you are not able to move ahead after the results. What if the cluster admins want more fine-grained control once those results are in? That is what our adapter tries to help with, and not only that: it abstracts away the entire manual process by running it inside a nested Job. If we look at the diagram now, that whole manual process has been absorbed by the kube-bench adapter. On top of that, the policy report custom resource definition that the working group defined has been mapped to the results we get from kube-bench, and what we get after the mapping is a ClusterPolicyReport or PolicyReport, which can help us with policy admission controls. How did we solve it? What were the stages? The first stage was that we already had the custom resource definition defined, of course, but we needed client code to talk to it, to create, update, or delete objects of the custom resource definition. So the first step was writing that client code. After that, the next big step was the mapping, because the two are different data structures: kube-bench has a totally different output format, and the policy report custom resource definition has its own types and fields to fill in. That was a good process.
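The mapping step described above can be sketched roughly like this. This is not the adapter's actual code: the input is a trimmed, hypothetical fragment modeled on kube-bench's JSON checks (each check carrying a test number, description, and a PASS/FAIL/WARN status), and the translation of each check into a policy report result entry is simplified for illustration.

```python
# A simplified sketch of the kube-bench -> policy report mapping:
# each benchmark check becomes one result entry. The status vocabulary
# and field names are illustrative approximations, not the adapter's
# exact implementation.

STATUS_MAP = {"PASS": "pass", "FAIL": "fail", "WARN": "warn", "INFO": "skip"}

def map_check(check, section):
    return {
        "policy": section,                 # e.g. the CIS section name
        "rule": check["test_number"],      # e.g. "1.1.1"
        "message": check["test_desc"],
        "result": STATUS_MAP.get(check["status"], "error"),
        "source": "kube-bench",
    }

checks = [
    {"test_number": "1.1.1", "test_desc": "API server pod file permissions", "status": "PASS"},
    {"test_number": "1.2.6", "test_desc": "Ensure --kubelet-certificate-authority is set", "status": "FAIL"},
]
results = [map_check(c, "CIS 1 Master Node") for c in checks]
print([r["result"] for r in results])  # ['pass', 'fail']
```

Once every check is in this shape, the result list slots directly into a ClusterPolicyReport like the one sketched earlier.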
It was a real brainstorming process where we mapped out what properties we needed, what kind of source we had, and where everything would fit. After that, the whole thing was Helm-ified, I would say: with a single Helm command we can now run the adapter, and it can be scheduled as a CronJob. Of course, for the demo we are not going with a one-month or one-week cron schedule, which is what you'd use in production; here in the demo it's a CronJob scheduled for every two minutes, and we'll see a live example of how we can generate a ClusterPolicyReport with the kube-bench adapter. So it's a single Helm command, and the only prerequisite is that you have a Kubernetes cluster; since we are testing locally, it's just a kind cluster. It will automatically apply the custom resource definitions that we have, and after that we wait for the jobs. If we can fast-forward a little bit so that we can see the jobs a bit earlier... yeah, it takes two minutes because we scheduled it for two minutes. So this is kube-bench, the third pod we are seeing; this would previously have happened manually, but now it's happening inside the job itself, so it's actually a nested job. Once the jobs are done, we'll be able to get our first ClusterPolicyReport, and after its creation, if we don't change the name, it will just keep being updated with whatever results we get. So we are getting our first results, as you can see, and we can also get the YAMLs, which will later be used by the policy report viewers. That was the demo from my side. Next we have the Trivy adapter. Stephen is not here with us in person, but he has sent us a recording. Thank you so much, Stephen; Stephen is a Kubernetes engineer intern at Kubermatic. Over to you, Stephen.
Thank you, Mritunjay, for your quick introduction to the kube-bench adapter. Today I'll be talking to you about the Trivy adapter, which is one of the solutions that adopts the policy report CRD. Let me share my screen real quick. Okay, so this is just a quick introduction about myself: I'm a Kubernetes engineer intern at Kubermatic, and I'll be talking to you about the Trivy adapter. First of all, what is Trivy? Trivy is a simple and comprehensive scanner for vulnerabilities in container images and file systems, as well as configuration issues. Trivy was created by the Aqua Security team; this is a link to the repo, which you can follow to learn more about Trivy, and the Aqua Security team is available for you to ask questions about it. So, what is the Trivy adapter? As the name implies, it is a combination of both Trivy, the scanner, and an adapter for the policy report CRD. What it does is take container images, scan them, map the results to the policy report format, and report them as policy reports. Here's a quick architecture of what the Trivy adapter does: once the Trivy adapter detects a pod, Trivy takes charge and scans the container image; the results from Trivy are then mapped to the policy report CRD and reported as a policy report. I'll show you a quick demo next. In this demo I've already installed the Trivy adapter locally on my system, and I have a Kubernetes cluster running with a pod, and a container image running in that pod. I will scan the container image with the Trivy adapter, and for better visibility we'll view the results as a policy report. There's also a link for you to learn more about the Trivy adapter; it's part of the Policy Working Group's wg-policy-prototypes. You can go to the link, and if there are any issues, please let me know in the repo and submit a PR. So let's move to the demo real quick. Thank you. Next we'll have a quick demo of how the Trivy adapter works, and that's how your reports would look via the Trivy adapter. With that, we can move on to the Falco adapter. Hello again. For the next part of the presentation, we'll be looking at the Falco adapter that was built last year; it was just released in Falcosidekick 2.25. Let's start with some background about Falco and Falcosidekick. What is the Falco project? The Falco project is an incubating CNCF runtime security tool: the de facto Kubernetes threat detection engine. It's pretty amazing; it acts like a security camera that looks for and detects data theft, intrusions, or any unexpected behavior. To detect this, Falco has a certain set of rules, and these rules are extensive and great.
They're built for Kubernetes, Linux, and cloud native. Moving on to Falcosidekick: Falcosidekick acts as a middleware between Falco and whatever output you need from Falco. Falco itself gives out, more or less, five types of outputs; Falcosidekick takes the HTTP output and forwards it to whatever kind of destination you want. That could be any of the ones I've mentioned here, and many more. Policy report is another output for Falcosidekick, so if you want to use it, you simply install Falcosidekick and set the output to enabled, to true, while installing Falco. Moving on to the Falco policy report adapter: the overall architecture looks somewhat like this. At the top you can see the extended architecture of Falco, how it emits alerts via an HTTP output that Falcosidekick accepts, and how the Falco adapter fits in. The Falco adapter takes that output in your cluster, which already has the PolicyReport and ClusterPolicyReport CRDs installed; it maps the Falco output to the CRDs and handles both the generation and the updating of multiple policy reports. This is done in an n+1 fashion: n namespace-specific reports and one cluster-wide report. That's one of the cool things about the Falco policy adapter. Another is that you can run multiple Falcosidekicks in a huge cluster, each with a unique name, which is nice. The best thing, I find, is the configuration options, because any end user would want to optimize and personalize the kind of events they see in their policy reports: the number of events, the priority of events, and which events they'd call high priority or low priority. You can configure all of this through the policy adapter. And finally, the Falco outputs have different fields.
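Before looking at the field mapping, the n+1 fan-out just described can be sketched roughly like this. This is not the adapter's real code: the event field names are hypothetical simplifications of Falco's output, and the cap simply drops events over the limit, whereas the real adapter's eviction policy may differ.

```python
# A rough sketch of the "n + 1" report layout: Falco events tied to a
# namespace land in one report per namespace, everything else in a
# single cluster-wide report, and each report keeps at most
# `max_events` entries (a configurable cap, as described in the talk).

from collections import defaultdict

def group_events(events, max_events=10):
    namespaced = defaultdict(list)   # namespace -> result entries
    cluster_wide = []                # entries with no namespace
    for e in events:
        entry = {"policy": e["rule"], "message": e["output"], "result": "fail"}
        ns = e.get("namespace")
        target = namespaced[ns] if ns else cluster_wide
        if len(target) < max_events:
            target.append(entry)
    return dict(namespaced), cluster_wide

events = [
    {"rule": "Terminal shell in container", "output": "shell spawned", "namespace": "dev"},
    {"rule": "Write below etc", "output": "file opened for writing", "namespace": None},
]
ns_reports, cluster_report = group_events(events)
print(sorted(ns_reports), len(cluster_report))  # ['dev'] 1
```

With three namespaces producing events, this yields three namespaced reports plus the one cluster-wide report, matching the demo later on.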
The Falco output fields are mapped to the policy report CRD, so there's a sort of agreed-upon mapping between them. Next, let's get into a quick demo that shows you how your Falcosidekick looks with the policy report output turned on. I have a cluster running, I have the CRDs installed, and this is how the logs look when your policy reports are being created: a namespace-specific report or a cluster-wide report. Following this, if I just want to quickly look at the reports, at the summary, that's how it would look. Like I mentioned, there's one cluster-wide report and one for each of the namespaces; in my case I have three dummy namespaces, just for the sake of the demo. My limit on the events was 10, so you can see that the maximum number of events in my report is 10. And that's just how the reports look. This is unified, right? The kube-bench adapter output, the Trivy adapter output, and mine all look the same. I think that's all from my side. Over to you, Frank. Yeah, thank you. So now we've seen which tools can generate these policy reports and how they look in YAML format. But how do you work with this kind of CRD? I will present Policy Reporter: what it is and what the motivation was.
Policy Reporter is a tool that adds observability and monitoring possibilities for your cluster security, based on the policy reports we've presented. The idea and the motivation came from different disadvantages I encountered myself while working with them, mostly in a Kyverno context. One problem for me: if you have a cluster-wide policy which validates namespace-scoped resources, you have many resources across many namespaces, so it's very hard to find all the results that relate to a single policy. You also have the problem that a single policy report can contain many results for different policies and resources, and when a new violation comes in, it's not that easy to find. And it's also very hard to find all the results for one dedicated resource. To help with these kinds of issues, Policy Reporter provides different features. First, you can send new violations to different tools, for example Grafana Loki, Elasticsearch, Slack, Discord, or Microsoft Teams. It has an optional metrics endpoint, so you can also use your existing observability and monitoring tools like Prometheus and Grafana. And it has a standalone dashboard to give you a graphical overview of your results, with all kinds of filters, without the need for additional infrastructure or tooling. How does this work? Policy Reporter consists of three components, which are installed as you configure them. We have the core component, which is responsible for watching the policy report CRDs and exposing them via a metrics endpoint as well as a REST API; it is also responsible for sending your new violations to the configured tools I mentioned. The second one is the Kyverno plugin; as the name suggests, it is specifically for Kyverno and adds, on top of the policy report definitions, additional information about your Kyverno policies and how you configured them. The last one is the Policy Reporter UI, which is the dashboard I mentioned; it uses the REST APIs of the other components to present your information in a graphical, more readable way. To show you what I'm talking about, I prepared a demo, and if you want to try it out yourself, I have prepared a GitHub repository for you with all the instructions and Helm charts, so you can rebuild the same environment I'm using in this demonstration. If you want to find out more about Policy Reporter and try it out with more possible configurations, check out the Policy Reporter GitHub repository under the Kyverno organization. It also has a link to dedicated documentation where you can find all the information related to it. So let's start with the demonstration. First you see the Policy Reporter UI. We have an overview dashboard which shows you all the violations found in all policy reports in your cluster, grouped by namespace, and you also have a counter for your cluster policies. Under it you have much more information and detail about your violations, grouped by tool. For example, we also have Kyverno in this environment, where you see your Kyverno violations. If you want to know what happened or what was wrong in your configuration, you can click on an item and see the error message of that report. We can also see the results provided by the adapters we've discussed, for example from Falco, where you see a policy about attach/exec; here too you see the error message and the metadata provided via the output fields. And there's also the kube-bench information. But that's just an overview; if you want to see all the information from a report, you have dedicated pages per policy report source. Let's have a look at the kube-bench one. We can also group results by other information, like rule, which makes more sense for kube-bench, so we get all the results related to the API server. Now if you want to see only the failed ones, you can just filter the table and you see the failed ones; you can also filter for the warnings, and so you can grab exactly the violations you want.
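The kind of cross-report filtering just shown in the UI amounts to something like the following. This is a conceptual sketch, not Policy Reporter's implementation: the report contents are made-up examples, and the filter arguments stand in for the UI's namespace, status, and resource filters.

```python
# A sketch of cross-report filtering: collect all results for one
# resource, or only the failed ones, regardless of which policy
# report they live in. Report contents are hypothetical examples.

def iter_results(reports):
    for report in reports:
        for r in report["results"]:
            yield r

def filter_results(reports, resource=None, status=None):
    out = []
    for r in iter_results(reports):
        if status and r["result"] != status:
            continue
        if resource and resource not in [res["name"] for res in r.get("resources", [])]:
            continue
        out.append(r)
    return out

reports = [
    {"results": [
        {"policy": "require-labels", "result": "fail",
         "resources": [{"name": "nginx"}]},
        {"policy": "api-server-check", "result": "pass", "resources": []},
    ]},
    {"results": [
        {"policy": "disallow-latest-tag", "result": "fail",
         "resources": [{"name": "nginx"}]},
    ]},
]
failed_for_nginx = filter_results(reports, resource="nginx", status="fail")
print([r["policy"] for r in failed_for_nginx])  # ['require-labels', 'disallow-latest-tag']
```

This is exactly the pain point from the motivation: results for one resource are scattered across reports, and a tool has to flatten and filter them for you.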
So if you already have monitoring solutions like Grafana and Prometheus, you can also use a monitoring sub-chart, which integrates with the Prometheus Operator; with that installation you also get predefined dashboards in your Grafana, labeled with Policy Reporter, for free. You get almost the same information as in the UI, with the different filters and overviews, so you have one source of truth, as before. Then we have a logs page, which works as a demonstration of the real-time notification feature. If we just run an nginx pod that violates some Kyverno rules, it will show up in these logs a few seconds later, and you see: okay, at this time the new pod violated four policies. This is just an example; as I said, you can use this with Grafana Loki or Elasticsearch, or with tools like Slack, which make more sense in a production environment. The last pages are, as I said, Kyverno-specific: you get an overview of your running policies and how they are configured, you can view the YAML configuration, and you have a detail page where you also see the results related to each policy. So that's mainly it about Policy Reporter. And now for a short outlook on what comes next.
Now that we know what has happened in the past, let's talk about what's in store for the future. As we mentioned while discussing these adapters, there was another adapter that one of my fellow mentees built with Jim; he was here with us but had to leave early, Hardik. That adapter was recently completed. Another one that is in the works, which we are going to build later, is a Gatekeeper adapter. Beyond the policy report's growing adoption, we also plan a mapping of the custom resource definitions to OSCAL, and not only that, but entirely automating the whole process using a CLI; that is another of the features in the plans and in the outlook. The other thing we are planning is a Kubernetes control catalog, which is, again, more or less a mapping of all the security configurations we try to fit into our Kubernetes clusters. And yes, absolutely, we are all here, and we want you all to be a part of that future. So next: how you can get in touch with the community we work with, the Policy Working Group. This is our mailing list, Slack channel, GitHub, and community meeting. Do help us build this community and make it even better; we're looking forward to seeing you all in the community channels. And now that we know about the community: I know this is the last talk, and after this we will most probably be heading home. So before going home, let's stay connected, let's stay in touch. These are our social media handles; do give us a ping on Slack or on Twitter or on GitHub. We would love to hear and answer your questions and stay in touch with all of you. Thank you so much, everyone, for joining. Now, if I'm not wrong, we have a couple of minutes, so if there are any questions, if there's anything you'd like to talk about, now would be a good time.
[Audience] So in Grafana, you showed the overall status of the cluster or the namespace. Can you also tie this back to the individual developer or the individual team that can then fix it?

Sorry, can you repeat?

[Audience] Here a policy was violated, right? That's what you have this report for. If you run a larger cluster in an organization with multiple teams using the same cluster, can you also tie this back to the individual team or the individual developer that can then fix it?

So in the Grafana dashboards there's not much filtering, in the UI or in general. But for the violation pushes I mentioned, you have the possibility to create channels and filters, so you are able to send violations from different namespaces, or a set of namespaces, to different team channels, for example. That way teams are notified when something happens in their working environment. I think we have the next question.

[Audience] Thanks for the talk. So, for example, for cert-manager we have our own policy approver, our own certificate-request stack. Would we be able to independently build our own integration to work with this... what was the resource name, the policy report resource? Would we be able to independently develop our own integration for this? That's my question.

You are able to build your own kind of adapter for your engines, and as long as your results are policy reports, you can work with Policy Reporter. You will get the same information and the same functionality for your policy engine, as long as your results are mapped to a policy report.

Right, and the only thing that will change is the mapping. We already have the client code for you; it's in the Kubernetes Policy Working Group repository. You just need to import that, and then, depending on the policy engine you have, you can independently build your adapter and contribute it. Thanks. Thank you. I think... do we have any more time?
All right, thank you, everyone. Thank you very much for showing up, and thank you so much for coming.