Hi everyone. Hi Jim. How are you? Very good. How are you? Good morning. How are you? Doing alright. How about you? Another wonderful day in quarantine. Yep. So I see we had a presentation on the agenda from a Red Hat team. I was seeing that myself. Yeah, I added that. They reached out to me. We were connected through, what's his name? Anyway, it's from the IBM project originally, the multi-cluster management one. They have a policy component and they were interested. And I said, yeah, come on by, and then asked if they wanted to present what their project was and how it relates to policy. Cool. Oh, they're asking you about access. Does that sound like something I need to do? Zoom should be wide open. The only change Zoom has made recently is you have to log in to your Zoom account, which is just your personal thing. I'm not aware of any other change. I unfortunately don't have access to the CNCF Zoom account, so I'm just a regular user. Yeah, let's see if that works. Are they still having difficulty getting in? Yeah, they're just signing up now for Zoom. I don't know why Zoom made that change. I think it was all the Zoom bombing at schools. Right. Yeah, Zoom is no longer just a product name, it's now a verb. It's pretty crazy. Hi, Erica. This is Jaya. I was able to join. Hi, great. I know that was a little bit of a technology glitch as usual, but yeah. Yeah, sorry, Zoom added the requirement to sign in pretty recently and we forget. Sorry for not letting you know so you could do that ahead of time. Okay, I'm going to give the same instructions to my team. Okay, I think we have only half an hour, right? We have the whole hour reserved, but okay, I mean, we have other things to discuss. That's fine. Yeah, so whenever you're ready for my topic, that's good. I think I have my team on the call, so I think you're good. Okay, let's check the agenda.
Make sure we see if we have any other little items. So if you want to introduce yourself and talk about the project, I'll hand it over to you. Awesome. Hi, everyone. Thank you for the opportunity to present in this forum. My name is Jaya Ramanathan. I am the Chief Security and Governance Architect for Red Hat Advanced Cluster Management, which is a new offering for Kubernetes cluster management that we released last week as a technology preview. So we're pretty excited about it. It is based on technology that originated in the Multicloud Manager product offering within IBM; we have brought it over to Red Hat, integrated it, and delivered it last week as a technology preview. So what I wanted to do today was take you through a quick overview of the product and also do a quick demo. I also have on the call several folks from Red Hat who are working in this space on security and compliance aspects. So why don't we have you, Oz and Jacob, go ahead and introduce yourselves. All right. Thanks. So my name is Juan Antonio Osorio Robles, but Oz is the shorter version, so I recommend you use that. I work in security and compliance at Red Hat and we are mostly doing OpenShift. So not a lot of upstream Kubernetes just yet, but hopefully that'll change. Happy to be here. Jacob? Yeah, I work on the same team as Oz. I don't have much to add to what he said: same team, same job. I work on the OpenShift compliance operator. That's pretty much it. Okay. Hi, everybody. My name is Yu Tao. I work with Jaya on the same team, Red Hat Advanced Cluster Management. I'm the main engineer for the governance capability that you will see in a few minutes. Happy to be here. Okay. Awesome. So let me go ahead and share my screen. Okay, looks like I will have to come back in because sharing permission for Zoom is not enabled for me. So let me go back. I'll come back in in a minute.
Okay. Sure. Maybe we'll take a minute since we're a small group. Robert and Jim, you want to introduce yourselves to the new people? I think the other people know me. Yeah. Absolutely. Hi, everyone, and thanks for joining. I'm Jim Bugwadia from Nirmata and have been working with this group on some structure and common machinery around policy management. One of the open source projects that we lead is Kyverno, which is a policy engine for Kubernetes. And a definition that we are proposing as part of this working group is to standardize, or have some common way of, reporting policy violations, and also policy execution reports, which, depending on the agenda and the time, we can discuss either in this session or an upcoming session. Robert Ficcaglia. I've been active in the CNCF SIG Security group now for, gosh, close to a year, helping with the security assessment activities around new CNCF projects, and also active in this policy working group as well. And I have also recently started working with the Linux Foundation on Kubernetes training topics. Okay. Looks like sharing seemed to work. Can you all see the screen? Yes, we can. Okay, good. Let me put it into presentation mode here. Okay. And I assume you can also hear me. So let's go ahead and start. So basically, the topic I'm going to be covering today is the governance capabilities that we have within the Red Hat Advanced Cluster Management offering. It is also part of the open-cluster-management community project, and that's our way to encourage collaboration across Red Hat as well as third-party products and vendors. So the whole idea, I'm sure, since you're all part of the security working group, you're very familiar with these concepts, so I'll just quickly cover them.
So by governance, what we mean is a structured way of operating an IT infrastructure based on well-defined processes, policies, and procedures. And typically, that's what most enterprise clients do. So they have internal standards and, depending upon which industry segment they are in, they also have to deal with external standards, whether it's PCI, HIPAA, et cetera. So they have well-defined policies and use tools to implement those policies. And the R in GRC obviously stands for risk. The idea is you want to be able to identify risk areas and the priority of the risks so that IT operations can prioritize and remediate them as needed. And compliance, compliance is a very broad term; it is used in different ways. In the context of the work that we are doing, when we refer to compliance, we're talking about whether we comply with the governance that we put in place by means of defining policies or not. So this is more focused on, I would say, technical controls and operational policies. That's really the focus here, as opposed to compliance in the context of external standards like PCI, HIPAA, FISMA, et cetera. That's a different level. So there are two levels, if you think about it. And as I walk through the architecture, you will get a flavor of how we address both aspects. So the governance framework that we have put in place, we have several goals there. One is we obviously want to deliver as many out-of-the-box policies as possible, specifically for the controls that we provide. So for example, if the OpenShift platform has security capabilities, we want to deliver policies so that those can be configured correctly. And obviously some of the same policies also apply to vanilla Kubernetes environments, not just OpenShift. But at the same time, we want those policies to be customizable. So that's one of our goals.
The second goal is we want the ability to integrate data from third-party controls, because we, meaning Red Hat, won't be the one providing all the controls. Clients typically have security offerings and products that they already have in place and they may want to integrate those, so we do need the ability to integrate third parties. And then we have the overall dashboard and API for the posture and deviations from policies. And the dashboard itself has to be customizable for various standards; you will see that when I show you the demo. The other goal we have is the ability to add policies for different controls without having to make changes to our core policy framework, and that is another goal we have accomplished as well. The other thing I also wanted to point out here is, though this group obviously is focused on security, when we talk about governance, it is not just for security controls. Governance could also apply to controls related to resiliency, to software engineering standards, and to other aspects. So the policy framework is designed in a way that the policies can be applied across the board for all these aspects. So this is the overall governance architecture that we have. If I start on the right-hand side, we have three different ways in which you can incorporate policies into this architecture. The first is using this governance dashboard UI. So we have a policy UI, you will see that in the demo, and you can go and create policies there and then bind them to clusters where they apply using our placement policy. You can also author policies using YAML and then import them using our CLI. And you can also use this mechanism called subscription, which is another concept we have within RACM. The subscription allows you to tie to a particular store.
In this case, I'm showing GitHub, where the policies, the YAML files, could reside. And then you can basically establish a channel for the subscription to go and retrieve the policies from GitHub. It could also be an object store or some other repository. And then the subscription can apply those policies to the hub, which in turn deploys them to the managed clusters. So those are the three different ways in which you can incorporate policies. One of the beauties of using the GitHub kind of mechanism is it allows the policy lifecycle to be managed just like you would manage the lifecycle for source code. That is one of the advantages of using that approach. So those are the three ways. And then once the policy is deployed at the hub, and you use the placement policy to specify which managed clusters it applies to, the policy then gets deployed on the managed cluster. That's the one on the left-hand side. And though I'm just showing one box here, obviously there could be multiple managed clusters. Then within each managed cluster, you have a set of controls. Some are Red Hat provided, others are provided by third parties and clients. And then you have policy controllers that consume the policy. Depending upon what control they are managing, they check the state against the control and then return back violations, so those violations can be reported on the RACM hub. So that's the overall governance architecture. After this chart, I'll pause and take some questions. So the overall technical approach is to open source the policy framework and the sample policy controllers and policies. And the way we are doing that is through the open-cluster-management organization, which is our GitHub project. And then for technical controls provided by Red Hat, we want to deliver out-of-the-box policy templates. And within each policy there are annotations that allow you to specify the standards, controls, and categories.
So for example, you could imagine an enterprise client in the healthcare industry; they have to deal with HIPAA. They also have to deal with FISMA if they are interacting with Medicare and the federal government. So when they put technical controls in place and they want to govern those controls using this framework, they want to say, okay, this policy applies to this control, but it applies to both HIPAA and FISMA. That is what these customizable annotations allow you to do. This way they can get an overall posture view for various compliance standards, and this allows them to continuously monitor the security and audit posture. And then, as I mentioned, the framework allows you to integrate third-party policies. The other thing I also wanted to point out here is we really are not tied to a specific policy language. When you see our YAML file, you will see that the specification of the policy for the control is actually wrapped by additional pieces that allow RACM to ship the policy to the managed clusters. But the actual policy itself could be, for example, written in OPA or in any other language. In fact, Yu, who is on the call here with me, has authored a paper that shows how you can integrate OPA policies into this framework. Can I ask one quick question there? Yeah. So just on that topic, we've been discussing different policy frameworks and engines. With the best-practice policies that you would provide, what is the native engine used for that? So it's basically our policy framework. One of the things is we support policies for any Kubernetes objects. As long as you have a spec for a Kubernetes object, we can manage that. So that is one example: we have a configuration policy controller that can manage configuration for various Kubernetes resources. But then we also have policies for certificate management, et cetera.
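The wrapping and the standards/controls/categories annotations Jaya describes can be sketched roughly as below. This is an illustrative example, not the exact product YAML: the API versions, field names, and the policy name follow the open-cluster-management community project as published at the time and may differ across releases.

```yaml
# Hypothetical Policy wrapper: the inner policy spec sits inside
# policy-templates, while the wrapper carries placement and dashboard metadata.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-namespace-prod        # hypothetical name
  namespace: rhacm-policies          # hub namespace, used for RBAC scoping
  annotations:
    # Comma-separated values; the same policy can be mapped to several
    # standards so the dashboard can slice by standard, category, and control.
    policy.open-cluster-management.io/standards: NIST-CSF, HIPAA
    policy.open-cluster-management.io/categories: PR.IP Information Protection
    policy.open-cluster-management.io/controls: PR.IP-1 Baseline Configuration
spec:
  remediationAction: inform          # or "enforce", where the controller supports it
  disabled: false
  policy-templates:
    - objectDefinition:
        # The wrapped policy itself; this could equally wrap an OPA-based or
        # other third-party policy type, since the framework is engine-agnostic.
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-namespace-prod-cfg
        spec:
          remediationAction: inform
          severity: low
```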
So those are defined using our own syntax. Yu, do you want to add to what I just said? Yeah, I want to add that we don't really tie to a single policy engine. The framework we are providing is the governance framework, and it allows us to propagate policies to different managed clusters and then collect status back. In terms of the policy engine, we can include them as part of our policy framework, and then the policy gets applied on the managed cluster, and the engine deployed on that managed cluster will be responsible for executing that policy. So we don't really tie to one single policy engine. Okay. And can these policies act as admission controls or as, you know, sort of runtime policies? Do you support both modes? Currently, our out-of-the-box policies are mostly the runtime ones. Okay. Great. Thank you. Any other questions? Is there kind of a dry run or a static analysis execution mode? So before you apply a policy, can you see what the impact would be? Yes, we do support two modes. One is inform and the other is enforce. Inform is kind of a dry run: it will just tell you what the violation would be once you apply it. Yeah. Okay. I just have a couple more charts and then I'll switch to demo mode. So this summarizes the governance lifecycle. Basically, policies are created, as I mentioned, via three different mechanisms, and they can be propagated to the managed clusters using our placement. One of the things with respect to the placement policy is you can, for example, assign labels to the various managed clusters. So you could designate certain clusters as dev, certain clusters as prod, et cetera. And then for placement, you can either place a policy based on a label or you can place a policy on a specific cluster.
So it's pretty flexible in terms of how you determine which clusters the policies apply to. And then the policy controllers are deployed on the managed clusters. The out-of-the-box policy controllers that we provide are auto-deployed on the managed clusters when a cluster is imported or created using RACM, but other policy controllers can obviously be deployed later. And then, as Yu was mentioning, we have an inform mode and an enforce mode. This allows you to decide how you want to deploy the policies. Obviously, some of the controllers support the enforce mode and others don't. For example, one of the policies we have relates to how many users have cluster-admin access. For that policy, we only support inform mode today, because if the limit is exceeded, the corrective action will have to be taken by the ops person, who has to do some investigation to determine why the excessive access was there and take appropriate remediation. As I mentioned, the policy violations, when you see the demo, are organized by the compliance standards and control categories. And this is very key: other policies can be added without having to make any changes to the framework. So these are examples of some of the out-of-the-box policy templates that we provide. In this chart, I've mapped them to the NIST 800-53 control categories, and you will see some of these when I show the demo. The key thing I wanted to mention here is, though I've listed a few of the Kubernetes resources here, the framework is rich enough that it can pretty much manage policies for any Kubernetes resource. I'm just showing examples here based on the mapping for the particular control categories. One of the things I wanted to highlight is we do have a policy for CIS, but that currently works only for OpenShift 3.11. We are working on enabling it for OpenShift 4.
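The label-based placement described above can be sketched with a pair of resources like the following. This is a sketch based on the open-cluster-management placement APIs; the resource names and the `environment: prod` label are hypothetical.

```yaml
# Hypothetical PlacementRule: selects every managed cluster carrying the
# label environment=prod, rather than naming a specific cluster.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-prod-clusters
  namespace: rhacm-policies
spec:
  clusterSelector:
    matchLabels:
      environment: prod
---
# Hypothetical PlacementBinding: ties a Policy on the hub to the rule above,
# so the policy is propagated to all matching managed clusters.
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-prod-clusters
  namespace: rhacm-policies
placementRef:
  name: placement-prod-clusters
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
  - name: policy-namespace-prod      # the Policy to distribute
    kind: Policy
    apiGroup: policy.open-cluster-management.io
```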
We also integrate with the Container Security Operator for vulnerability scanning, so we have a policy for that. This allows you to detect vulnerabilities on pods running on the managed clusters. And the certificate management policy allows you to specify a time bound: if certificates expire within that window, it flags a violation, so you can keep an eye on certificate expiration. Okay, so the same set of out-of-the-box policy templates are mapped to the PCI control categories here. And this is the last chart. As I mentioned, we have a community project, which is the open-cluster-management project, and basically RACM is derived from technology that's available there. We have not completely open sourced all the pieces; we are in the process of doing so. Some of them are already open sourced and the rest will be coming in our roadmap. As I mentioned, this technology has been derived from an existing product offering within IBM and it got moved over to Red Hat; the move happened this year. So we are slowly open sourcing the various pieces with a goal to fully open source it. In this community project, we have a repo called Enhancements, and this is where we invite contributions from third parties. There is also a repo there on how to write a custom policy controller, which Yu has put together. So we definitely welcome feedback on that as well. Before I switch to demo mode, are there any other questions? And also, Oz and Jacob, do you guys want to add anything at this point? Not at the moment. Okay. So one thing to mention: I know the chart really didn't point out the compliance operator work that Oz and Jacob and team are working on. Part of that project is based on the ComplianceAsCode community project, and what we are working on right now is to integrate RACM with that.
So that basically means that you will be able to come into RACM and author policies, and then have the compliance operator inform or enforce those policies and return back results. The difference between the policies that we have and the policies that we would author for the compliance operator is the two levels that I talked about: the current policies that we have are all for specific controls, and they are operational policies, whereas for the compliance operator, we'll be authoring, think of it as a policy profile, which actually specifies a set of rules for multiple controls. And the policy profile could be, for example, for FISMA or PCI or things like that. So that's the concept we are going to be introducing. This will then allow you to layer the compliance policy profile on top of the operational policies so that you can answer questions such as: is this cluster operating to PCI readiness? Is this cluster operating to HIPAA readiness? I'll take the example of PCI. You'll be able to say, okay, there are 12 controls for PCI. You have governance enabled for maybe five of the 12 controls. And then for each of those five controls, here are the policies you have deployed, and here is the current posture. This way, an enterprise client in the financial industry will be able to tell how their cluster is operating to that particular standard. The other thing I also wanted to point out is the policy framework that we have here, as I mentioned in the first chart, we want applied end to end across the entire hardware and software stack. The idea is that, though Red Hat is going to be delivering policies for Kubernetes clusters, the same framework can be used to bridge, for example, to Ansible and then manage policies for the VM layer as well, and also, you know, middleware layers.
And we have some prototypes that IBM Research has done that show that's possible, again with no changes to our policy framework. So this framework is pretty rich, and the intent is to apply it across the entire stack. Okay, so let me go to demo mode. You all can see the screen of the demo, right? Yes. Okay, so this is our Red Hat Advanced Cluster Management for Kubernetes; we also refer to it as RACM, which is just the acronym. So this is the console. The console actually supports three lifecycles. The first is the cluster lifecycle: you can come in and import clusters, or you can create a cluster. We also support an application lifecycle, where you can come and define applications, which are made up of various pieces like a database, a runtime environment, et cetera, and then you can use placement policy to deploy the various pieces to the various clusters that RACM is managing. And then the third piece is governance and risk, which is what I'm focusing on here. So the governance panel is what I'm showing here. You can see at the very top a summary that is based on standards. As I mentioned, when you first deploy RACM, you actually won't see any policies, because what we provide is a policy framework and a set of out-of-the-box templates, but we don't deploy policies out of the box. What you do is you come in here and then go and create a policy. I'm doing a live demo. So when you create a policy, you come in here and you give a name for it, and then you pick a namespace. Now, the namespace here is the namespace on the RACM hub where the policy is stored. What this means is it allows you to assign different users access to different namespaces based on Kubernetes RBAC, and allows you to segregate policies into different stores, different namespaces.
An example use case of this: if a customer has deployed clusters and they want to share them across multiple departments, and they want different policies deployed for the clusters, then what they can do is create a cluster for department one and another cluster for department two, and then they can import both those clusters into RACM, or they could create both those clusters using RACM. And then they can create a bucket of policies for department one and another bucket for department two, and they could have different users doing that if they choose. Using this namespace on the hub, you can enforce the RBAC. That's what this is doing. This doesn't represent the namespace in which the policy is deployed on the managed cluster; that is actually specified within the policy file itself. Okay. And then you can choose one of the out-of-the-box templates that we support. And this is the specification here. As you can see, we have a policy for certificate expiration, one for CIS, and we have an IAM policy for the limits. We also have a policy for image vulnerability. And then we have a set of policies all based on Kubernetes objects, for which we provide out-of-the-box templates. I'm going to choose the certificate expiration one as an example. So when you select the particular template here, the YAML file automatically gets populated here. You can see that for the certificate management policy, the expiration time is specified here, and you can actually change that. This is where the customization can happen. And this namespace here, this is the namespace that determines which namespaces the policy applies to on the managed cluster. You can provide a list of namespaces that are included and namespaces that are excluded. And then you can see here that since we provide the certificate policy template out of the box, we auto-fill in the standards, categories, and controls for it, based on the NIST cybersecurity framework.
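The certificate expiration template shown in the demo can be sketched along these lines. The field names follow the open-cluster-management CertificatePolicy controller as documented at the time; the policy name, namespaces, and duration are illustrative, and exact fields may vary by release.

```yaml
# Hypothetical CertificatePolicy: flags a violation when any certificate in
# the selected namespaces expires within the configured minimum duration.
apiVersion: policy.open-cluster-management.io/v1
kind: CertificatePolicy
metadata:
  name: certificate-policy-example
spec:
  remediationAction: inform       # inform only; renewing a certificate is a human action
  severity: low
  namespaceSelector:
    include: ["default"]          # namespaces on the managed cluster to check
    exclude: ["kube-*"]           # namespaces to skip
  minimumDuration: 300h           # certificates expiring sooner than this are violations
```

Tightening or relaxing `minimumDuration` is the customization point Jaya mentions: the same template, edited in the YAML editor before creation.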
But you can come in here and add additional standards. Essentially, all the standards are comma-separated, and you can specify standards, controls, and categories. This is how you can take the same policy and apply it to multiple standards. And then you can click here whether you want to enforce the policy, if supported. So this is something you have to check: whether our out-of-the-box template supports enforce or not. If it does, then you can click that here. We also have this button that allows you, if you're still in the process of refining your policy and you're not ready yet to deploy it, to come in here and select this disable button, which means it won't get deployed. The cluster binding is the one that specifies the placement. You can select a particular cluster, which in this case is department-one-prod, a managed cluster that is imported into RACM. So you can select a specific cluster, or you can select a label. You could say, apply this policy to all clusters deployed on Amazon, for example. Or you can say, apply this policy to all OpenShift clusters. This gives you the flexibility in how you specify the binding for a given policy. So once you do this, the policy will show up in this list. What you see here is we have defined a set of policies using that mechanism. And as I mentioned earlier, the UI is just one way to do it; you can also just author the policy as a YAML file and import it using the CLI, or you can use the subscription mechanism as well. So in this case, the certificate expiration policy has a violation. When you come to that particular policy and you click on the violations tab, that's when you see the violation.
And in this case, you see that we have a certificate that expires in less than the time period that was specified, in that namespace. So it shows you that. Another example is the image one; let's go and take a look at that. So the image vulnerability policy, it is this one. If I go and do an edit on it, you can actually see this policy. What we have done here is we have added a policy to manage the Container Security Operator, which is an operator that is available on OperatorHub; you can deploy it on OpenShift clusters to detect image vulnerabilities on the running pods. So what we essentially have done here is define a policy that ensures that that operator is running, and it also detects whether there are any violations. The way it detects violations is that the operator actually creates an image manifest vulnerability object if there is a violation. So we have defined in this YAML that there shouldn't be any such objects existing. If the objects exist, then we know that there is a vulnerability. That's what you're seeing here, where it's basically saying that there is a violation. So when there is a violation, let's see, it should be showing the object name. Okay, maybe Yu can help here. But typically what you see here is a message that shows the object ID. And I'm trying to make sure that shows up and it's not coming up here. But anyway, what you will see is the ID of the object, or, there it is. There always has to be a demo glitch. So what it shows here is this ID. Then what you can do is go to the OpenShift console for the particular managed cluster, and you can go and look for that particular object by searching under its custom resource definition. And then when you go into the details of that object, you can actually look at the violation details.
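The "there shouldn't be any such objects" check described here can be sketched as a `mustnothave` configuration policy. This is an illustrative sketch: the policy name and namespace selection are hypothetical, and the `ImageManifestVuln` API group is assumed from the Container Security Operator as shipped at the time.

```yaml
# Hypothetical ConfigurationPolicy: a violation is reported whenever the
# Container Security Operator has created any ImageManifestVuln object,
# i.e. whenever it found a vulnerability on a running pod.
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: policy-imagemanifestvuln
spec:
  remediationAction: inform       # report only; the vulnerability itself cannot be auto-fixed
  severity: high
  namespaceSelector:
    include: ["default"]
  object-templates:
    - complianceType: mustnothave # any matching object on the managed cluster is a violation
      objectDefinition:
        apiVersion: secscan.quay.redhat.com/v1alpha1
        kind: ImageManifestVuln
```

The violation message then carries the ID of the offending object, which is what lets you drill down to it in the managed cluster's OpenShift console.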
So essentially what we are doing here is: the Container Security Operator is running on the managed cluster; we are using RACM to define a policy that first makes sure it runs, and then, secondly, if that operator detects any vulnerabilities, in which case it will create those custom resource objects of the image manifest vulnerability type, it will show up as a violation on RACM. That will then give you the ID, and you can drill down using the OpenShift console to get the details of the violations. For the compliance operator work that Oz and Jacob and team are working on, we are going to be doing a similar integration with RACM, so that you can define a policy for it using RACM, and then the compliance operator will inform or enforce that policy and return back violation results. I think that sums up what I wanted to show. And you can see that in the summary, or let me go into some of the details here. So this summary can be viewed either by standard or by categories. When you view it by categories, what you're seeing here is the NIST CSF categories: for each category, how many clusters are in violation as well as how many policies are in violation. And you will see that both for the NIST categories and the PCI categories. Or you can view it by standard. And as I mentioned, you can add additional standards here. So if I'm a healthcare customer, maybe I won't be interested in PCI, but instead what you'll see here is HIPAA or FISMA. Awesome. Maybe we can take questions real quick and then... Yep. I was curious, I may have missed it. Did you say that the policy violation results are a custom resource? Or is there some other persisted data? Yeah. So that is really the nice segue, right? The custom resource that I showed you, that is just for the Container Security Operator. They have defined their own custom image vulnerability type to return the results back.
One of the things, the reason we are coming to this forum, is because we want to collaborate with you all to define a more standard way to return compliance results, one that applies not just to one control but to multiple controls, and also have a way to do that so that we can slice and dice by standards, controls, and categories as well. So we don't have a standard definition yet. And I know that this working group has been working on such a definition. So what we really wanted to do was to contribute to that and implement that standard. Yeah, that is a good segue, and definitely we can dive into some details on that. So one other question I had on the demo: when you were going through the policy definition, it had something about "enforce if available," or there was a label like that. So what is that mapped to? And, going back to my earlier question on runtime versus admission control, if some of these are applied as runtime policies, for example, if you're checking for things like resource quotas, et cetera, how would those be enforced at runtime? Yeah, so the "enforce if available": as I mentioned, not every policy can be enforced easily, right? Some can be, and others are a little more complicated. So that's why "if available" basically means whoever is creating and deploying the policies has to check whether this particular policy supports it or not. And if it's supported, then they can enable it. But we don't support it for all the policies today. Okay. And if it's supported, again in the context of runtime definitions, what would that mean to a workload? So let's say I deploy a workload in a cluster and it's not compliant with a policy, what would enforcement mean?
It will actually, depending upon what you specified in the YAML file, reconcile the cluster to what was specified there. So, for example, what would be a good example we could use to illustrate that? Which policy can I use? Let me share again. You can use any configuration policy; just enforce it, and it will create an object. So a namespace is an example. In fact, if I look at this namespace policy: the namespace policy in this case is specifying that there should be a namespace, because it says must have, with the name prod. Right now it is set to inform, but if this were set to enforce, then if the managed cluster did not have a namespace called prod, it would automatically create the namespace. That's just an example; you can essentially specify here any Kubernetes custom resource definition that you want to enforce, and it will reconcile the configuration on the managed cluster to what is specified here. Does that make sense? Somewhat. Okay. So this is more like just applying, this would apply that configuration that you've set up in the policy. Exactly. Okay, I see. And if there are some variations, et cetera, that have to be handled, is there some way to templatize that or specify those variables? I think so. As long as it fits into the specification of a Kubernetes resource object, we'll just create that object on the managed cluster, and then whatever the underlying Kubernetes runtime that consumes that CR enforces, we automatically get. So what we are managing is the configuration to the Kubernetes runtime. Does that make sense? Okay, so we'll just apply this particular configuration that's specified in this policy. Exactly. Exactly. Okay. Right. Yeah, go ahead. We do support multiple verbs.
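A minimal sketch of the namespace policy being described, again assuming the RHACM `ConfigurationPolicy` API shape from memory rather than from the demo itself:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: policy-namespace-prod
spec:
  remediationAction: enforce   # "inform" would only report the violation
  severity: low
  object-templates:
    - complianceType: musthave
      objectDefinition:
        apiVersion: v1
        kind: Namespace
        metadata:
          name: prod
```

With `remediationAction: enforce`, a managed cluster missing the `prod` namespace would have it created automatically; with `inform`, the missing namespace would only be reported as a violation.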
Like here, you see must have, and we also have must only have and must not have. Must have basically makes sure that the object actually created on the managed cluster matches whatever template you specified. So, for example, if the template you specified here is a subset of what's actually on the managed cluster, then it's compliant. Must only have means it should be an exact match, and must not have simply means it should not exist. So those are the three types of verbs we support. Unlike OPA, which actually provides you a language, kind of like a programming language, which is more flexible, we make it a little bit simpler for users, without having to learn how to use Rego. Okay. All right. Thank you. Awesome. Thank you, Yu. Any other questions for us? Just a real quick one. Oftentimes as an auditor, I want to see not only the violations, but the explicit evidence that resources are in compliance. Is there a kind of view or dump from that perspective? Yeah. So right now we don't have the historical view yet, but it's definitely on our roadmap, and I know Yu is actually working on that. The idea there is that the policy controller is essentially checking the policy's compliance periodically, and today it only returns the current state, whatever the point-in-time state was when it did the check. What we are also looking at is maintaining historical views so that you know whether the compliance changed over a period of time, which I know is needed for audits, right? Have you been maintaining this for six months, or whatever? So that is definitely on our roadmap, and we understand it is needed. And the other thing we're also looking at is providing further details, right?
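The three compliance verbs can be sketched side by side as an `object-templates` fragment. The `LimitRange` example is purely hypothetical, chosen just to show a template with a `spec` where exact-match versus subset-match matters:

```yaml
object-templates:
  # musthave: compliant if a live object exists whose fields are a
  # superset of this template
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: prod
  # mustonlyhave: compliant only if the live object matches exactly
  - complianceType: mustonlyhave
    objectDefinition:
      apiVersion: v1
      kind: LimitRange
      metadata:
        name: container-limits
        namespace: prod
      spec:
        limits:
          - type: Container
            default:
              memory: 512Mi
  # mustnothave: compliant only if no matching object exists
  - complianceType: mustnothave
    objectDefinition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: legacy
```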
As opposed to just saying compliant and non-compliant, being able to have some additional details on the policy itself, et cetera, which are all needed for audits. So audit evidence collection is something we will look at in our roadmap as well, but our initial focus is more on ensuring that the clusters are configured properly, following best practices, and providing that visibility view. That's the first step; then we want to get the historical stuff, and thirdly get into more of the audit evidence collection. Just hopping in real quick, because I think it's really relevant to the proposal that Robert presented, which was my main concern with it; Erica pointed it out to me some time ago. Besides wanting to look at more use cases, the only thing that in my opinion is lacking from the proposal, not RHACM, I mean the proposal for the object that's going to represent policy violations, is exactly that. A lot of the time you don't really care about just what failed in a policy; there are more states to it. Usually you have an informational warning or informational message, and oftentimes the policy passed, and so on. The way that we went around this: we have yet another operator that does policies, and maybe in the coming weeks I could explain it, because it's its own separate thing. But the way that we addressed it is by having something called the compliance check result, and that contains both the severity of the check and the status: whether it passed, failed, errored out, was not checked, or was skipped. I don't remember exactly how many states we have, but I could actually present what we have real quick. Let's see if I can share my screen. Desktop, there you go, share, just something super quick. Do you see my screen? Yep, yes we do. Oh yeah, sorry about that.
Gonna remove this one, and there we go. So, `oc get compliancecheckresults`. I was working on this and ran a scan, and it's just going to tell me everything that happened for the cluster. So audit rules are failing in this cluster, some auditd configuration is passing, and so on. Ultimately what I do want is to know what failed, and I can see that effectively. In this case, maybe I want to check this specific rule in YAML format, please. Thank you. And this is basically what we have currently: the severity, the status, what was the suite (we have a concept of compliance suites that run a bunch of checks), so what was the suite that triggered this check. It has a name. What else does it have? Yeah, that's basically it: severity, status, some identification that is going to appear in your compliance document or your audit results, and a description of what was done there. So having something like this would be really useful. We could easily just migrate to whatever you guys have, but ultimately it's something a bit more flexible than just violations. Right. Yes, so there is, not sure if you got a chance to see this, Oz, but there is a new version of the proposal I just shared on the channel, and I also put a link in the agenda. Could you share the agenda, because I didn't get that link somehow? Yeah, I can, and I'll share the link again too, just to quickly show what... So I started updating the other document, but then figured it was just easier to write a new one, because there were too many different comments and changes to manage in the Google doc. This proposal takes into account some of the main comments we got: in addition to what you mentioned, providing a way to capture not just the violations but also the results in terms of which policies were applied, at what time, things like that, and providing some flexibility in aggregating these results.
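For readers who could not see the shared screen, a `ComplianceCheckResult` from the OpenShift compliance operator looks roughly like the following. The exact rule name, suite name, and `id` are hypothetical placeholders, and the field layout is reconstructed from memory of the `compliance.openshift.io/v1alpha1` API rather than from the demo output:

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceCheckResult
metadata:
  name: workers-scan-audit-rules-dac-modification-chmod
  labels:
    # the suite that triggered this check
    compliance.openshift.io/suite: example-compliancesuite
# identifier that appears in the compliance document / audit results
id: xccdf_org.ssgproject.content_rule_audit_rules_dac_modification_chmod
severity: medium
status: FAIL        # other states include PASS, INFO, ERROR, NOT-APPLICABLE
description: >-
  Record events that modify the system's discretionary access
  controls (chmod).
```

This is the "more states than just violations" point: severity, a pass/fail/error/skipped status, an audit identifier, and a human-readable description all travel with the result.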
I think there was another comment from someone else at Red Hat, right, Erica, on aggregating these for, like, nodes. So this new proposal allows aggregating results at different levels, and also provides more flexibility in terms of other custom data that we might want. Just to quickly show an example: I was trying to map it to, let's say you want to run the Kubernetes CIS benchmark. What's interesting in this case is that it doesn't relate exactly to a Kubernetes object or resource; it relates more to components in the control plane and the worker nodes, and so on. The idea here would be to keep this reporting mechanism flexible enough to cover that. I also took another example, more for a workload, from Kyverno, where we're just reporting failures for pod security. Again, we're just showing a failure here, but the idea is that you could also add success results, like which checks actually passed. And for each, you would have a pass, fail, warn, or info status, and we can customize the categories. So anyway, I know we're coming up on time, but it would be great if you want to take a look, and we can exchange ideas either on Slack or, if you want, set up a separate meeting before our next bi-weekly meeting. Happy to do that, because I think this would be good to hammer out and come up with something which is flexible and reusable across different frameworks. All right, that sounds a bit more like what we were looking for. So I'm going to take a look at that and comment as soon as possible. Thank you so much. As we're running low on time, and since we do have a lot to discuss: we used to meet weekly, and then we moved to every two weeks. Should we move back to weekly while we have lots of agenda items, since we kind of ran out of time today? Thoughts? Or is every two weeks okay? Are we not that rushed?
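The pass/fail/warn/info shape being discussed might look something like the sketch below. The proposal was still a draft at the time of this meeting, so the group/version, kind, and field names here are my guess at the early Policy WG report drafts, not the finalized CRD, and the policy and pod names are invented for illustration:

```yaml
apiVersion: wgpolicyk8s.io/v1alpha1
kind: PolicyReport
metadata:
  name: policy-report-pod-security
  namespace: default
results:
  - policy: require-pod-resources     # hypothetical policy name
    rule: validate-resources
    status: Fail                      # also: Pass, Warn, Error, Skip
    severity: medium
    category: Pod Security
    message: "CPU and memory resource requests and limits are required"
    resources:
      - apiVersion: v1
        kind: Pod
        name: nginx
        namespace: default
summary:
  pass: 10
  fail: 1
  warn: 0
```

Success results (`Pass`) sit alongside failures in the same `results` list, and the `summary` block supports the aggregation-by-level comment mentioned above.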
So I think if we can just get comments and make progress even outside of these meetings, that may be better. Every two weeks is a good rhythm to aggregate those. I think that sounds great. Yes, I posted the link here and, again, in the Slack channel; I'll also add the link again, and the agenda for the meeting has it. So definitely let's discuss more, and then maybe by the time we have the next bi-weekly meeting, we'll have a more or less finalized version and can even draft up a CRD for it. Sounds great. Jaya, could you add links to your project in the agenda notes? And if the slides are public, that'd be great too. I linked the agenda in the group chat. Okay, did you? Yeah, there I see it. I'll do that. Thank you. All right, I'm super excited to see the, do we change the name now? It's policy report, right, because it's no longer just violations. And if there are suggestions for better names, of course, we could consider those, although it's policy, so we need to make it sound very bland. Bureaucratic. No, this is great; I'm really excited. Again, glad to get hooked up with this work group, and I think the work is very timely, because, like I said, we are trying to integrate this compliance operator and RHACM together, and this is something we have to do. So it'll be great to make it a standard, so we're not doing something proprietary. It's great. Absolutely. Sounds good. Thank you for the presentation. That was awesome. Thank you very much. Thank you, Jim. Thanks, everyone. See you in two weeks. Bye. Bye.