Hello. Hi Anka, this is Jim. Hi Jim. We have a fire drill, a fire alarm in our building, so I'll just be on mute and go see what's happening. No worries. Okay, so I'll be on the call until I see smoke or fire. Okay. Yeah, hopefully you don't, and it looks like... No, I think there are people moving in today and I think they hit something. Okay. All right. Yeah, looks like it's just us at the moment; not sure if Jaya or others are able to make it. Oh, Jaya is out of the office, so she will not be joining this session. Okay. All right. Yeah, and I know there were some questions, and I think you responded to the team on GitHub as well. And I took a quick look at OSCAL and some of the things around it. So, obviously, from your example too, it certainly seems possible to use at least one section of OSCAL, right? The spec seems fairly comprehensive, but the section that we were most interested in happens to be the assessment results part, right? Yeah, we can start with that. I think the focus right now was on the results standardization. Correct. So that's why we presented only that. We are also working on the profile, on the mapping for a product or service: the mapping from their components to the controls and dependencies, and the inventory formatting as well. So all those are available. When we discussed with Jaya and with the Red Hat management, the requirement was to come up with a phased approach. We made the mistake in January, with a different team, of bringing the whole framework at once, and they just ran away in fear. There's a lot over there, for sure.
And the other concern is, of course, somebody looking at this in Kubernetes. Consuming this in machines and tools is all possible, but outputting this in some format like YAML, where the configurations or the report results are easy to read: I think that's something we'll have to look at more examples of and see how that works. OSCAL comes in all three formats, YAML, JSON, and XML, with translations between them, so from whichever one we have, we can get the others as well. Right. Yeah. So of course, if we have JSON, we should be able to output it as YAML. But what I mean is just in terms of the structure of the object model, right? If somebody is just using a CLI tool, printing this out, and trying to read through it, is it understandable what the results are? Some of the things we had in our policy report, that structure we were trying to define, were things like totals, again, just to make it easy. Then there were things like categories and severity. So I don't know if we'll have to go through and see how all of that maps. So the severity right now is not included in that subset. If you go into the result group, you will see that we basically use the findings, and if you go down, down, down: okay, observations. This is what we use, and the subject reference points to the inventory evidence group. But if you go even further down, you will see that there is threat and risk. As part of risk, you have risk metrics; that's where we typically put the score. But this is not part of the subset, because risk is yet another slice of compliance. I'm trying to remember whether we added the remediations to what I shared. That's why we take it piecemeal: we can do observations with evidence without having risk and remediations.
But if they are relevant and the team is mature enough to move in that direction and already has the logic to use that, yeah, they are part of the schema. That's the beauty: we can expand as needed. Right. So maybe you've had a chance to look at some of these other samples, like what we were looking at before. Yeah, I looked at the one with the summary. For us, the summary is a CLI call that we do on top of an assessment result; it's not part of the schema itself. Okay. Yeah. So this would be just a very simple way of summarizing the results of some type of grouping of audits or policies, and then there are some details on each, right? But it's really just to indicate pass or fail, and then perhaps give links to others. So as long as there's a way to map all of this and capture this, then of course, going with a standard which is already defined has a lot of value. Yeah, we discussed last time that having a summary is kind of difficult because it depends on the context. If I have a partial result, what does it mean that I have three passes and three fails, and then I move to a more aggregated view with others and get a different summary? But now I actually see how it's used here, because this is an example for one single policy. And I think this can be added, because the observation is atomic. So we can get a summary that says: this aggregated result for this control, across whatever is in the observations associated with that control evaluation, comes from eight passes and two fails, zero warnings, zero errors, and that's why it's a fail. The ten rules that I aggregate give me a fail because I have these eight passes and two fails.
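A sketch of that aggregation logic, with hypothetical field names, assuming each atomic observation carries a pass/fail/warn/error status, might look like this:

```python
from collections import Counter

def summarize(observations):
    """Aggregate atomic observation statuses into a control-level summary.

    Each observation is assumed to carry a "status" of pass/fail/warn/error;
    the control fails if any underlying rule fails or errors.
    """
    counts = Counter(obs["status"] for obs in observations)
    overall = "fail" if counts["fail"] or counts["error"] else "pass"
    return {"pass": counts["pass"], "fail": counts["fail"],
            "warn": counts["warn"], "error": counts["error"],
            "status": overall}

# The example from the discussion: ten rules, eight passes and two fails.
obs = [{"status": "pass"}] * 8 + [{"status": "fail"}] * 2
print(summarize(obs))
# → {'pass': 8, 'fail': 2, 'warn': 0, 'error': 0, 'status': 'fail'}
```

Because the observations are atomic, this summary can be recomputed at any aggregation level without changing the underlying data.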
I'll look at how I can add this as part of the properties, because that can be done. And the observation being atomic means it is not something that changes whether I aggregate or not; the grouping that generated it will remain the same, right? Correct. It makes sense. Correct. So this is like a capture of a point-in-time report for a particular set of policies on a particular set of resources. In Kubernetes, you're thinking of this mostly as namespace-scoped, right? So if I have a namespace and I'm applying, let's say, my pod security policies, I want a summary result for how many pass and how many fail. And then I have some details to go figure out which rules failed, things like that. Okay. So if we are looking at the namespace, this means our approach here is from what we call subject resources: from an inventory point of view, these are my namespaces. What OSCAL does is approach this from a compliance point of view. The display here, you see, is by control. So when I have the summary I was talking about, that AC-3 fails because I have eight passes and two fails, it would be at the AC-3 level. What you are looking at is the summary from an inventory point of view. So again, it depends on which persona the working group is targeting with the result. If we target operators, of course, they could not care less about the AC-3 summary; they are looking at namespaces. So maybe we need to clarify what the personas are. Yes. So the two personas we have talked about are the namespace one, typically an application owner who might be the namespace admin, and then there's the cluster admin, the operator, like you mentioned, right?
So those are the two: somebody looking at things cluster-wide, and then maybe a sub-admin or somebody who cares only about that namespace and wants a summary of: okay, what are all the findings? What are the problems or issues I need to fix in my workload to be compliant? But will they be interested in looking from an operational... Oh my God, the fire engine arrived. Oh, okay. The alarm stopped, which tells me that it was okay. So maybe they're just checking. Yeah. So what I was saying: from an operational point of view, I think we don't need OSCAL, right? We don't need the compliance approach if they are looking at what is failing in their environment, because it will not be only failures from the compliance controls point of view; it will be all their failures. There are other aspects then. So would it make sense for a compliance operator to present the information from an inventory and namespaces point of view? It would be misleading, right, to say: okay, this is what's failing in your namespace. Rather, I would like the operational operator to present those, and here to really focus on the compliance aspects, with a policy approach or control approach. Okay. So if I understood correctly, with OSCAL you're going through each control or each policy and saying which workloads, let's take pods as an example, across my cluster are compliant and which may not be. But then is it left up to some external management system to say, okay, if I want to narrow that down into a subset, like pods within a namespace, they have to go filter through the results and figure that out? Or how would that be done? Exactly.
So again, it's a question of our goal and how I traverse this JSON to extract what I need. By default, OSCAL organizes the result per regulation and per regulation controls, rather than per inventory, meaning namespaces or clusters or things like that. We have another schema, the system security plan, which is the one that includes the scope, the inventory, the subject references for which the assessment is done. So that can be the format for the inventory, and then we can extract the summaries from the assessment at that level. There are different ways to slice and dice here, but the schema itself will natively provide the information per control posture. Hi, Gus. Thank you for joining. Hey, Gus. Sorry, I'm late. If you would like some introductions: my name is Anka Seiler, I'm in IBM Research, and I was introduced to this working group by Jaya. I'm working with her on the standardization of the results for ACM in Red Hat. I'm not sure if you've seen the recording from last time; we introduced the recommendation for result standardization based on a subset of the OSCAL assessment results. Right. Yeah, that's a good intro. Yes, I work with Jaya. You are in IBM as well? Or Red Hat? Red Hat. Okay. So yeah, I'm a little familiar with the sample. I took a quick look at it and saw it was big. It's big, but nothing mind-blowing; it's really pretty straightforward. Okay. Yeah, if you look at it in this format, I think it's difficult, but if you have a JSON editor, you would see there are four parts: there are the properties, there is the evidence... Yeah, exactly. Yes. Okay. Right. So findings are by control. In each finding, I have one objective status, and in the objective status you see the control AC-3 with its aggregated status.
And then in the observations, I have all the rules, goals, CIS benchmarks, whatever this control depends on for its status. What are the rules that map to the description of AC-3? What are the rules that implement the policies that implement AC-3? So in the observations, which is the last object in this item, you will see all of them. Because this is a result for the compliance operator, these are all CIS Kubernetes benchmark checks, provided via OpenSCAP in XCCDF format, and we present them here in the OSCAL format. So you see the observations, and they have properties, evidence, subject references (that's inventory, meaning which VM, cluster, or region I'm getting this rule for), and the observation method, whether it's automatic or manual. So it's a very simple structure. Okay. So this is the current sample format of the response, or the results, that you get with OSCAL, I guess. Yeah. And prior to you joining, we were discussing with Jim that OSCAL, of course, allows for additional aspects besides the evidence. These are the basics, right? You need to know what subject I applied this assessment to, what evidence I got back, and what the properties and annotations are: the test ID, the time of day, whatever XCCDF may have as its own properties. If we use some other assessment tools, they have other properties. And then we aggregate all these observations to generate, per control, its aggregated status. But there are other aspects we can add to that, like remediations. In our tools, for instance, we use remediations to provide the tickets that were opened when this failure occurred, and the scripts' paths in Git or other systems that I need to run manually or automatically. And the other aspect would be risk.
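As a rough illustration of the structure just described, not the exact OSCAL schema but the shape discussed (properties, assessment method, subject references as inventory, and evidence), a single observation might carry something like:

```python
import json

# Illustrative only: field names are simplified from the discussion of the
# OSCAL assessment-results observation, not copied from the official schema.
observation = {
    "description": "Ensure that the --basic-auth-file argument is not set",
    "props": [                         # properties/annotations from the checker
        {"name": "rule-id", "value": "xccdf-rule-placeholder"},  # hypothetical id
        {"name": "collected", "value": "2020-06-01T00:00:00Z"},
    ],
    "methods": ["AUTOMATED"],          # automatic vs. manual assessment
    "subjects": [                      # inventory: which VM/cluster/region
        {"type": "resource", "title": "cluster-1/kube-apiserver"}
    ],
    "relevant-evidence": [             # what the tool actually returned
        {"description": "OpenSCAP XCCDF result",
         "href": "https://evidence.example.com/obs/1"}   # placeholder link
    ],
}
print(json.dumps(observation, indent=2))
```

The aggregated control status in a finding would then be computed over a list of such observations.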
So, with the score, and depending on the maturity of the tool, this can be adjusted to provide a result that is more or less complex. Yeah, that's great. That's exactly the type of thing that I think could be captured in the policy report. Yeah, Jaya was pretty happy with that; she's on board. This is why she brought me into this group. The recording from the last meeting two weeks ago goes through one hour of details across all the fields and objects here. Yeah, so this certainly seems very comprehensive, and I think it can cover pretty much everything we want; and if it's extensible with properties, etc., we can even model other fields. The one thing we were discussing, which is an interesting point, is who consumes it. It's obviously a compliance report that could be sliced and diced in many ways and consumed by different folks. Where we started with the policy report was that we wanted at least a namespace owner, or a workload owner, and the cluster admin to easily view the output of various policy engines, right? So the question is: in YAML format, would this be overwhelming, or would it be simple enough to understand? And secondly, would this be presented at a cluster-wide scope? And then, if we want to present something for a namespace owner inside the cluster, like as a CR, is that something else? Is that a subset which takes from this information and creates a simplified report for somebody running an application, to say: here are the compliance issues you need to fix in your workload with pod security, and things like that, right?
Because where we were previously, just going back and browsing the samples, was something like this: let's say this is a CIS benchmark, we're saying there are two failures, and we're just giving some summary information. But of course, as we kept working with this, there were more and more things that folks wanted to add, right? So I'm sure over time we would end up with something at perhaps the same level as what's in the OSCAL definition already. So it does make sense to adopt that as much as possible. But how do we reconcile the two views? I think what I was just discussing with Anka, and what you clarified, Anka, was that OSCAL is more for the compliance admin, or somebody looking at compliance: it gives their view for each control and what happened for each resource in that cluster. Yeah, a compliance engineer or a compliance officer, I think more engineer because it's really down in the operational details, rather than an operator, who would want to see the aggregation only at the control level, like AC-3 here. I think what we can do is use this data model to generate and store, and then have additional functionality, CLIs, that represent the capabilities for the various other personas. So, like you said: give me the status per namespace, or per cluster. And that would be a reshuffling of the data, where the subject reference becomes the root and the controls become the leaves associated with their status underneath. So once the data is in, and being JSON, it can be reformatted as we need. That would be, for me, additional capability as we have other personas. Right.
So then you're saying a CLI, or some tool, can take the raw data, format it, and output it in whatever report format. It will be totally transparent, because I use the CLI to get my assessment results per control, per namespace, and so on; we don't know how it is stored behind the scenes. Yes. So is there, with any OSCAL implementation... well, right now there's nothing for Kubernetes. By the way, I saw there was somebody who had done some implementation for Docker. Have you seen that before, for the container engine? No, where? Meaning that they express the Docker CIS benchmarks using OSCAL? Yes. I wouldn't be sure. What they've done is for the OpenShift compliance operator, right, which is Kubernetes CIS benchmarking. Yeah. So I don't think Andrew is at Docker anymore, at least when I was checking online, and this is about two years old now, but there was some attempt at taking Docker output and converting it into OSCAL. And there's also a Go library, which I think was also developed by Andrew, for managing OSCAL formats. So I'm not sure whether Andrew is still at Docker or somewhere else, but it would be interesting to see if we have some touch points and whether there's any other ongoing activity with this, right? There is a Git repo available for that, so we can see if they are still active. So I saw this, and yeah, it's amazing. Nine months ago, okay. So I was calculating... oh, three months ago. Okay. So there is some CLI, plus some conversion and other things it does. And I think this is all written in Golang, it seems. Right.
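The reshuffling Anka describes, pivoting control-rooted findings into a subject-rooted (per-namespace) view, is straightforward once the data is JSON. A minimal sketch with made-up field names:

```python
from collections import defaultdict

def by_subject(findings):
    """Pivot control-rooted findings into a subject-rooted view.

    `findings` is assumed to be a flat list of {control, subject, status}
    records; the result maps each subject (e.g. a namespace) to the
    controls that touch it and their statuses.
    """
    view = defaultdict(dict)
    for f in findings:
        view[f["subject"]][f["control"]] = f["status"]
    return dict(view)

findings = [
    {"control": "AC-3", "subject": "ns/payments", "status": "fail"},
    {"control": "AC-3", "subject": "ns/frontend", "status": "pass"},
    {"control": "AC-6", "subject": "ns/payments", "status": "pass"},
]
print(by_subject(findings))
# → {'ns/payments': {'AC-3': 'fail', 'AC-6': 'pass'}, 'ns/frontend': {'AC-3': 'pass'}}
```

The stored data never changes; only the CLI's presentation does, which is the transparency being described.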
So, yeah, maybe it's worth following up and reaching out to see if there's any activity. Yeah, can we go into the CLI to see what they have? The second folder from the top, I think: convert, generate. Okay. So OpenControl is the original format for compliance automation, before we had OSCAL. It's a subset of OSCAL; I would say it covers maybe 10 to 15% of OSCAL. Because people were really looking into continuous compliance automation and generating documentation for FedRAMP and for auditors like PwC, they used OpenControl to automatically generate that. And now there are converters between OpenControl and OSCAL. So he has been into compliance schemas from the very beginning, with OpenControl. Okay. Yeah. And I see this gentleman who has also committed to the repo; it looks like he's from Red Hat. So that was the latest commit here. Right. So indeed, in February we had a major change to the schema. We worked closely with David and the OSCAL team at NIST, and we gave our feedback that there was too much nesting, too many levels that were really not necessary, and it made our NoSQL storage kind of explode. So they reduced many levels of nesting; it was a major change in February. And I see they keep it up to date; they updated. Yeah, looks like this person from Red Hat was the latest committer and probably has the highest number of recent commits, right? So is it possible to reach out and see if there's interest in somehow working with this, or mapping this to Kubernetes and what we want to do? Gus, does the name mean anything to you? It's not someone I'm familiar with. I can look around to see if I can find a way to contact them. Yeah, it would be good if they're open to coming into one of our meetings and discussing what their thoughts were. Is this actively maintained? Can we leverage it to achieve some of what we want to do?
And I'm assuming what we're talking about is: is there a way in Kubernetes to generate, or at least create, a custom resource which would be compliant with the OSCAL format, which different policy engines running inside Kubernetes could then create? Yeah. One question and concern which had come up in the past is how much data we would store inside the cluster. At least our thinking was that we would store just the current information inside the cluster; any history, et cetera, can be managed externally. But then it comes down to: okay, what can we do with the data? Do we create other resources from this main report, or have other command-line tools, et cetera, to output different formats? Storage is an important one. So that was CLIs, okay. So yes, the storage: it depends on how it is stored. We store it with versioning, so we keep only the delta, but it can still explode if there is a large cluster and so on. The way we manage it is that we declare a TTL in our component definition. That is the format that allows a vendor to describe how their products and services map to controls, with their properties. One of the properties we leverage there is the TTL. Then, across, let's say, an inventory, we find the lowest TTL, so that we make sure we collect the data before it disappears. We do not expect Kubernetes to keep it, because we have regulations that require keeping the information for five years, right? This way we decouple the expectations and requirements on the infrastructure itself from the regulations' expectations. So you can leave it on Kubernetes for as long as you want, and we expect it's declared in the component definition that it's only there 24 hours, and then we know we have to harvest it daily. Right. Yeah, that makes sense.
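The harvest-before-expiry rule Anka outlines can be sketched like this; the component-definition shape and the `ttl-hours` property name are assumptions for illustration:

```python
def harvest_interval(component_definitions, default_ttl_hours=24):
    """Return how often (in hours) results must be harvested off-cluster.

    Takes the lowest TTL declared across all component definitions in the
    inventory, so data is collected before any component's results expire
    from the cluster; long-term retention (e.g. five years for regulators)
    then happens in the external system.
    """
    ttls = [
        int(prop["value"])
        for comp in component_definitions
        for prop in comp.get("props", [])
        if prop.get("name") == "ttl-hours"   # assumed property name
    ]
    return min(ttls) if ttls else default_ttl_hours

comps = [
    {"name": "compliance-operator", "props": [{"name": "ttl-hours", "value": "24"}]},
    {"name": "image-scanner", "props": [{"name": "ttl-hours", "value": "12"}]},
]
print(harvest_interval(comps))  # → 12
```

This is the decoupling point: the cluster only promises a TTL, and the harvester derives its schedule from the declared TTLs rather than from the regulation.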
Because we wouldn't expect the cluster to keep it. So if the cluster just has the current copy and some well-defined TTL, like you're saying, then some upper-level management system can do the rest. Right. So then even if it's a large cluster and so on, we know it's limited in time, so we don't keep... Right. And again, of all of these definitions, and I can't find where it is right now, but somewhere in here there are the assessment results; that's what we would store inside the cluster. You cannot see it because you are at a lower level here. On the left side, go above and you'll see the assessment results layer. Oh, there it is. Okay. Above. This is just the assessment layer. The assessment layer is your tuning for what you expect. Yeah. So that's the... Let's see if we go up. I thought there was a picture which showed all of the blocks: root, metadata, body. All right. You have to go down the page. Yeah, this one. Yes. All right. So we see assessment. I see. So that's the only layer where the results are really what we're concerned with. At the moment, everything else is managed externally, or by the policy engine or whatever tool is being used. All right. Because right now we are looking at a single pane of glass to show the compliance posture, so we expect that the assessment results are aligned. If we want to move into having the inventory presented in a unified format and consumed from those various layers, with their inventory discovery rather than discovering it at the top level, then aligning the SSP, shown in green, will allow us to consume that. So again, it depends on the maturity of the framework that we implement. Right. Yeah. At least our current focus is on the assessment layer.
And of course, Kubernetes has its own schema and ways of defining resources, so we don't necessarily need to re-represent all of that inside the cluster. But if there's a way, then the idea becomes: can we map this into a Kubernetes resource? So currently, when you're generating this for clusters, is it being mapped to a custom resource inside the cluster, or is it just generating some JSON or XML which is consumed externally? Can you rephrase the question? From the work you're doing at the moment, when you're applying this to Kubernetes clusters, has anyone on your team looked at creating a custom resource definition in Kubernetes? Oh yes. We have a team that is looking at exactly that: creating the custom resource where we generate, from the compliance operator model, the assessment result in this format. That's exactly the goal. Okay. But this is part of the offerings of IBM Cloud, ROKS and IKS. We are looking now to converge with Jaya's ACM, and there is another lady there, Christine Newcomer, who is responsible for it; you know better, right? Christine is working with the compliance operator. So we want to align those efforts so that we have the same format, but custom resources are definitely part of the roadmap. Okay. But that is not something that is open source yet? Not yet. All right. So I think maybe what makes sense, just so that we frame our thoughts, and I'm happy to outline this... I'm going to interrupt: the compliance operator is already open source and it has custom resources for what it does, right? It's just that its output is not aligned with the OSCAL assessment results. So one effort is to align it with the assessment results. There are other efforts around custom resources to also align the inventory and other aspects.
But the only CRs we have today with respect to compliance are open source, and they are not OSCAL. All of this is roadmap. Okay. Yeah. So, thinking of what we can do in this working group: right now we have this proposal for a policy report, which is a fairly simple custom resource in Kubernetes. And like we talked about, the focus really was on the cluster admin, as well as the workload operators or workload admins who operate at a namespace level. So to clarify how we can converge or move forward, maybe it's worth drafting a simple design proposal, right? Saying: look, if we want to adopt this OSCAL assessment results layer, what would be the pros and cons, and how would it work? What I gathered from our conversation today is that the way it would work is there would be one OSCAL-formatted output at the cluster level, and from that we could have command-line tools or other tools which extract and report the information they want. So if a namespace admin wants to say, okay, give me the report for this namespace, they would have some way of generating that. Or maybe we have an operator or a controller running in Kubernetes which generates some subset and stores it inside a namespace, right? So there'd be some duplication, but maybe that could still be done, because then, for access control and security, a namespace owner doesn't have to be able to read the full report; they're just seeing some subset of it, formatted in a simpler way. Can I ask a question? Yeah, of course. I want to make sure, because we talked a lot about the summary and the status and so on, which is the control posture. Together with the schema here, we also have the evidence.
So when we talk about a policy result, is this team having in mind both aspects, posture and evidence, keeping in mind what we discussed earlier, that the evidence doesn't need to be stored there for five years? Or is the focus strictly on posture? So I don't fully understand the difference between the two. So maybe... The posture is pass or fail, without the drill-down into the evidence, the messages associated with it, and so on. If you look in the OSCAL format, I have the control status, which is pass or fail, but below it I have the evidence group, which gives the details, right? So I wanted to make sure we understand the difference, and whether the Kubernetes team is looking to have both or not. So what we had discussed at one point, and again, there was some concern about how much data we store and what we need: every policy engine or every scanner may have its own details, right? So the intent was to store more of the posture, in your terminology, and then have some link to go and get the evidence details; that could link to some website or some other tool or management system which has more details. But really, the problem we started out trying to solve is that there's a growing number of policy engines, whether image scanners, configuration scanners, admission controllers, or runtime security tools, and all of these currently produce outputs in different formats. So for a cluster admin, it's like: okay, I have 80 different things to look at to understand what's going on. The intent of what we were trying to do is to standardize that format as the policy report.
So even if we just have some high-level summary to say: hey, on CIS benchmarks for Kubernetes you got an A plus, but on pod security you got a B minus, and here's the namespace that's violating it. Something like that. And then, of course, you would have to dive into individual tools to get more details. But starting with that summary is what we were trying to solve. Mm-hmm. So let's say my policy validation does not complete. I do not have a pass or a failure, but an error, because I cannot connect. And I'll have a message associated with that, which will be part of what I call the evidence. So you don't see this as part of my result (pass, fail, error); it should be a link to some other repo. So we did have a message field, right? But the intent was that this would be a brief message with pointers to more details. But yes, it's a little bit fuzzy. So the way the structure was: there's a policy report, and then there's a summary section, which just gives you a score; you can convert that into a grade or some simple way of knowing. A policy report could be at different scopes: the scope could be a namespace, the entire cluster, or just one deployment. It's flexible, right? Yes. But then each result could also point to a rule (the rule, or the control in the OSCAL terminology). And then it would have a message; a status, which would be pass, fail, warn, error, or skip, those five values I think; whether it's scored or not; and additional data, which is just some free-form data. Yeah, what we call properties in OSCAL. Right. So that was the result structure, and then you just have a list of these results. And that's kind of where we were, right? So, yeah. So I think this introduces an additional challenge, right?
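A sketch of the policy-report structure just described (a scope, a computed summary, and a flat list of results), using illustrative field names from the discussion rather than a finalized CRD:

```python
# Illustrative shape of the proposed policy report custom resource;
# field names follow the discussion, not an official schema.
VALID_STATUSES = {"pass", "fail", "warn", "error", "skip"}

def make_report(scope, results):
    """Build a report with a summary computed over the result statuses."""
    assert all(r["status"] in VALID_STATUSES for r in results)
    summary = {s: sum(1 for r in results if r["status"] == s)
               for s in sorted(VALID_STATUSES)}
    return {"scope": scope, "summary": summary, "results": results}

report = make_report(
    scope={"kind": "Namespace", "name": "payments"},   # could also be cluster-wide
    results=[
        {"rule": "require-run-as-non-root",   # "rule" ~ OSCAL "control"
         "message": "pod 'billing' must set runAsNonRoot",
         "status": "fail", "scored": True,
         "data": {"severity": "high"}},       # free-form data ~ OSCAL properties
        {"rule": "require-resource-limits",
         "message": "limits present",
         "status": "pass", "scored": True, "data": {}},
    ],
)
print(report["summary"])
# → {'error': 0, 'fail': 1, 'pass': 1, 'skip': 0, 'warn': 0}
```

The summary here is derived, not stored independently, which mirrors the earlier point that aggregation can always be recomputed from the atomic results.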
So now, I understand storing it locally and using it locally. But if I use that in an exchange protocol with some other tools, whether they are governance tools, UI tools for display, or aggregators, the question is: now I have to manage access not only to my environment, but also to that repo where the evidence link in this format is provided, right? So the more references we bring into it, the more it complicates access. Yes, I think there is a tension between being complete and being concise. Right. And it's a good question where that balance, or that line, should be. I'm wondering if it's not actually an advantage, because that evidence is the critical, sensitive data. So I may need an altogether different level of access than what I need to get the posture. Okay. For pass/fail, I can work with whatever credentials I need to view my cluster, but this separation will enforce a level of security on the evidence, which is my sensitive data. So maybe we turn that into an advantage and make it more compliant. You see what I mean, Jim? This would force people to have a different access model for data that is more sensitive. Right. So is the evidence more like how the check was performed and details of what was scanned, things like that? No, the evidence is the actual message, the actual result, the actual API reply from the policy check. It's the actual evidence. Evidence may sometimes even contain PII, depending on what is run there. I like your suggestion very much, to have the evidence by link. It doesn't change the structure of OSCAL; it just means that for my evidence I'll use an href rather than having the whole information there. Right.
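One way the evidence-by-reference idea could look in a single result entry. The `evidenceRef` property name and the URL are invented here purely for illustration; they are not part of any existing schema:

```yaml
results:
  - rule: ensure-basic-auth-file-not-set
    result: fail
    # No sensitive message inline; the evidence lives in a store with its
    # own, stricter access control. "evidenceRef" is a hypothetical name.
    properties:
      evidenceRef: https://evidence.example.com/findings/1234
```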
But let's say I'm managing or deploying some workload, and I have some pods that violate some security policies. So I'm seeing my results telling me that, but with the message here, at least give me some information that this pod requires, let's say, runAsNonRoot to be set to true. Would that be considered evidence? So the evidence would be, let's say these are all CIS benchmarks, and they are generated using the Compliance Operator, which is built on top of OpenSCAP. So the means by which I do the assessment is by running OpenSCAP, which has the logic to check all those CIS benchmarks. So what I get back is that the basic authentication file argument is not set, right? This is the message I get back, and this is the evidence, while the status is pass/fail. Now, having pass/fail, or error, is one thing. The message is something else: I wasn't able to reach it, or I don't have access, or whatever it is. It can be that that message is very sensitive for some people. Maybe I don't want this to be in a place where the operators can see it and make it public, because this may be a breach that can be immediately leveraged to do some damage in my environment. Yes, you're right. If somebody gets access to the cluster, you're telling them exactly what to go after. Exactly. That's my point. So I think I like passing it by link. Instead of the message here, it would say: the evidence is at this href, in a repo with a different level of access, so that only the people I control will get access to that data. But again, taking the persona of a workload operator or user, I want to know what to fix. If I'm treating this as things I have to fix to be compliant, what is the easiest way to do that? And again, this could be combined through a CLI tool.
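For the runAsNonRoot example above, the remediation the workload operator would apply is a standard pod `securityContext` setting. The pod and image names are invented for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  securityContext:
    runAsNonRoot: true        # the setting the policy result asks for
  containers:
    - name: app
      image: example/app:latest   # illustrative image name
```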
So the CLI tool could go fetch that message and that information. Which is perfect: it means that if that person has to do remediation based on it, they will be approved to have that access. What I'm trying to say is that this brings into the picture the minimum level of access, which is one of the controls that everybody wants to have. This will support implementing that minimum level of access. Let's say I have 10 repos: one for network, one for DevSec, one for IKS, one for storage, and they all have different levels of access. So I'm not going and poking around in the failures on database policies, because I'm only responsible for network; I get access only to that. Okay. Yeah, and that would help keep at least this portion more concise, because otherwise you're repeating the same message. And it keeps it more concise. So what started as an inconvenience is actually, I think, a good thing. Yeah. So I hear you hesitating, Jim. What is on your mind? Yeah, I'm just thinking about the user experience. If there's some CLI tool or something that can combine that and present it in a manner that's actionable, then it's fine. Of course, the nice thing about having the message here is that I can just do kubectl get, and I see everything and I know what to do. So that's the trade-off. I think we'd need to think it through, but it definitely has some interesting pros and cons. I can see a CLI where I get my assessment result, and then I say: get my evidence for this assessment result with these credentials. And the CLI is intelligent enough to apply my credentials across everything, so my evidence comes back in this assessment result only for the items I have access to.
The other ones will say "no access granted" or something like that. So if I'm a network person, I'll see the evidence for all the network policies, but not the ones for databases. Right. So this was your concern: we will definitely need the capabilities to allow the operator to complete their job, whether that's remediation or an exception, whatever they need to do. Right. But now it would not be one pass; it would be two passes. Sure. Yeah. And certainly, if there are tools to combine that, and if somebody wants to output everything into one report, that can be done. And yeah, the security benefits are definitely interesting, as is the separation of concerns there. All right. So I'm still thinking maybe the best next step is to try and formalize some of these thoughts into a document or some structure, and then we can see if there are folks interested in working on the various pieces. Do you have a template for this design proposal? Hi, Jay, I see you've joined. Hey, Jay. My name is Anka Saylor, and I'm with IBM. I'm working with Jaya at Red Hat; she invited me to share on the OSCAL result format that we used. Oh yeah, I just joined late, so I was just eavesdropping. Are you with Red Hat as well? You know, I'm ex-Red Hat; it's funny that you asked that question. I worked there for about three years when Kubernetes first came out, but now I'm at VMware. Okay. Nice to meet you. Yeah. Quick question, which is SIG Network related and might be a collaboration across us, but I don't want to interrupt this conversation, so maybe after it's over I can ask my question. So maybe just to wrap up on what you were asking, Anka: I can share the format we used before, and there is a link somewhere in our repo for the policy. It's very informal, right?
There's no one standard format or anything, but there are some typical sections we have seen in most of these types of proposals. And happy to help: what we can do is capture some sections and, from a thought-process point of view, that will help us frame some ideas and be concrete about what we want to propose. Yeah. I will start with a draft based on everything we've discussed today: the personas, CLIs, storage, the different levels of access, and so on, plus a summary. So I'll put everything there and we can refine it, maybe offline, I guess. Okay. Good. Okay. So Jay, just looking at your question in chat about DNS policies: we have not discussed that before in this working group, but it is super interesting. I know there was some work done with CoreDNS and OPA policies; I had read something about that, and we had also discussed it in the multi-tenancy working group. So what did you have in mind? Yeah. So we started the SIG Network network policy API group about six months ago unofficially, and more officially in the last couple of months. But, you know, I don't know how much you all know about network policies, but they're very granular, very low level, very CNI-centric. And it turns out that most people want policies that are way higher level than that, right? So we have all of these user stories that people have given us over time, and I can link you all to them, for policies that are a lot higher level than what SIG Network is really capable of supporting.
And I was like, well, at some point, rather than always bikeshedding about whether or not we can do this, and whether it should be in the API, I feel like we're beating around the bush instead of solving the actual problem, which is creating a more unified network policy, lowercase, you know what I mean: a unified network policy model for Kubernetes clusters. A lot of the implementation details might be handled by SIG Network using our network policy API, but there's a higher-level layering, with DNS being a great example, where people want to very simply say: I don't want people to access this site. And a lot of that could be built with controllers and operators, right? You could envision a world where you could query CoreDNS, or take information about, for example, services, which is another one: look at Kubernetes services and create a network policy against a service. Say, I don't want these pods to access this service. That's not supported by the network policy API. But again, that's a use case for an operator, where you could look at the pods behind a service and then create those more granular rules underneath. And so I just think there's a huge impact to be made there, and I wanted to make a little sales pitch around it. I know a few people personally who would be interested in working on this, but I wouldn't want to go down this road alone if everybody else only had passive interest. So what I'm looking for is active interest in designing something like this, prototyping it, and so on. That's kind of where I'm at. So I'm just planting the seed. I know it's the end of the meeting, and I don't want to dump decisions on you all, but I'm planting the seed, and we'll see where we can go from there. No, that does sound interesting.
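To make the "operator generates granular rules from a service" idea concrete: given a Service whose selector is, say, `app: db`, an operator could emit a standard NetworkPolicy against the pods behind it. Everything below is a plain `networking.k8s.io/v1` NetworkPolicy, not a new API; all names and labels are invented for illustration:

```yaml
# Illustrative output of a hypothetical operator: restrict ingress to the
# pods behind a Service selecting app=db, excluding "frontend" pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-frontend-to-db
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: db              # the pods behind the Service
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchExpressions:
              - key: app
                operator: NotIn
                values: ["frontend"]
```

The higher-level intent ("these pods must not reach this service") would live in the operator's own resource; the NetworkPolicy above is just the low-level rule it compiles down to.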
And maybe it's something you could also share on the Slack channel, so folks can see it, and we can update the meeting notes; we'll have the recording posted. So I think it would be good to revisit this in our next meeting, since I guess we're over time right now. But yeah, those are some interesting ideas and things we can think about. And I'm not sure if you're looking at constructs like custom resources or external policy engines; there are all sorts of interesting things to dive into. Yeah, any canonical solution will work. It doesn't have to be an API. If you want it to be an OPA thing or whatever, I don't care, as long as we have some centralized place where we could build those higher-level policies. It would just be really cool. Okay. So next time we can talk more. Sounds good. Thank you for sharing. All right. So yeah, Anka, we can collaborate on the document, maybe as an action item for next time, and I'll put this topic on the agenda for our next session. Okay, excellent. Thanks everyone. Thank you. Bye. Thank you. Have a good day.