My name is Robert Ficalia. I'm here to talk about policy governance with OSCAL. The first pro tip I realized as I walked to the door: long titles don't fit well on these boards, so I've shortened it. Just a little bit about who I am. I'm the co-chair of the Kubernetes Policy Working Group. I've been working for the last several years with the CNCF TAGs and Kubernetes SIG Security. I do have a day job, and in that job I help companies through their SOC 2 and HIPAA audits, and things like FISMA, CJIS, and FedRAMP. And in that journey is where I encountered something we'll be talking about today, OSCAL. And then in my copious free time, I play a lot with machine learning and especially with graph databases, and you'll see some of those touch points in the presentation. Just a quick pitch for the Policy Working Group itself. We started in 2019. We've been focused on what we like to call big-P policy and little-p policy. Big-P is the human-readable, human-understandable policy concepts. And little-p policy is configuration policy; if you're in DevOps and working in manifests and Kubernetes, these are the types of policies you're probably referring to. We have created a CRD, and we're working on a KEP for that, and it is aligned with OSCAL; I'll get into what OSCAL is for those of you who don't know. Our CRD and KEP help standardize the output of various configuration and policy checking tools and policy enforcement engines, so you can get a consistent aggregation data structure. We've done several white papers and are working on a governance white paper, and some of the folks in the room I know are contributing to that. And there have been some open source dashboards and adapters for the various scanners and policy checkers. So more to come. We are working as best we can with NIST, trying to do more with OSCAL in Kubernetes and provide real-world examples.
And of course, many of you might be aware of the Common Expression Language (CEL) KEP that landed recently; I think it's alpha, maybe beta. Anyway, it's a new expression language for admission control in Kubernetes, so we're definitely trying to drill down on what we can recommend around using that: best practices, patterns, et cetera. So let's jump into this talk. All of you are probably familiar, if you're at this talk, with basic policy questions. These are questions you probably deal with, especially managing a team or interacting with auditors or compliance subject matter experts; the type of questions you might get at the big-P policy level about your Kubernetes cluster, or even your infrastructure as code, or infrastructure broadly. I'd like to highlight the last point, because I heard this even most recently on a call today with a bunch of agency folks and audit folks: there's a lack of trust in what they see, in the data they're presented with, in the documentation. And so this is a foundational concern for how you build a governance program around your policies and controls, which we'll get into: building that trust so that the auditor can trust you're presenting the system correctly and giving them the right information, and vice versa. At the lower level, the Kubernetes level, those configuration policies are where you see a lot of the tooling at this conference: checking admission control, checking configurations against baselines like the CIS Benchmarks, checking resource limits, perhaps even looking at cost controls, all expressed at the Kubernetes manifest level in some form, or through operators and other ways to automate that. You might be interested in what your workloads are doing in aggregate, how you're isolating those, and how you're defining network policies around them.
You may be interested in verifying the identity and contents of those workload images. So these are the little-p policy issues we'll be wrestling with. And then again, I'll call out that last one: Kubernetes itself is deploying Kubernetes and deploying infrastructure as code, and is becoming a kind of meta-provisioning or meta-admin entity. There's that bootstrapping process and then the ability to meta-manage things, as Kubernetes itself becomes the control plane for other control planes. My basic view of the model here is that we're going to stick to the declarative configuration idea that Kubernetes brought to the forefront; it wasn't the first, but it has arguably been the most successful. And I apply that in the realm of compliance by comparing what you might be able to do with GitOps, and what I have seen done with GitOps, versus the traditional GRC that is the de facto reality today. So I'll talk a little more about what that future state might look like. Obligatory slide on what OSCAL is. If you're here, you might know it a little, but OSCAL is an initiative from NIST. They've been developing a standardized schema for expressing controls: requirements at the framework level for what you must do as a system owner or implementer to conform with something like NIST SP 800-53. It's not specific to NIST 800-53 or any particular compliance framework, but they are NIST, so they started there. And it has broad acceptance in the US federal government and even elsewhere; in healthcare, HITRUST is also based on NIST 800-53. So it's a good place for them to have started. They essentially layer on different schema models: a catalog of all of the controls you might need to apply to your system; a profile, and we'll talk about what that is; and the component model, which covers the functional things in your system, the elements, the subsystems.
You have to coalesce all of this into a system security plan. And today, before things are all GitOps and automated, a system security plan might be a thousand-page Word document that expresses all of the diagrams and all of the controls you have collected from your catalog and your profile, with verbose human descriptions of how all of those controls are implemented: referencing other documents, referencing technical specs, referencing things that only the subject matter expert might be able to express. So one of the goals is to put that in a machine-readable format and then allow organizations to exchange it internally and with auditors, and reuse that system security plan to get authorizations to deploy a system. And then, quickly: auditors have to be able to assess this, so you need an assessment plan, and after you've scanned things you're producing an assessment result, essentially a report. And then you have to track your risks, so they have a Plan of Action and Milestones model. Those of you familiar with government-led ConMon activities know and love POA&Ms and managing those risk logs. Every OSCAL file has a hierarchical set of elements, with lots of linking across different elements, and they have a universally unique ID (UUID) that is essentially the identifier you can link across components with. And you can add various supplemental metadata in the back matter. So when you merge Kubernetes policy, both big-P and little-p, with OSCAL, you get essentially compliance as code. If you can express your system in a machine-readable OSCAL form, and you can merge that in with your policy as code, and all of that is traceable, now you get compliance as code, and you have a governance structure that is standardized, reusable, shareable, and auditable.
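To make that linking concrete, here is a minimal, heavily simplified Python sketch of how OSCAL-style artifacts cross-reference each other by control ID and UUID. The field names are abbreviated from the real schema, which carries many more required fields, so treat this as illustrative only.

```python
import uuid

# Hypothetical, heavily simplified OSCAL-like catalog; a real OSCAL
# catalog has metadata, groups, parts, parameters, and more.
control_uuid = str(uuid.uuid4())
catalog = {
    "catalog": {
        "uuid": str(uuid.uuid4()),
        "controls": [
            {"id": "ac-3", "uuid": control_uuid, "title": "Access Enforcement"}
        ],
    }
}

# An SSP "implemented requirement" links back to the catalog control by
# its id/UUID; this cross-linking is how OSCAL artifacts stay traceable.
ssp = {
    "system-security-plan": {
        "uuid": str(uuid.uuid4()),
        "control-implementation": {
            "implemented-requirements": [
                {
                    "uuid": str(uuid.uuid4()),
                    "control-id": "ac-3",
                    "description": "RBAC enforced via Kubernetes Roles/RoleBindings.",
                }
            ]
        },
    }
}

linked = ssp["system-security-plan"]["control-implementation"][
    "implemented-requirements"][0]["control-id"]
print(linked)  # ac-3
```

Because every element carries a UUID, a tool can follow the chain from assessment result back to implemented requirement back to catalog control without any human interpretation.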
And just to call out: the PolicyReport CRD that I mentioned earlier, we aligned that with OSCAL, and Red Hat and IBM contributed some code to extract an OSCAL assessment report from it. Some of you might be familiar with Platform One out of the DoD; they extensively use OSCAL and Kubernetes, and they've got a lot of GitLab examples of OSCAL for various CNCF and Kubernetes-related projects. A call-out to some of the folks from Defense Unicorns who are working on open source as well, both with a tool they're building and in the Kyverno open source engine. And then myself and a number of the CNCF projects are open-sourcing more OSCAL and more tooling around OSCAL, and we call that Sledgehammer: state, local, education, government, enterprise, and healthcare. So not necessarily specific to government, but highly aligned, and initially we'll be focusing on NIST 800-53. I've got a link at the end of the slides for those who are interested. If you don't know why you would need to manage your policies, then you may not have tried to run a Kubernetes cluster under a compliance regime like FedRAMP or FISMA. Over time, these policies change, the requirements change, incidents happen, vulnerabilities happen, and so you have to be proactive in how you govern, create, and curate these policies. Reacting just amplifies the problem; especially if you're making changes against a steady stream of vulnerabilities, you're never going to catch up. I see situations almost on a daily basis where folks are trying to exchange information securely, and it's sending an Excel file, or a Word document, or YAML or JSON, across boundaries, across agencies, across organizations who may or may not have different levels of clearance. This is a real problem. So having a standard format where you can exchange over APIs is very important.
I guess the last point there: I don't have to convince anybody that it's much more cost-effective to identify where your policies are going to need to change, and control that change, early in the process rather than later, when you're operating under time pressures and cost pressures and everything has to be reported out on those monthly ConMon calls. Policy at large gives you consistency, and policy as code gives you guardrails. I think most of you have seen this, and there are various reasons you want to centralize, control is the wrong word, but centralize governance of those policies, so that you have consistency across your workloads, identities, and access, and again, this notion of continuous assessment of those controls and implementations. There are a number of other things here; in the interest of time, I'll skip forward. One real-world problem I see often is that as Kubernetes clusters grow, and certainly as the apps in those clusters grow, the challenges become exponential. Policy itself reduces that and bends the curve, but policy governance, taking it to the next level, really helps you rein in that complexity and control it in a more linear fashion. As I've watched real-world complexity grow across a number of clusters, it's maybe not exponential growth; it's more like a Fibonacci growth curve. But by meta-managing your policy, you can hopefully twist that into a positive spiral. This was just a quick textual heat map I put up, the idea being that governance of policy, and policy itself, has different utility at different times. And I would point out that there are some very obvious negatives as you go through the early phases, right?
You're going to take a hit on the ability to deploy things quickly by having to reason through all the controls that apply, by having to map to different frameworks, by having to build that security in at the beginning. Obviously it will have benefits on the tail end, especially once you start talking about SecOps and incident response. But it's not always a silver bullet, and it's not always fun. I'm going to let everyone take a deep breath and channel their inner Bob Ross before I show you what I'm calling the compliance canvas; we'll dab and take from the palette as we work through this mess. At a high, high level, you want to understand your system from the user view. On the left, you've got the mission: how are we going to accelerate this? Then we have to worry about our threats, and I'm going to talk through each of the different policy ops that we observe as being essential and make sure we cover them at least at a high level. It's going to be impossible to go through all the deep dives and every concrete example, but I'll try to call out a couple. And then on the bottom, the substrate: as OSCAL tooling gets better, whether through the Policy Working Group, some of the other open source projects I mentioned, or commercial tools, the tooling is going to enable the generation and consumption of these various models, the catalogs, the component definitions; we'll talk about different variations of that. And then ultimately the system security plan, really the system model, becomes, I think, the central governing resource that unifies everything. In terms of security operations, the assessment plan and assessment results are where you're going to live on a daily basis. So I'll try to connect all of these as we move through the rest of the presentation.
I would remind everyone, and probably everyone here has this experience, that this is a journey; you're not going to go from zero to everything in one step. Folks who try often find that it implodes, and then they have to argue for trying again. So let's talk about these essential ingredients I've called out. First and foremost, you have to collect and curate a policy library, right? This maps to your control catalog, the OSCAL artifact for enumerating all the different controls in different families or groups. We'll talk a little more about the profile, but essentially it's a tailoring of the different controls you might have under different frameworks, for different systems, or for different use cases, and you can parameterize that. The current benchmarks like CIS, even some of the more cloud-specific benchmarks, are useful fodder for starting this curation process. They've typically mapped their cloud-specific or Kubernetes-specific controls to NIST 800-53; the EKS, AKS, and GKE benchmarks have all done some mapping to NIST 800-53, and more broadly they've mapped to other control catalogs, be it PCI or others. So that's a great way to bootstrap your catalog review process and start using the tooling to generate OSCAL catalogs. And in those control sets, you're obviously going to look at preventive controls and detective controls; I'll have a bit more on the threat modeling around that. And ideally, getting back to that idea of GitOps versus GRC, all of this is managed from day one in a Git repo: you're managing PRs to change the OSCAL catalogs and your profiles. There may be some challenges with monorepos, for those of you who love those. I don't recommend that folks create policy de novo; copy and paste when you can. There are various strategies for this, and it should be commonplace for Kubernetes resources or the manifests themselves.
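The catalog-plus-profile tailoring described above can be sketched as a small resolution step: the profile selects a subset of controls from the catalog and pins parameter values, and the result is the baseline you actually enforce. The structure and field names below are illustrative, not the exact OSCAL profile-resolution algorithm.

```python
# Toy catalog: control id -> title and unresolved parameters.
catalog_controls = {
    "ac-2": {"title": "Account Management", "params": {"ac-2_prm_1": None}},
    "ac-3": {"title": "Access Enforcement", "params": {}},
    "sc-7": {"title": "Boundary Protection", "params": {}},
}

# Toy profile: which controls to include, and parameter values to set.
profile = {
    "include-controls": ["ac-2", "sc-7"],
    "set-parameters": {"ac-2_prm_1": "90 days"},
}

def resolve(catalog, profile):
    """Select the profiled controls and fill in their parameter values."""
    resolved = {}
    for cid in profile["include-controls"]:
        control = dict(catalog[cid])
        control["params"] = {
            name: profile["set-parameters"].get(name, default)
            for name, default in control["params"].items()
        }
        resolved[cid] = control
    return resolved

baseline = resolve(catalog_controls, profile)
print(sorted(baseline))            # ['ac-2', 'sc-7']
print(baseline["ac-2"]["params"])  # {'ac-2_prm_1': '90 days'}
```

The same baseline can then be re-resolved with different `set-parameters` per system or per tenant, which is exactly the tailoring-with-parameters idea the profile model exists to support.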
So there are different mechanisms the policy engines support, the policy languages, and then there are different API and tooling frameworks you can use with those. I'll also show an example of how you can use generative or transformation code to create OSCAL on the fly from one of those policy engines. And of course, there are many other DSLs, or you can use Python or other tools, to generate your OSCAL, and then consume the OSCAL and generate the actual little-p policies. We'll talk about parameterization next. Those profiles, remember, are selections of specific controls, and you're parameterizing them so that the variables are not defined statically; they can be defined at deploy time or even post-deploy. In support of that, all of the major policy enforcement engines and languages support some form of templating with parameters. Gatekeeper has ConstraintTemplates, which let you define Constraints that supply specific parameter values. Kyverno has rules that support OpenAPI v3 style variables, and OCM, the Open Cluster Management project, has policy templates as well. Now that you've got all these templates and parameters, you've got to bind them together; that's what I call policy assembly. So you can map controls, and in the example I'll show you, map them to threat indicators. You can annotate your controls with threats, and you can annotate your policies with the threats they mitigate, and then use that to bind the two together. Projects like .govCAR and others have used keyword regex, search and replace, or even NLP to match up descriptions of your controls and descriptions of your system and do that mapping. And they might add real-world threat indicators from real systems and build a scoring rubric, a heat map of which attack patterns match which components and which mitigations.
So that's another option for building an automation framework to generate your policy-to-control mapping. Let me just see if this will cooperate, to show you what that could look like. For those of you familiar with Rego: it comes from the Open Policy Agent project, and you can essentially think of Rego and OPA as transforming one JSON into another JSON. So here, for example, you might have your control catalog; this is very much demo OSCAL, pseudo-OSCAL I'll call it, but it's a typical JSON shape for what a control might look like. And then what we've done is break it out into different components. Some of those components are functional components, but some we actually define as security capabilities. There's a good publication, if you've never read it: NISTIR 8011 goes into great detail on this. You're defining the capabilities, the security features you would need to implement. Encryption could be one, RBAC could be one, integrity checking could be one, signature verification. In this case, you have something like preventing unauthorized containers. You connect that to different threat IDs or TTP names, hopefully you can see that, and to mitigations or defensive techniques like execution prevention, and then you map that up with your controls. So let me go to the next set. Here's where I was talking about mapping with heuristics, or you can use different types of threat modeling, but you're going to want to define in Rego your threat scoring, that heat map, and then you can actually express how you're doing that threat mapping in code. Now, this is fairly hard-coded for the demo, but you can easily pull this from external data or call an API. You're looking for a specific set of TTPs and matching them up to the security capabilities and controls, and then you can classify that further: is this a protective control?
Is this a detective control? Or is this a responsive control? And then you can map that to your particular framework. And after you have that, where is the, sorry. Yeah, you can add some validation rules. In OSCAL, this is fairly new; I think in the last four to six months validation rules have become a new part of the schema. And then, if I run this, let's see. Yeah, so here, on the output side, it's spitting out what would be components that you would insert into your OSCAL, typically as a component model, and ultimately it will generate a full SSP in the next step. But here you're describing how you enumerated the protect and respond rules that were triggered and what the threat coverage might be, and you're feeding that into the SSP. Here you're mapping those control implementations based on the scores generated in the previous step and making sure it has the right amount of coverage. And this should produce, yeah: now we've combined the security capabilities that address those threats, at different levels of scoring, with the functional components that needed those controls, and we've created our control implementation in OSCAL in this bottom output. It maps all the way back up to the control ID: what were my big-P policy requirements, what rules did I use to determine that this threat and this capability match that control, and I get an actual OSCAL-readable definition of that control implementation. At the very end, you put all of that together, and you can actually use it to generate your assessment plan. This, as another OSCAL artifact, gets you to the audit phase, where now you're defining how I am mitigating those threats, how those controls provide that implementation, and how, as a tester, I can write validation rules to support that.
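Outside of Rego, the same threat-to-control join the demo performs can be sketched in a few lines of Python: capabilities are annotated with the TTPs they mitigate and the controls they satisfy, and the mapping falls out of a join on the TTP ID. The capability names, TTP ID, and control IDs below are illustrative placeholders, not authoritative ATT&CK or 800-53 data.

```python
# Security capabilities annotated with the TTPs they mitigate, the
# controls they satisfy, and their control class. All values are
# illustrative placeholders.
capabilities = {
    "prevent-unauthorized-containers": {
        "mitigates": ["T1610"],        # hypothetical TTP id
        "controls": ["cm-7", "ac-6"],
        "class": "protective",
    },
    "audit-admission-decisions": {
        "mitigates": ["T1610"],
        "controls": ["au-2"],
        "class": "detective",
    },
}

def coverage_for(ttp):
    """Join capabilities on a TTP id; group covering controls by class."""
    out = {}
    for cap in capabilities.values():
        if ttp in cap["mitigates"]:
            out.setdefault(cap["class"], []).extend(cap["controls"])
    return out

print(coverage_for("T1610"))
# {'protective': ['cm-7', 'ac-6'], 'detective': ['au-2']}
```

A real pipeline would emit this as OSCAL component entries and feed a scoring rubric, but the join on threat ID is the core move in both the Rego demo and this sketch.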
And just as an aside, that's the one cool thing about Rego that I like: it can generate JSON from JSON, it can generate Rego from Rego. So that's something you might be interested in for other use cases. Let me go back to the palette here. We talked about those profiles, baselines, and parameters. The thing to call out is that all of the languages and engines support some sort of parameterization. But again, managing the variables gets very complex very quickly as well. So while it helps bend that Fibonacci curve, it can itself quickly get out of control. I wish there were a silver bullet answer for that; it's something we still wrestle with, because I've seen templates that are nothing but variables, and how readable is that, how maintainable is that? You obviously want to do policy validation like any other coding exercise. You want to do local tests; Kyverno, OPA, and Gatekeeper support some kind of dry run, and I'm sure Kubewarden and others do as well. Conftest too; all of these have some sort of local test capability. Unit tests, usually in a pipeline, so that before you update and distribute policy, you know it's being tested. All of them support some sort of mocking and replay testing. I'm not sure all of them have a convenient coverage output, but I think at least OPA has a convenient coverage metric report. And now that you've curated and mapped your policy library to specific controls, you've got to distribute it. Again, OPA, Gatekeeper, Kyverno, and others support either an OCI bundle or, not proprietary, but project-specific bundling. You could obviously manage all of these things in Git; it goes back to that branching strategy and how you're going to connect it up to your policy enforcement point. I'd say OCI bundles are probably the thing I'd recommend.
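The local-test idea generalizes across engines: treat the policy as a pure function of a manifest and assert on its output before distributing it. Here is a toy Python stand-in for such a unit test, with an illustrative rule that flags privileged containers; a real setup would exercise the actual engine (`opa test`, `kyverno test`, Conftest) against the same fixtures.

```python
def violations(pod):
    """Return violation messages for a (simplified) Pod manifest."""
    found = []
    for c in pod.get("spec", {}).get("containers", []):
        if c.get("securityContext", {}).get("privileged"):
            found.append(f"container {c['name']} is privileged")
    return found

# Mock manifests standing in for replay fixtures in a CI pipeline.
good = {"spec": {"containers": [{"name": "app"}]}}
bad = {"spec": {"containers": [
    {"name": "app", "securityContext": {"privileged": True}}]}}

assert violations(good) == []
assert violations(bad) == ["container app is privileged"]
print("policy tests passed")
```

Running checks like this in the pipeline, before the bundle is signed and published, is what makes the distribution step further on trustworthy.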
OCM has their version of this, which is Placement, and then they have a PolicyGenerator component. And there are cloud-specific ways you can do this: Google Config Sync; or using S3 as the bucket, pulling everything from Git into the bucket, and then OPA and Kyverno can pull it from there; and there's an Azure Policy extension as well. I would note that as you distribute policy, be sure to digitally, cryptographically sign it, and make sure verification happens; the engines support it, Kyverno and, I should say OPA at least, Gatekeeper support it. All of the engines support some form of enrichment where, instead of statically saying, as I showed in my demo, I'm looking for this TTP ID, this resource attribute, this namespace label, you can pull external data and enrich the decision-making process. There are various ways to do this, similar to the policy bundling itself: you can layer it in, you can have separate bundles, you can put it in secrets. But I think the interesting thing in a real-world situation is that if you have a multi-tenant, multi-cluster environment, you can now really tailor those policies to different deploy-time and even operational-time policy choices. This becomes a big thing if you've had to wrestle with, say, data isolation for GDPR across different country boundaries; this is a very helpful pattern. On policy assessment reporting, there are lots of tools out there. We in the Policy Working Group obviously like the ones that support PolicyReports, and we've tried to build adapters for those that don't. We're trying to get it into that KEP, so hopefully it will become a standard Kubernetes API and everything will support it. There are Prometheus metrics and other ways to manage your assessment execution and aggregate data about what violations you have.
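The enrichment pattern can be sketched as a decision function that consults external data rather than hard-coded values. The tenant names and residency rules below are hypothetical; in practice the data would arrive via an API call, a bundle layer, or a mounted secret, as described above.

```python
# Hypothetical per-tenant residency rules, e.g. for GDPR data isolation.
# In a real engine this would be fetched at evaluation time, not inlined.
EXTERNAL_DATA = {
    "tenant-a": {"allowed_regions": ["eu-west-1"]},
    "tenant-b": {"allowed_regions": ["us-east-1", "eu-west-1"]},
}

def admit(workload, data=EXTERNAL_DATA):
    """Admit a workload only if its region is allowed for its tenant."""
    rules = data.get(workload["tenant"], {})
    return workload["region"] in rules.get("allowed_regions", [])

print(admit({"tenant": "tenant-a", "region": "eu-west-1"}))  # True
print(admit({"tenant": "tenant-a", "region": "us-east-1"}))  # False
```

Because the policy logic and the data are separate, the same policy bundle can be distributed to every cluster while each tenant or region supplies its own data at deploy or run time.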
As I mentioned, there's an open source policy dashboard, but there are a number of other open source and commercial dashboards that can roll up the policy violations. And as OSCAL support becomes more prevalent in things like eMASS and other GRCs, we expect to see more dashboard display of your assessment results. Gosh, time-wise. This next part is really more of what I call policy-adjacent; remediation is not policy per se. Obviously, if you have everything controlled under these guardrails and constraints, it makes remediation a lot more manageable and deterministic. But you can take that PolicyReport output, or the Gatekeeper audit output, and generate PRs that create new sandbox rules or network policies, or add different labels in response to different attack indicators through admission control mutation. And then you're obviously going to have to track your POA&Ms and risks over time. There are some cross-cutting concerns that will touch multiple parts of that policy management process. Those heuristics I mentioned really do cover the whole MITRE ATT&CK framework, so you're looking at a slice through many different pillars. It's not always neat and compartmentalized, where you can build a policy snippet for your volumes and a policy snippet for your image management; a lot of these have cross-cutting concerns, and this is where having a good model to drive that policy generation helps. Static analysis: very infrequently, but I'm starting to see some interest in formal methods for that. My personal area is graphs, using ML to do graph embeddings and graph mappings. And even large language models have recently been shown to be able to model out different attack paths and defense sequences, so that's an interesting area of research. So what's next? We need more real-world examples.
I mentioned the open source project, Sledgehammer, that a number of CNCF projects are contributing to. We'll be open-sourcing OSCAL component definitions, and the Rego policies that do the generation of the actual low-level policies. And yeah, we need more tools; thanks again to folks like Defense Unicorns and others who are building open source OSCAL tooling. We really want to see some real-world audits, and we're going to try to surface as much as we can within the constraints of sensitive host-specific data: all of the assessment plan and assessment result artifacts in Sledgehammer, and more meta-information about the processes that got us through that audit. And I think the audit community is, right now, even less aware of OSCAL. OSCAL is new for everybody, but the audit community is not well tuned and ready to receive OSCAL, and certainly not in a Kubernetes or cloud-native environment. So we're hoping this will be a resource for auditors, and if there are any in the audience, please, I'd love to talk to you: getting them trained up and familiar with Kubernetes and OSCAL and how to consume these artifacts. And with that, I will open it up to questions. Yes. I think there are going to be a lot of competing approaches. Everybody's looking at the same problem and feeling it out from different perspectives, so there will be a lot of competing ideas. OSCAL is very new, it's very open, the developers are very flexible. In a very specific segment, i.e. government, it has a very strong endorsement, in that NIST is the standards body most agencies use. And I don't think it's any surprise to anyone that FedRAMP is a very popular framework for cloud-based services that governments procure, and FedRAMP has officially adopted OSCAL as its emerging, and very soon required, format.
So I think the vendors, the open source projects, the enterprises, the homegrown efforts: if you're going to play in that area, you're going to have to embrace OSCAL. But the good news is they're very flexible, so lessons learned from other projects and approaches should be incorporated, and I think they would welcome that. From the CNCF Policy Working Group's side, we're agnostic to the tooling and the specific frameworks; we just look at OSCAL as a very nice fit to the problems that come up in our work. Yes? Yeah, so Jemar, I'd say, if I can go back, and if it will pull up here and if it will show. Yeah, okay, here we go. We're going to re-home that; that's in my current GitHub. When we file the sandbox ticket, we'll be putting it in a .org or .io, but, oh gosh, is that going to be readable? Probably not. But essentially we want to deliver all of the OSCAL and any tooling that we create, or any configuration for other open source tools, so that one could, in theory, generate all the OSCAL, or use the templates we create, apply it to a fairly vanilla Kubernetes cluster, and then use that, either in an agency or as a host, and go through an audit. And like I said, we plan to put as much as we can from a real-world third party, we call them 3PAOs, third-party assessment organizations, out onto the open repo, de-identified from any actual IP addresses or sensitive vulnerability scan output, et cetera. So that, in theory, agencies especially can use it; we talk to a lot of agencies who are just getting caught up on DevOps and getting their heads wrapped around how to do Kubernetes at all. The lift to go from nothing, or very little, today to a full, reliable, production-ready Kubernetes operation is pretty huge, both in terms of staff and money, but really it's staff.
So they're looking for a bit more of an easy button: give us some best practices that we can take and play with in our lab, or that another ATO'd cloud provider can deploy. Again, similar to Platform One. Platform One is pretty restricted in who can use it, and it has some significant costs, which may price out some of the smaller civilian agencies. And just as a coda to that, I wouldn't see this as being competitive in any way. I think it's more a way for someone to dip their toes in, really understand what they're getting into, and then make a very informed decision, maybe with a couple of pilot projects, get success, and then say, okay, we want to go to Platform One, we want to go to a commercial software factory, a DevSecOps platform engineering substrate. Now we know what we're getting into, and more important, our AOs, our authorization folks, know what to expect and how to consume that OSCAL; we know our GRC can support it. So they do the discovery with Sledgehammer, get really comfortable with the operating model, and then can scale up into something more enterprise-grade. Anything else? I'd say, as we're planning to generate the OSCAL and the corresponding policies, in a similar way to the Kyverno effort you have, and we'd love to collaborate on that, because again, we're not picking OPA versus Kyverno or any particular tooling, we'd want to take it one step further from just the policy generation to the actual OSCAL generation. So, maybe to answer your question: deploy the cluster, run the OSCAL generation, which then generates the little-p policy, run your Conftest, for lack of a better word, your assessment, and then generate your SAR, and then figure out what you have to fix. Could that be done in GitHub Actions or some other CI/CD, or Argo? Sure, I don't see why not.
The idea is that, because it's generated code as much as possible, you wouldn't have to do point-in-time updates and that branch-and-merge PR churn. But TBD, it's early days. Anything else? Yes, please. Yeah, and that's the Rosetta stone we were using: the threat model, right? If you can take the specifics of a particular framework and map them to your threats, TTPs, or you can use other frameworks, annotate your requirements with specific threats, and on the mitigation side annotate your policies with specific mitigation techniques, defense techniques, then all you have to do is maintain that mapping of these threats are mitigated by these defense techniques. And of course, MITRE is doing that for you if you're using their framework. So that becomes the glue. If you want to drop in PCI, if I understood your question, or you want to drop in HIPAA, as long as you've annotated your requirements, and the specific controls for those requirements, with those threats, then in theory you can just assemble the OSCAL and the policies from that mapping of threat to defense IDs. There's no formal, I mean, it's something we could raise with the devs for OSCAL to introduce, but there's no formal threat-to-control mapping today; we just added extra properties to the OSCAL. So we could, but we should maybe submit a PR to get that formalized. We want to get more comfortable, through the Sledgehammer real-world testing, before we stand up and say, this is the exact way to do it, let's codify it in OSCAL. I know you've had your hand up a couple of times. Yeah, the first epic, if you will, is that we're going to go through threat modeling exercises, something I've personally been involved with through the TAG and SIG Security.
So we'll do project-specific threat modeling, and those artifacts will be released; then from that we'll do the mapping, and those OSCAL generation tools and artifacts will be released; and everybody's welcome to participate in those processes to create, review, and curate them. But I think it starts to get interesting for the operator community, and maybe other projects looking to emulate the process, once we've got the full OSCAL component model and the policy libraries and can demonstrate how those actually run. So I hope that answered your question. Basically we need a little bit of dev time from each team, and to go through some reviews of what the OSCAL looks like for their project. Okay, and I'm told to stop, so thank you.