Hey everyone, it's Jim. Hi Jim, how are you? Good, how are you Robert? Hey Jim. Hello. Hello everybody. Can someone check on Slack if Erica is able to join? All right, I messaged Erica on Slack, but maybe we can get started in the meanwhile. Let me pull up the agenda items. A few things I wanted to cover: Jaya, there were a few more comments in the document, so we can quickly go through those. I also have another PR submitted to add the selectors. You were looking for that feature, right? So we can take a quick look at that, and then see if there's anything else to discuss. That would be great. Basically, we have been socializing this internally at Red Hat, and there are some other use cases related to observability where it made sense to use the same CR. Okay, so let's go through that. I'll pull up the proposal. I think the most recent comment you added was on severities, or categories, right? Yeah, so right now category is in the free-form section, and the request I got from one of the other teams at Red Hat is to make it a well-defined schema element, because anything that's in the data field is free-form, so we really cannot count on it. Now, I know you had a comment that we'll then have to come up with a list of what the categories are. Right. A couple of things we thought about: some could be security related, others could be health related, like health checks. Those were the two we came up with, but we could definitely add a few others based on our experience. The idea here is that if the CR is generated, whoever is processing that information can filter out the results they care about.
I think one question to think about is whether it's still up to the engine to decide what categories to define. For example, if this is a CIS benchmark report, would those categories potentially be different? Or are we defining some high-level categories and we want all reports to fall into those? I think it's the latter: what we were thinking is that we would define some high-level categories. Okay. And I agree with you that we don't want to over-architect this; if we define next-level sub-categories and all that, it becomes too much. So the idea is to keep it at a level where we know that these kinds of reports should be routed to, say, a security operations center, versus these kinds of reports should be routed to an incident management tool. Okay. That's the level we are thinking of. Okay. And would there be a way to tie those into something we had even discussed at one point, scoring, something like the CVSS system? I don't know, I haven't researched this or looked at it deeply enough, but if there is some precedent in terms of what some standard categories are, then it would be easier for us to say, okay, we're going to adopt that model. If we have to come up with categories ourselves, the challenge always becomes: do we make it extensible? Is this a string, or an enum field with a fixed list? And would all reports fall into one or more of these categories? So maybe you could propose a list that we can discuss and review. Not sure if others, Robert maybe, have any thoughts on how to manage the vocabulary for the data. Yes, so what Jaya is proposing, and I think we've discussed this before, is making category a top-level, fixed attribute within the policy results.
And having a set of, I guess, well-defined categories which reports, or each result element, would fall into, which makes it easier if you're searching or trying to scan the results externally. If you're adding top-level metadata, it might also be useful to have some sort of tool or vendor qualifier, or some notion of namespacing. I work with a number of tools that do broad categorization with metadata, and in on-site installations it ends up being that most of the work is customization to reflect their own internal standards. So if we're adding it as a top-level key to the metadata, having either namespacing or an additional qualifier of vendor or tool of origin helps disambiguate a little bit. Yeah, so in the metadata at the top level of the report, we could put the engine in there, and certainly different engines can put other metadata as required. I think the additional question is: within the results array, should each element of the report have a well-defined category, and what would that list look like? And the feedback from the other team at Red Hat was with regards to the desire for a top-level category. So basically we're talking about moving category out of data to the top level, and I'm just trying to understand the differential value add. Yeah, so what we are saying is that right now the proposal is to put category within data, but whatever is in data is all free-form, so we cannot count on it being there.
If we have category as a top-level item, then the consumer of this data could filter and route it to appropriate destinations. The whole point of generating this data is that we want it to be actionable: the actions could be taken through incident management tools or a security operations center, depending on what is emitting this CR. So having that category helps us route or filter out the things we need to process. The other constraint is the vocabulary constraint: are we just worried about being overly prescriptive, when a finding could be an operational concern, a cost concern, or a security concern? Category then becomes almost free-form from the value perspective. And we're already going to be indexing other fields in this, so it's a little bit fuzzy to me. I understand what you're saying, but I feel like we're going to have to be indexing these fields anyway, and there's also the concern around what constrained vocabulary actually makes sense. Yeah, that latter part is what I'm concerned about too. I think there are two ways to approach this. One is we could still make category a top-level field within results, but leave it as text, where each engine decides what categories it reports with. That's slightly better than leaving it in data, since data is optional and not structured. The other option is we make categories a strict set, which I think would be a concern, because maybe not everything falls into that set, unless, again, there's some standard we can point to as a precedent. So what about category as a top-level field, unconstrained text, and also optional?
Right, so that could be the way to do it, and maybe it satisfies both perspectives. For engines that want to have some categories and count on those, we know, for example, that if it's coming from ACM there will always be a category, and ACM can publish the list of categories that it assigns to reports. Would something like that potentially work, Jaya? I think what you're saying is that we'll make category a top-level field, but not be prescriptive about what is put there. Is that what you're proposing? Right, so making it an attribute of resource, sorry, of results: each result will have a category attribute, just like we have status and scored. It will just be text, and at least in this first release, each engine will be responsible for managing the list of categories that it publishes results with. Um, that's a good middle ground, and then based on our experience of applying this, we can come back and be more prescriptive. Yeah. Okay. Yeah, if we see some standard, de facto approaches emerge, we can adopt those and standardize on them. But I think that at least gives you an attribute to rely on, versus not having anything defined in the schema. Yep. Sounds great. Okay, I'll make a note of that. All right, and then I think the other comment was on the policy attribute: whether that could be a UUID. Right now we have it as a string, and in the examples we're showing names. It should be possible to convert a UUID to a string format and put that in there, so once again it seems like this would depend on which engine is reporting. If you choose to use UUIDs, you can. Is there something that would prevent using a UUID in that policy field right now?
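To make the category decision above concrete, here is a rough sketch of a result entry. The field names follow the draft proposal as discussed in this meeting, but the exact schema, group, and values are illustrative assumptions, not the final spec:

```yaml
# Hypothetical sketch of a policy report result entry.
# 'category' is a top-level, optional, free-form string; in this first
# release each engine defines and publishes its own category list.
results:
  - policy: require-pod-resources   # illustrative policy name
    rule: check-limits
    category: Health                # engine-defined; could be "Security", etc.
    status: Fail
    scored: true
    message: "pod does not specify resource limits"
```

Because the field is unconstrained text, a consumer routing reports (say, to a security operations center versus an incident management tool) would filter on whatever values the emitting engine documents.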
Um, let me take that suggestion back and come back to you on that. Again, I'm looking at it from the perspective of whoever is consuming this. Right. Um, so I think the UUID maybe gives us something more prescriptive than just a string, which could be anything. Right, so when we were looking at other mappings, for example for Kyverno, we were thinking of putting policy name slash rule name in this field. So in your example, if you want a UUID, and even if you then want to append a name to make it more readable, you can; or if it's just a UUID, that's fine too. Okay, but maybe some way to index back to the rule or policy element that created this result. Yeah, could we put some examples here, like the example that you mentioned? I think that will help us, because then we can say you already have it here. Yeah, so here it's just pod-security colon check, so there are two names joined with the colon. Let me see if there's something else up here. Yeah, here we have something similar, like API server, and this was the CIS benchmarks, so it's the category and then the actual rule or check. But then in metadata we have the index, so it's certainly possible you could even do colon 1.2.1 in this first example, if you want to put the index in the name; it's just free-form text over here. Okay. Yeah, let's leave it as a string, and let me take that back and see whether it's really needed. Okay. All right, that sounds good. And I think the other comment I had was about execution count, and it looks like execution count is already dropped. Right. Yes, I think I briefly mentioned that in the last meeting. When I moved things over to the repo and to the CR, because the timestamp and that information is already in the resource metadata,
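As an aside, the policy-field naming variants mentioned in this exchange might look something like the following. These values are illustrative examples in the spirit of the samples being discussed, not copies of the actual sample reports:

```yaml
# Illustrative examples of the free-form 'policy' string.
results:
  # Kyverno-style: policy name and rule name joined with a colon
  - policy: "pod-security:check-privileged"
    status: Fail
  # CIS-benchmark-style: category plus check, with the index kept in data
  - policy: "api-server:anonymous-auth"
    status: Pass
    data:
      index: "1.2.1"   # CIS benchmark check number
  # A UUID is also valid, since the field is just a string
  - policy: "9b2c6b6e-0d1f-4f5e-9a3c-1234567890ab"
    status: Fail
```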
I removed both the creation timestamp and the execution count. If we need something like that, we can bring it back, but for now there didn't seem to be a specific need for it. If anyone has any other thoughts or comments, we can decide. Yeah, so one thing I want to do is, at some point, start using the git repo for comments and PRs. Maybe after this next set of changes we can lock the document and move things over to the git repo, because it'll just be easier to track and comment on changes there. I just submitted a PR yesterday for the resource scope and selectors. I don't know if anybody knows of a good way to generate documentation out of the OpenAPI schema or the Go tags; it looks like Kubebuilder had some facilities, but they no longer work. If there's a good way to generate docs, that's something we need, so we can review each field and the data easily. And I think we're waiting on additional samples, like we've talked about, so if anyone has samples to submit, it would be good to create PRs for those as well, and we can test them quickly against the CR. Currently you can install the CRD, create a YAML that matches it, and see what that looks like, so that's probably the easiest way to try things out. Um, is the next step to take this... I was not in that SIG Auth meeting; I assume you and Erica went and presented this, is that correct? We did briefly talk about it and at least socialized the idea. I think what we want to do first, though, is show more samples and then also propose how we might write adapters, or show that different engines are using this. So one of these, on Kyverno, we will start on.
We currently have a CR for policy violations in Kyverno, which we're going to replace with this policy report, and produce these reports. So we'll work on that, and then another potential adapter we had targeted was using something like kube-bench for CIS reports. Once we have some examples of that, we can go back to SIG Auth and show that there's actual usage and ways of adapting different reports, and what that looks like. We can also work with the other projects we had listed in the document as potential users of this. I did reach out to the Styra folks, and I'll see if I can engage with them to get some feedback. Yeah, I would like to see how we can look at Gatekeeper, maybe. I'm one of the maintainers on Gatekeeper; I did comment in the doc as well. Thanks for driving this, by the way. Yeah, so definitely, that would be great. I did have some questions, I guess. I know you replied to my comment regarding writing the violations as CRs. In last week's Gatekeeper community call we actually went through all the options in terms of usability versus scalability versus security, all of these types of concerns, with our current implementation of writing violations to a CR, versus Kubernetes events, versus, you know, someone even suggested, what if you just expose an endpoint for people to query? So I created a table that looks at all the options, and I want to make sure we sync, either here or on another call, whatever makes sense, so that we can look at it together. This proposal is very much still writing to a custom resource, right? And based on the usage that we're seeing and the comments from people, at least Gatekeeper's current implementation is not really scalable for large clusters.
So I want to make sure we touch on that, and make sure whatever is in this proposal addresses that concern as well. Right. In the evolution of this, from some of the early discussions, we've been through a lot of those same options in prior meetings and discussed what would work best: events, of course, and I think there's some commentary in the document on the pros and cons, and creating custom resources. What we had settled towards, or evolved towards, was trying to make sure that the reporting reflects current state for admins. So we were not focused so much on history or historical state, because the thinking was that could be done outside the cluster, and is best done outside the cluster, although again, we're not mandating one way or another. And then the evolution of this: initially, if you recall, this started out with just reporting violations, which is what, for example, Kyverno does today. It reports violations at the policy rule level: for each rule and object, there's a violation CR. That has some pros and cons too. So what we moved towards here is allowing the flexibility of aggregating, and also the flexibility for each engine to decide whether it reports just violations, violations plus a success summary, or some other level of detail, and at what scope and granularity. So there's a lot of flexibility, but it's more or less left up to each policy engine to decide what works best. And if you're trying to build a common structure, that seems like it would be the most agreeable option overall. But yeah, at the same time,
of course, this could also be used in a manner where there is still one violation created per policy rule and per resource, which, like you mentioned, could lead to scaling problems. So there would have to be some intelligence in the engine to do grouping, let's say at the namespace level, or, if it's something related to the cluster, at the node level, or at the control plane and nodes, things like that. So what we wanted to make sure is that there's enough flexibility to slice and dice this in many different ways, and then leave it up to the engine to determine the best way to report results, focused more or less on current results, not so much on history. But I would love to see the comparison table you mentioned, and perhaps, if you want, we can put that in here or link it from this document, and then we can discuss. Yeah, we can definitely do that. I did take a look at the agenda and didn't see a lot of other topics, so if you don't mind, I could share the table right now, if time permits. Okay, cool. Let me just... oh, I cannot start sharing; let me try again. One second. All right, can you see my screen? Yes, we can. Okay, cool. Again, this is basically learning from running Gatekeeper and getting user feedback, so it may not apply to other projects, but I think some of these use cases and concerns might be applicable as well. We have two approaches in terms of reporting violations. As you can see, currently we write the violations to the constraint, which in Gatekeeper you can think of as the policy.
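For readers unfamiliar with Gatekeeper, the pattern being described, violations accumulating in the constraint's status, looks roughly like this. The constraint kind and values are hypothetical; the status shape is a simplified sketch of what Gatekeeper's audit writes:

```yaml
# Simplified sketch of a Gatekeeper constraint with violations in status.
# With thousands of violating pods, this single object grows toward the
# etcd per-object size limit discussed below.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredResources        # hypothetical constraint kind
metadata:
  name: require-pod-limits
status:
  totalViolations: 8000
  violations:                     # one entry per violating resource
    - kind: Pod
      namespace: team-a
      name: web-7d4b9c6f-abcde
      message: "container has no resource limits"
    # ...and thousands more entries like this one
```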
So let's focus on a large cluster scenario. Say you have a constraint that checks whether you have resource limits on your pods, and your cluster has thousands and thousands of pods, all violating. The status of that resource could grow quite fast. What that means is, if you use this approach in a large cluster, you could run into the etcd one-megabyte object limit. Not only that, this could also have a huge impact on the API server, because you're constantly updating the policy objects, depending on the number of policies the cluster has violations for. And I think this is probably the closest to what the policy reporting proposal looks like, though I understand it does have the flexibility of allowing the policy engine to decide how to shard the CR-update process, whether that's by, like you said, namespace or GVK, group/version/kind; you could slice it up in different ways to reduce that impact. But even with that, the etcd object limit can still be hit. So for the Gatekeeper project we came up with another approach, which writes all the violations, for both admission time and audit, to the Gatekeeper logs. That approach does not run into these limitations, because we're writing to the log; however, it requires the consuming solution to parse the log. And then there are, of course, Kubernetes events. We really like this approach, and it's something we're working on now. What we like about it is the fact that, by default, Kubernetes will remove these events: the default TTL is one hour.
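A violation reported as a native Kubernetes Event might look like the sketch below. The field layout follows the core/v1 Event API; the names, namespace, and reason are illustrative, not taken from Gatekeeper's actual implementation:

```yaml
# Sketch of reporting a violation as a native Kubernetes Event.
# The API server's default event TTL (one hour) cleans these up
# automatically; no engine-side garbage collection is needed.
apiVersion: v1
kind: Event
metadata:
  name: require-pod-limits.16a8b2c3d4e5   # illustrative
  namespace: team-a
type: Warning
reason: ConstraintViolation
message: "pod does not specify resource limits"
involvedObject:                # the violating resource
  apiVersion: v1
  kind: Pod
  namespace: team-a
  name: web-7d4b9c6f-abcde
source:
  component: gatekeeper-audit
```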
And the impact on the API server is proportional to the number of violating resources. And the fact that cluster-scoped violating resources cannot be associated with a namespace can be mitigated by associating the event object with a resource in, say, the gatekeeper-system namespace. So we think this is actually quite nice, because we can leverage a native Kubernetes object, and the TTL ensures that these objects get cleaned up. And it is somewhat similar to the policy report proposal because, like you said earlier, it's only looking at the current state of things rather than a historical record of every violation ever. Some of the other options that were considered: a new violation resource, so another CR, and then, of course, another endpoint that allows users to query, but that would require mTLS and just a lot more work to make it production ready. I just wanted to briefly go through this; did you have any questions or thoughts? Thank you for sharing this. This is a really good summary and touches on a lot of things we have discussed, either in document comments or in prior meetings and sessions. I would love to hear from others as well, but I think the way we were approaching this, it's not so much whether you would want events or a resource; it's that most engines will end up doing both, because events are necessary, and they seem to be solving different goals. One is to give the cluster admin some state, which they can easily collect through tools like kubectl to see what's going on with the policies or policy engines they have configured. Whereas if you're looking at something like a pod or deployment, obviously you want to see events on that to know whether there are violations.
So we were not thinking of them as either/or, or as solving the same goal. Most engines used in production systems would report Kubernetes events, which, like you mentioned, are a great mechanism for what events are meant for, but they didn't seem to solve the goals we had in mind for the policy report. So, for the purpose of consolidating and ensuring that all the projects have the same spec for APIs, do you envision that something like Gatekeeper would then create these new CRs based on the proposal? That's one option, where some policy engines or tools like Gatekeeper natively create the reports, and that's what we were thinking of doing for Kyverno. For other tools, the other way of doing this would be to write adapters: for example, for Falco or kube-bench, there could be adapters which produce the reports, or maybe over time those tools also gain native features to generate them. I think Liz also added some comments; for the project she was working on, there was interest in standardizing on the way of reporting, so maybe for that project policy reports would become native over time too. Okay, yeah, that's helpful. I still think the scalability concerns, and the impact on etcd, are something I would go back and think about how to mitigate. Yeah. One thing I do want to mention, and sorry Robert, I think you were also trying to say something: the ACM team also suggested, and we just updated the document as well as submitted a PR for this, adding selectors. So in your example, with a few thousand pods, if there's a way to group these pods based on labels...
So, right, one option would be, in a result element or even at the scope level, instead of naming an object or referring to it by GVK or something similar, you could use a label selector to group several objects, maybe hundreds of pods, if that makes sense. So again, I'd welcome other ideas for how we can add that level of flexibility, to be more concise in the reporting while pointing to a larger set of objects. The other problem we run into with Kyverno is that Kyverno also does background scans: it periodically scans the entire cluster, so it's not just admission; it picks up even pods that were failing or not being scheduled. For those sorts of things, of course, the engine would have to manage the state and make sure it's not reporting on objects which are no longer active. But I totally understand the concern, and I think it would be good to see if there are other ways to make the report scalable by pointing to a set of objects. Label selectors were one option that we recently added to help address that. Can you elaborate a bit more on this etcd limitation you refer to, this one-megabyte limit? Yeah, so that's the size limit for etcd objects; basically a constraint of etcd. When we think about any of the Kubernetes objects or CRs, as the size of the object grows, you're eventually going to hit that one-megabyte size limit. I can find a link to the docs on that, if it helps. It's not a limit on the number of objects; it's the amount of data encapsulated in each one. Right, so there's a size limit per object, but I believe there's also a limit for etcd overall, which would translate to a number of objects. Yeah. Right. There are limits for both.
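Returning to the label-selector option from earlier in this exchange, a report scoped by selector rather than by named objects might look like this. The field name and API group are taken from the draft PR under discussion and may well change; treat everything here as a hypothetical sketch:

```yaml
# Sketch of scoping a report with a label selector instead of naming
# individual objects. 'scopeSelector' follows the standard Kubernetes
# LabelSelector shape; the group/version is a placeholder.
apiVersion: policy.example.com/v1alpha1
kind: PolicyReport
metadata:
  name: policyreport-team-a
  namespace: team-a
scopeSelector:
  matchLabels:
    app: web            # one report can cover hundreds of matching pods
results:
  - policy: require-pod-resources
    status: Fail
    message: "pods matching the selector are missing resource limits"
```

The design intent is that a single report entry stays small (helping with the etcd per-object limit) while still referring to a large set of objects.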
I think it's 1.5 megabytes, or something on that one-megabyte order, per object, and then there are some other limits on the total amount of data, etc. Interesting. Yeah, I just linked the docs. Cool, I'll take a look. I would have thought that other projects would have hit this, with tens of thousands if not hundreds of thousands of objects. Yeah, I think that's why the recommendation is to keep objects small. Right. If you search for ConfigMaps, I think there was an issue in the Kubernetes repo where someone asked how big a ConfigMap object can be, and it was also linked to this etcd size limit. Cool, thank you. We've already hit that limit when we did large-cluster load tests, so we're already seeing the errors. Okay. So I think, on scalability, it would be good, as we look at different examples, to really test out the flexibility of the reporting, and see if we can find the right balance based on namespaces, workloads, or other levels of grouping, for some of these common types of policies and reporting that we want to do. If there are any other ideas, let's definitely discuss further on Slack, or through git, or even just comments on the document. All right. Christoph, I did see you added an item to the agenda, so I want to make sure we have some time to cover that. Yes, hello everyone. First, for those of you who may not know me, I want to introduce myself. My name is Christoph Blecker, and I am a member of the steering committee. I wanted to come and introduce myself and quickly speak about a newer initiative that the Kubernetes steering committee is undertaking: annual reports from our various community groups. The background is that we are noticing, across the community, kind of an evolution of how information moves through the community.
When we look back over the last five or six years, in the early days of Kubernetes we had a Thursday community call that most people in the community would join, and SIGs and working groups would give rotating reports in that meeting. We used to have those every week; we've since moved to monthly, as far as presentations and that kind of thing. And we're now up to 50-plus different community groups when you take into account SIGs, working groups, user groups, etc. So trying to get reports in that fashion just isn't tenable. So we are trying to do this in a more decentralized manner, where the various community groups can submit reports; these are then visible, and we can increase cross-communication between the various SIGs and working groups that we have. Let me share my screen briefly. An email went out yesterday to kubernetes-dev that explained some of this. We are starting with working groups this year, in 2020. The reason we're focusing on working groups is that they, in particular, require lots of cross-communication, because by their nature working groups have many stakeholders. So we're trialing this with the working groups in 2020, to ensure that we have a clear picture and clear communication on what all of our working groups are working on, and really asking the question: are our various community groups and working groups healthy? Are they following best practices, as far as holding meetings on a regular basis, recording them, putting them on YouTube? Do they have an open mailing list? Just going down that checklist. I've also included links to these in the agenda, so folks can pull them up and read them on their own. So, scrolling down to the questions:
These are the kinds of information we're looking to get from all of our various working groups, things like: are your OWNERS files up to date? Are your subprojects mapped? Are your listed meeting times current? That's something I actually found with the policy working group: your listed meeting times in the community repo are not up to date. It can be hard to find the meeting, and to encourage people to come, if things like your times and who the organizers are still need updating. The reason I'm coming to the policy working group in particular is that I've been assigned as the liaison from steering to this working group; I'm kind of your friendly point of contact for working with you to get this report done. Because we know the nature of 2020 right now, there's a lot going on, we're not setting a specific deadline for when this needs to be done; it's something we want to work out with the folks involved in the working group. But my vision would be that sometime in the next couple of months we can collaborate, go through this annual report process, and then gather any feedback on whether it's useful. And after we collect the various annual reports, we want to see some action come from them: if there are ways that either the community can help you, or you can help the community, we want to see those things happen. I guess I'm wondering if there are any questions. One specific question: this document that you're looking at...
I think it was linked in one of the emails. Anyway, I was trying to get to the list of questions and I didn't have access, so I can follow up with you later and find out if that's something on my end. This document is in the community repo, so I've also linked it in the meeting minutes; this one redirects to our community repo. It's public, open source, so anybody can see it; there's no authentication or anything for this. Okay, there was one link that said click here for the list of questions, and when I clicked it, it said it was a Google Doc and I didn't have permission, so maybe I'll follow up with you and see if that's even necessary. But overall, yeah, I applaud the idea. I think it's good to collect this. I would say that hopefully the data, once collected, is consumable and easy to analyze. I guess when I have the DevOps hammer, everything looks like DevOps: if it were somehow codified, scripted, and available in some repo or data source that we could slice and dice, that would be cool. Yeah, all of the annual reports, once they're finalized, will be openly published. There are some sections in here where we offer the opportunity for chairs and organizers to come to us with things that might be sensitive initially. There are a bunch of questions in here, but the key thing that we're trying to get at from a steering committee perspective is: is the group healthy? And that question can mean lots of different things. It can mean: are you meeting on a regular basis? When people come with new ideas, is there an openness to them? It can mean that there aren't weird dynamics between either the people or the companies that are involved in a particular community group.
We want to know if things are healthy, and sometimes, at least initially, there might be cases where the organizers of a working group are saying, no, we're not healthy, and here's the reason, and maybe we want to be a little more private about what's actually going on than something we'd initially publish publicly. So we do offer a private review period for these annual reports. But once they're finalized, and any private information is removed or sanitized in a way that everybody, including the working group and steering, feels is ready to go public, we'll be publishing these publicly so that, again, the entire community can consume that information. The email that went out goes into some of the goals, but one of the things that I'm personally really passionate about is collecting the positives as well, not just the negatives. We do want to know if a group is unhealthy, but we also want to figure out: if something a group is doing really well is really working for them, is there a way we can either make that a best practice, or share that information with another community group or working group that isn't so healthy? That kind of cross-pollination of information between them, as well as just the raw "what are you working on," making sure that everybody knows what the various groups are working on and whether they need help. No, this is good. Thank you. And yeah, certainly happy to help, and we can work with Erica and Howard also to help fill in some of this. Yeah, so this was just kind of an initial introduction and discussion. I will be following up with some more directed emails to the mailing list and to the working group organizers and chairs to get the ball rolling on this.
But in the meantime, if there are any questions or comments or that kind of thing, you have a face and a name: you can come and contact me, and I can help answer or direct those questions to the right place. And in general, even outside of this particular process, I am your liaison. I don't make decisions on behalf of the steering committee, but I am kind of a communication conduit: if you need something from the community as a whole, or from the steering committee specifically, I am a person you can come and talk to, and I can point you in the right direction as far as any of those wider governance issues or resourcing issues. Alright, fantastic. Thank you. Great. Thanks, everyone. Okay, any other items or things we should discuss today? Alright, doesn't seem so, so thank you, everyone. I will make updates based on what we discussed for the categories. If you can take a look at the PR on the selectors and see if that addresses what you were looking for, or if there's anything else to be done, we can get that merged. And if there are any other ideas on the scalability, we'd love to discuss, and we'll follow up on that as well. Okay, I'm going through the doc again to look for the namespace thing you were talking about. Thanks. Okay, I'll ping you if I have any questions. Thanks. And I think, Jim, you would like all parties to post examples there, right? Yes. So please create PRs with some samples, because that will help us firm up the structure. You can just create a PR directly on the git repo. And if there's anything else that comes up that we feel we need in the reports, that will help flesh it out. Alright, well, thanks, everyone. Thank you. Bye. Take care. Bye bye.