Hello team. This is JJ. Give it a few more minutes, one more minute for latecomers to join, and then we'll get this started. In the meantime, people who have joined, if you want to type your name in the doc, please do. Also, any volunteers for scribing today's meeting would be highly appreciated. Who's running our meeting today? Are we waiting for the moderator? No, I just wanted to give a few minutes for people to add their names so that I can call out their name and harass them for an update. Yeah. Do you want only people that have something to say to put their name in there, or should we write "no update" in there? Yeah, "no update" is probably fine. Yeah, okay. I think you don't have to walk around the room. Yeah, I'm also doing it in the chat. Yeah. Thank you. I'm also doing the two-device thing because my wifi at home is not good, so I'm using the laptop for the doc and the phone for the call. All right. As people have their names in, let's get started on check-ins. Emily. Emily Fox. Hello, it's Emily. So, lots of great things happened over the past week. We got all the CFPs reviewed, we got an agenda put together, it's been put out, all the attendees have accepted, and we have great sponsors, which means that we've got lunch and happy hour. So lots of great things for Security Day. Yep. Thank you. Yay. Yeah, it's a happy check-in today. So... oh, we don't have two scribes. I can be a scribe as long as somebody else can also do it, because I'm also going to talk about the OPA assessment, so I do need somebody else who's willing to type. Yes. Can someone volunteer for scribe, please? I'll try to cover for scribe. So, I'm next, so I'll just give my update. I don't have much of an update. I'm working with Howard to see if we could pull in all the policy documents that are lying around into the SIG Security repo, so that it'll be easier to track and discover.
Emily Fox gave an update on that, and Sarah, Dan, and I will be meeting with John and Liz on the 30th, and we'll circle back with the team with any updates from there. Daniel, next. Yes. So, unfortunately, for the last couple of weeks I couldn't attend the meetings. Now I'm fully back, and I saw that the Falco assessment kicked in, so I will need to catch up with this, since I was one of the volunteers that wanted to do it. Yeah. I think there's an issue open now that Robert opened; I'll try to put it in the notes. But yeah, it would be great for you to chime in on the issue and make sure that you're on the list. Perfect. Thank you. Because I found it. Yeah, if you can link that. Yeah, I'll stick it in the notes. Thanks. Do you want to go next? As I come off mute: yeah, we've got a lot of work for Security Day at KubeCon. And one other note, we will need someone to do an update at our next meeting on the first, so I'm taking volunteers for that. Oh, these are the updates for the TOC about what the SIG is doing, all the things. So yeah, JJ did that last time. We wanted to iterate on the format a bit and make sure we have a little more lead time, so thanks for mentioning it, Amy. I mentioned it; I was able to go to the policy team meeting in the afternoon last week and mentioned it to Howard. Howard and Erica are the leads of that team. It would be great if we could try to get that PR done. I'm mostly talking to JJ because Howard's not available in this time zone. If we can get that PR closed and get feedback, then it would be sweet to link to it. It's really formalizing what's been going on for a really long time, but it's nice to surface that, I think, and that group's been doing some great stuff. So, I think that I'm available... oh no, I'm not, it's IIW next week. So I can help with the slides, JJ, in terms of iterating on the content, but I can't present.
I'll take it. I'll do the presentation; I'll work with Sarah and Howard on getting the content on the slides. So I'll just take the action item on that. Great. Thanks. Thanks for bringing it up, Amy. Am I up, JJ? Should I just go for it? You're already on it. So, I mentioned the syncing up with the policy group. I've also been catching up in advance of IIW, which is October 1st to 3rd if anybody's local. I highly recommend it; it's an unconference focused on identity that's been happening for many years. Like, the OAuth standard came out of work at that group, to get everybody to stop sharing usernames and passwords. And they're doing a lot of really interesting things. At the last couple, a lot of people have been focusing on self-sovereign identity, which is pretty interesting to track. So, catching up on reading, I dove into verifiable credentials, which are a relatively new W3C standard that is emerging, and Howard actually chimed in on Twitter and asked if that would be relevant for the group. So, since I'm learning about it, I thought I would ask other people here. We can talk about it later, but I just want to put it out there that I'd be up for seeing if I could get somebody from that effort to present to the group, if people thought it was relevant and interesting. And then my other update is also on the agenda, which is that I've been helping contribute to the OPA assessment, and JJ's asked me to talk about it a bit today. Thanks. If you can post the IIW link... It's in the agenda chat. Oh, perfect. Okay, I just stuck it in as an FYI. So, who's next? Christian Kemper, no updates. Ray has no updates. TK has no updates. Roger has no updates. But if anyone wants to chime in or chat on anything that's been mentioned, please feel free to. Otherwise, people that have not added their names, if there's anything else that you want to talk about, please do. I do have a question.
And I don't remember seeing it in any of the guides or docs that we have: as the security working group works on assessments and evaluations of projects, those documents are in draft but available to the group. What is the standard practice for using the information within those draft documents? It's a good question. Call-in User 1... it's Emily, by the way. I know. I have to say that. So what exactly is the question? So, when SIG Security is doing an assessment of a particular project, the draft and all of our recommendations and commentary and updates that we're posting through the Git flow are publicly available. Anybody can go in and see the PRs; they can see all the comments; they can see the dialogue that's going back and forth. But I don't know that we've officially documented, at least not that I can see, what the expected use of those draft documents is, or whether there's a caveat or disclaimer on them that all the information contained within this PR is draft until officially posted and made available on X site. I didn't know if that was something we should be discussing: what is the expected use, for SIG Security members or people outside of SIG Security, when viewing the contents of draft, unofficially published documentation? I think that's a really good question. I actually think it would be important to have a caveat. I've just been kind of assuming it goes without saying that all this stuff is unverified until we all approve it. But if somebody dropped in from, you know, wherever and wasn't aware of that, it could be amplified in a way that would be undesirable or misleading, right? And we get access to interesting information about some of these projects that's not necessarily publicly available until we start interacting with them and actually writing it down, creating a ticket, and submitting a PR on it.
And how do we provide assurances to those organizations of our due diligence, for ourselves as well as for somebody outside coming across it? Well, I think it might be worth looking at what we actually have. Let me see if I can get on Zoom and share my screen. Okay, are people seeing this? This is the security assessments folder in the repo. So, we have this language caveating the whole thing, right? At least the intent of this description was: this is to give you a path into thinking about the security of the project, not to replace your own process for determining whether it's a fit for you. And so there's framing of the assessment in general. We had a lot of discussion early on that we didn't want these assessments to be "approvals," because we don't believe that they're binary. And we've been careful to say that just because a project has work to do doesn't mean that's a negative thing; it's in fact a positive outcome of the process, and so forth. So I think it might be good to reflect on this a bit. Looking at it six months after we wrote it, I'm not feeling like it really conveys that caveat aspect. And then the other thing is, I don't know where we would put it, but I think it would be good to have something in here that at least says: while things are in draft, they're just an individual's opinion and should not be taken as truth, or something like that. That's basically the spirit of the team: if somebody asks a question and the project reviewer doesn't have an answer, that doesn't mean the question is unanswerable, and it doesn't mean the person has raised an issue that's a real issue. It could turn out that the whole thing was a misunderstanding. And that needs to be communicated somehow.
From a disclaimer and caveat standpoint, for people that are coming to the repo and coming across this information, I was thinking maybe a line or two in the README, and potentially expanding upon the code of conduct, because we are a security-focused special interest group, above and beyond the normal human code of conduct. So you'd have caveating in there: if you're a member of this group, the information that you're going to come across is always in draft and should not be considered actionable or run up the flagpole, for instance. Something to that effect, if that's what we want to do. I just don't know; I was thinking about it earlier today and went looking and didn't see anything beyond specifically what you had cited. Would you be up for doing at least an issue, if not a PR, that proposes something? I think that'd be great to add. And I love the idea of having it in the code of conduct, because we've gotten presentations before where people have said, okay, don't tweet about this yet, it's not published. People are generally very respectful of that, and I would love to see that reflected in our code of conduct. There's a two-sided thing there: of course, if people don't engage with us positively we may publicly document our findings, but until it's officially reported it's not confirmed, or whatever. There's a reporting rigor that we all practice, and newcomers should know that. I can work on that; that's basically what I just described. So, if there isn't anyone else who wants to check in, then we can dive into the OPA review. Let me ask first if there is anybody from any other working groups, Kubernetes SIGs, the policy working group, anybody who has attended one and wants to give an update. I have a question regarding our partnership with other SIGs.
How does it look? Because there are, for example, security issues and proposals in some of the SIGs that have been hanging there for years. Do we have any way to influence that somehow? Wait, where? Did I mishear? For example, I found one interesting thing in a SIG: there is a proposal for a runtime change, and it's been there for like two years. I was wondering what our relationship with the Kubernetes SIGs is, and whether we can influence or change something. I don't know. If you can point to the specific issue and talk about it, that would be good, but the overall stance is this: that SIG operates on its own, and influencing that SIG is not the goal of this group. It's rather to help that SIG by surfacing what the issue is to the rest of the wider community. That's the objective. But if you do want to bring it up in this group and talk about the specifics of it and why you think it should be prioritized, we can surface that to a wider audience. I always come back to our charter and mission: our mission is to reduce the risk that cloud native applications expose end user data or allow other unauthorized access. So if there is an issue out in the world, particularly in CNCF projects because we're part of the CNCF, and one of our projects has an issue that we think is risky to cloud native applications and the ecosystem, then I think highlighting our concern is appropriate. We have a forum here, and we have the ability to invite SIG Auth or a project to discuss an issue that we consider to be risky, and we can talk about why we consider it risky and what they know about it. I think that forum creates opportunity for action. Daniel, if you know that issue, I would create an issue in our repo to bring it up and talk about it in this group. Perfect. Then I will prepare it. Thank you.
Yeah, and I think wherever possible we should plan ahead and invite people from the relevant projects or other SIGs to have a discussion. Yep. Yeah. Okay, so if anyone else has anything to talk about... if not, then I'd like Sarah to give an update about the OPA review and learnings from it, and for the rest of us to chime in. Yep. So, I participated in this security assessment. For those of you who might not be following the details: we're on our second one. All of the assessments are tagged under this "assessment" label; if I remove the is:open filter, we have three assessments in total. in-toto is completed, Open Policy Agent we are on the verge of completing, and Falco we are on the verge of starting. Our goal is to have five assessments and then reflect on our process. Of course, if anything is in our way we can update our process, but we are, you know, taking baby steps here; we're doing our second assessment. We want to talk through our learnings, but not deep dive too much into "maybe we should do x, y, or z"; we just capture those. We also have another label for the assessment process. If you check that, you can see there are a lot of open issues. If you're participating in the assessments, or observing them, or hearing about the meetings, and you're like, wow, they should really do x, y, z, you can look at everything labeled for the assessment process. This is the time for us to capture what we're learning, and ideas about how to improve the process, and then we'll review all of those issues. So we'll go back after these first five assessments and do some improvements. That's kind of the big picture of where we are. If we go back to the issue: we're here going through this checklist of things, and now we have the PR out for the assessment summary, and Amy and I will need to schedule a TOC presentation shortly.
Whenever there's an opening on the calendar. And if we can wait until after KubeCon, I would be delighted. So touch base with Liz; I think Liz would like it to not wait that long. Let's not go into the details of scheduling here. Okay, happy to chat offline, but just to let you know, we want to make sure that at least Justin and Ash and one of the co-chairs are at that presentation, whenever we decide to queue it up. That is totally fine. So, this is the assessment of OPA. Some of you may recall we had a presentation by OPA some time ago, where they presented how it works. For background: policy is a big part of security; we have a breakout group that focuses on policy. In order to say that you have a secure system, you need to make sure that you actually have some policies and that they're being followed. OPA is a project that helps with this by making it so that you can write your policies in this Rego language and then validate them, doing the policy enforcement and implementing those controls in ways that machines can reason about, as code. So I kind of went through the summary, but now I'll go through this a little bit in order. We have this maturity section, which is kind of... we don't quite know how to define it, so with each assessment we make it up. But we have this idea that, as context for how we think about the improvements that we'd like to see, we want to have some indication of how widely used this project is.
We might not be the arbiters of success, but rather echo that information, because it affects what recommendations we have. If this is very early, new, experimental technology, we might have different recommendations than if it's used by almost every service on the internet. So what we did here was collect a set of companies that use OPA, which sort of indicates that it's under quite a bit of use, and link to their list of adopters. They're also getting community participation from a wide range of adopters, although, noting that contributions are primarily from Styra, there have been a bunch of conversations in the TOC about wanting open source projects that are primarily one company to have robust participation from the community, in case that company decides to do other things. Getting to sustainability: it's a little outside security, but it's sort of security-adjacent. It affects security because we've seen significant attacks of late that are based on something becoming unmaintained and nobody paying attention. So that at least seems important to me. And I think I saw a question in the chat. No? Thanks. So, I went a little bit over the design. I think the key takeaway from our perspective is that if you have heterogeneous infrastructure, or a high rate of change where lack of policy enforcement would create a big business risk, that's when the added overhead of implementing OPA would be valuable. This is a common situation: people have on-premise plus cloud, or multiple clouds, or different services that all need to have similar or the same policies. And what we're seeing is that heterogeneous infrastructure presents risks, because people can't reason about their policies or know that they've been implemented. That's sort of common in this cloud native world.
And so the added benefit of OPA also presents risks. It's great: you have these policy-as-code expressions that you can implement the same way across heterogeneous systems, and you can separate your security code from your application code. But these really require the same care as code, and there's concern that there will be a false sense of security just because you're using OPA. So a lot of our discussions were really around: how do you think about this policy language and make sure that it's saying what you want it to say, and that people understand what they're expressing when they write policy in this language? Sorry, do we have a feel for who the target persona for OPA is, in terms of the security personas that we have? It looks like it would fall under the platform implementer. Well, I think we have this in here somewhere. This is their self-assessment; we have the goal somewhere. I thought we had the target user. Well, we might not. Can you write that in the notes? I want to double-check, because I thought we had that somewhere, but it's a good question. From what I remember, I think the target user is the operator or the developer, and that you could use OPA... I mean, Netflix uses OPA and they're not a platform per se. I mean, I don't know, maybe they have APIs, but it's primarily to secure... No, no, but on the platform point: there could be people at Netflix that implement the Netflix platform for use by other Netflix engineers, right? Right, versus offering the platform to outside users. That's true. That's sort of interesting. Also, Christian, if you want to chime in on the PR with comments about this, that would be super valuable. I mean, I'll do that if you don't. Yeah, yeah, sure. Oh, that'd be fabulous. Thanks for pointing that out. So, yeah, in the use cases doc:
We have these personas: users, operators/administrators, developers, and platform implementers, and the security assessments are supposed to focus on who uses this stuff. So we should make sure that we covered that. But I think that's interesting: there might be an opportunity, in looking at who's using OPA, to find some of those platform implementers you've been looking for, Christian. Yeah, because I know in the Gatekeeper project they have separated these personas. Gatekeeper is one of the OPA sub-projects. Right. They have separated these personas, and I was wondering what the official ones are. Okay, thanks. Cool. So, do people feel like they're familiar enough with OPA? Do you want me to talk through some of the self-assessment, to explain what it does, or should I go straight into the recommendations? Does anyone have any... or maybe we should start with any further questions on what OPA does. Yeah, does anyone want any basic info about OPA that'll help them understand what the review is about? I would say no, because I think we've talked about OPA... yeah, it'll be worth a 30-second intro to OPA before we go on. Okay. So, generally, it's for controlling access to a service. With the caveat that I am not an OPA person and I've never actually used this technology hands-on, so people feel free to correct me: it separates the data coming in from the policy. In general, the data and the policy are combined into an intermediate document that's evaluated for a decision. You write your policy in this Rego language, your data is expressed in something like JSON, and then they're evaluated, and the decision can be yes, no, or "I don't know," so that you can compose these policies together.
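The evaluation model Sarah describes, a policy plus an input document evaluated to a yes, no, or "I don't know" decision that can be composed with others, can be sketched roughly. This is a minimal illustration in Python of that decision model only; real OPA policies are written in Rego, and the policy functions and input shape below are hypothetical:

```python
# Minimal sketch of the decision model described above: each policy
# evaluated against an input document yields allow, deny, or "undefined"
# (the policy says nothing about this input), so policies compose.
# Hypothetical illustration only; real OPA policies are written in Rego.

ALLOW, DENY, UNDEFINED = "allow", "deny", "undefined"

def admin_policy(input_doc):
    """Admins may do anything; says nothing about other users."""
    if input_doc.get("user", {}).get("role") == "admin":
        return ALLOW
    return UNDEFINED

def readonly_policy(input_doc):
    """Anyone may read; everything else is denied."""
    return ALLOW if input_doc.get("method") == "GET" else DENY

def evaluate(policies, input_doc, default=DENY):
    """Compose policies: the first definitive answer wins, else a safe default."""
    for policy in policies:
        decision = policy(input_doc)
        if decision != UNDEFINED:
            return decision
    return default

policies = [admin_policy, readonly_policy]
print(evaluate(policies, {"user": {"role": "admin"}, "method": "DELETE"}))  # allow
print(evaluate(policies, {"user": {"role": "dev"}, "method": "GET"}))       # allow
print(evaluate(policies, {"user": {"role": "dev"}, "method": "DELETE"}))    # deny
```

The safe default when no policy speaks is the kind of design decision the group's "secure by default" discussion later in the meeting touches on.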
This OPA piece is generally deployed as a sidecar, but they have some libraries and different deployment models, so you could bind it into your service or run it as a sidecar, whatever mode you want to run it in. Okay, any questions or observations? Yeah, I do have a comment, and possibly a question. With OPA, from a security perspective, since we're the security working group: we are expanding the attack surface, meaning OPA itself will be open to some vulnerabilities and to being attacked, and the policies could be manipulated. I was wondering if you have seen anything specific as to what preventative measures OPA is taking or recommending. So I think that with the addition of any part to your system, you're expanding the attack surface, but then you have to think about whether the issues you're mitigating are bigger than what you're adding. And that's part of our analysis of where you probably shouldn't be using OPA: if you have very simple policy and a homogeneous system, it would add more complexity than is merited. That was sort of our analysis. To answer your question, though, we went through this process of articulating what things are risky, and if OPA is successfully attacked, it's your point of policy access, and that is pretty risky. There are actually a lot of sharp edges around whether you've set up OPA correctly and whether you're managing your policies effectively, because OPA isn't a policy management system. You have to figure out, outside of OPA, how you're going to distribute your policies; that's this Gatekeeper project that Christian mentioned. So it is a piece of the puzzle; it is not the solution by itself.
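As a rough sketch of the sidecar deployment model just described: a service typically asks the co-located OPA agent for a decision over OPA's REST data API (a POST to /v1/data/<policy path> with the request document under an "input" key, the decision coming back under "result"). The policy path and input fields below are illustrative, and the actual HTTP call is omitted:

```python
# Sketch of a service querying an OPA sidecar over its REST data API.
# Decisions live at POST /v1/data/<policy path>; the request document goes
# under "input" and the decision comes back under "result". The policy
# path and input fields here are illustrative, not from a real deployment.
import json

def build_opa_query(policy_path, input_doc, host="localhost", port=8181):
    """Return the (url, body) pair for an OPA data-API decision query."""
    url = f"http://{host}:{port}/v1/data/{policy_path}"
    body = json.dumps({"input": input_doc})
    return url, body

def parse_opa_decision(response_text, default=False):
    """Extract the decision; an absent "result" means the policy was
    undefined for this input, so fall back to a safe default (deny)."""
    return json.loads(response_text).get("result", default)

url, body = build_opa_query("httpapi/authz/allow",
                            {"user": "alice", "method": "GET", "path": "/salary"})
# The HTTP POST itself (e.g. via urllib.request) is omitted here; a
# running OPA sidecar would answer something like:
canned_response = '{"result": true}'
print(url)                                  # http://localhost:8181/v1/data/httpapi/authz/allow
print(parse_opa_decision(canned_response))  # True
print(parse_opa_decision("{}"))             # False (undefined -> deny by default)
```

Treating "no result" as deny rather than allow is one of the sharp edges the discussion below gets at: a misconfigured query path can silently return undefined.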
And I think that's the key thing that we want to surface, so that people understand what they're getting when they adopt OPA. Go ahead. Yeah, that's a good point. I kind of assumed that would be the case, that they'd be decoupled. I wonder if OPA is also giving recommendations for the implementation. For example, if you have a centralized policy engine for a complex environment, such as what you were alluding to earlier, where OPA could be implemented across multiple clouds and different types of environments, all trying to consolidate the policies so that you have uniform and consistent policy. When you do that, obviously, you're creating a kind of brain, somewhat centralized in some sense, and that becomes even more sensitive to the operation of the whole enterprise and its users. I wonder, does OPA go into the implementation part of it, as far as any recommendations? So, they actually have pretty comprehensive deployment docs that go through a couple of different models. And I think this Gatekeeper project is really about them saying: okay, this is a common need, expanding beyond the agent that evaluates policy to the other parts of the ecosystem, because there is a need there. But those are really good questions, and they're kind of a good lead-in to our recommendations, which, like I mentioned, focus mostly on the potential for confusion. There's sort of an assumption in this whole project, which maybe understates your point, TK, that you need to manage your policies really well and make sure that you're not just moving where you're being attacked to someplace that is less secure. Right. Yeah.
Just on a personal note: there seems to be a common pattern in a lot of the exploits I read about, which is that things are not configured the way people think they are. The systems become so complex, people have so many VMs and services running, that it's easy for something to not be secured at all, and things end up being wide open on the internet unintentionally. One of the things that makes me interested in following OPA is that that's the thing you're mitigating: the "oops, forgot to secure that, forgot to update this; I have to update my policy in 15 places and it's in different formats," where it's too easy to make a human error that misses those. So it might be good for TK or somebody to check, in reading the overview, whether we address those points. Yeah, I think it is also called out in the review, in terms of what the scope of OPA is and what it solves and what it doesn't. There is some callout there, but it would be useful to chime in on the PR and add to that as well, to be clear on the increase in attack surface versus scoping it down, and the tooling to mitigate some of these. Once we chime in, for example, on this, are we going to consolidate all our issues at some point and feed them to the project? So, this process has led to writing up or highlighting OPA issues. Many of the OPA issues in the project recommendations came out of the review. The review is really owned by... there's a self-assessment where Ash, who is a contributor to OPA, owns getting that over the line. And then we report issues into OPA, so that once this PR is in, there are open issues tracking everything we raised.
So if we chime in, we're doing sort of two things. One is producing this document, which is kind of anybody's guide to understanding the security profile, the risk profile, and the benefit of this particular project; the other is allowing us to track these open issues. And we've been chatting about writing these issues such that, since there's been talk of re-reviewing these assessments periodically, maybe annually, if a particular project hasn't added any features in a year we could do a cursory review, just look at the issues and do a quick update, whereas if a project has added a bunch of features related to security, then maybe we would do a full assessment. So we're trying to queue this up so it's easy to update, if that turns out to be reflected in reality. Does that make sense? Yeah, yeah. No, I think that's good. So, at this point, the code that has been contributed is very Styra; the contributors are mostly Styra. Well, there's somebody from Chef. I just looked through these 77 contributors with Ash, kind of looking through the top contributors. I don't remember which of these people it was, but there was somebody in the top four from Chef, which seemed to me to be a good sign. And then there's somebody from Google who's pretty far down, because it's mostly spec stuff. Here we go: Tristan has worked mostly on the Rego spec, which is also a sign that it's not just Styra. But the vast majority of the contributions are Styra, so overall, for the ecosystem, provided this continues to be adopted, I'd like to see more contributions from other companies. It's a process, but they seem to be making good progress on getting wider contributions.
Did we have any confrontations during this assessment, in the sense that some findings conflicted with the ideas of OPA, or whatever? Well, I think confrontation would be too strong a word. We had some good discussions around what has, this being our second one, become the norm, which is: where are the edges of OPA's responsibility, particularly around usability and around defaults? It's very challenging to make things secure by default, because the most secure thing by default is to just turn off access completely, and that's not useful. So how do you make it less likely that somebody is going to do something incorrect because they don't know what they're doing? We ended up having a brainstorm, because Ash came in with a stance which I think is sort of reasonable from their perspective, which is: well, we're giving you a sharp knife; people need to do these different things; we can't anticipate them, and we don't know what your policies are, so there's not much to do there. And, you know, initially all of the project recommendations were turned into documentation improvements. Right. So, a very quick time check: we have 10 more minutes. And a couple of things on this. This discussion that you already had, can it be captured and put onto GitHub, if you haven't already done that as part of the assessment? That's what I'm getting at here. Okay. So I'll just wrap this up in the next few minutes. All I wanted to bring up is that we only have 10 minutes; if you had more things to cover, I would take this offline, but otherwise, if this is what you wanted to do, I'm more than happy to continue. Yes, so this is the whole thing: we shifted some of the thinking around whether it was possible to make code changes that would make this easier to use.
And so we've linked some of the ideas here, if people have ideas in this realm. We can dive into the specific issues; these are really starting points that the OPA team, and anybody who wants to get involved, can add ideas to. And I'll round out by talking about the CNCF recommendations. Similar to our last project, there are certain things the project is not well positioned to do. If the CNCF wants to support this project more, a study of user practices around it — whether people are falling into common patterns, and learning from the end-user companies where there may be specific integrations that should be higher priority based on what people are actually doing — would be more impactful than other things they might do, because we don't have visibility into what the CNCF end-user companies are doing. So that's us. Yeah, so was it Brandon who asked that question? I didn't catch the name. Is this the first time the discussions, or the controversy, have been brought up? It was Daniel. Daniel, okay. Yes, this is the first time. But that might also be an interesting thing as part of the reflection: what did we really learn, and what were the things that were maybe unexpected by either the project team or by us? I like that way of thinking about reviewing these security assessments. Yeah, those are good inputs. So we have seven more minutes. Any thoughts or questions on the process itself or the project? Again, the reason for bringing this up in this forum is that the PR is open. I'll be doing a review of it, but anybody who wants to chime in and add comments that need to be considered — that would be very helpful. If not, I'd like to give a couple of minutes to Michael Ducy, who joined later in the meeting, to do a check-in. Sorry, JJ. That's yours, Michael, to give an update.
Yes, on the security day. Sorry, I missed the first part of what you said. This is — I just want to give you a couple of minutes to do a check-in and an update. Emily Fox gave an update on security day, or at least touched on it, so if you want to give a little more detail on that. Yeah, sure. Sorry. So, as Emily probably already said, the schedule is now there and published. We've had an extremely good response on sponsorships, which has been extremely positive, and that means we're able to provide things like lunch for attendees. Amy, have we gotten an update on registrations? We have not. Okay, last I knew it was around 81, and now that we have a schedule out, we feel we'll be able to push even more on registrations. I think we were thinking about capping it somewhere around 150. The next thing we need to figure out is how much room we have and what we can effectively do with that space, so that's probably the next priority we'll be working on in our weekly calls. Outside of security day: the Falco project is coming up on our yearly review, and we're getting ready for that, which is going to be on October 15. I believe that's correct. Amy, is that right? Yes. So we're excited for that. We've made lots of good progress, and we're shipping some very cool, interesting features, one of which we're probably merging some of the code for today: gRPC-based outputs. One of our sticking points has been that a lot of our outputs and alerts have been done in a fairly synchronous fashion. gRPC allows us to offload the alerting engine from the main Falco engine, so we can have subscribers written in whatever language, and those subscribers can then forward the events and alerts into whatever system — Elasticsearch or Kafka or whatever it might be.
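The decoupling described above can be sketched as follows. This is an illustration only, not Falco's actual implementation: a real deployment would use a gRPC server-streaming RPC, and here an in-process queue stands in for the stream, with the alert fields and function names invented for the example. The point is that the engine publishes alerts without blocking, and independent subscribers consume them and forward to a backend.

```python
# Sketch of the producer/subscriber decoupling discussed above.
# NOT Falco code: a queue stands in for a gRPC alert stream, and
# appending to a list stands in for forwarding to Elasticsearch/Kafka.
import queue
import threading

def engine(stream: queue.Queue, n_alerts: int) -> None:
    """The 'main engine': produce alerts without waiting on subscribers."""
    for i in range(n_alerts):
        stream.put({"rule": "Terminal shell in container", "id": i})
    stream.put(None)  # end-of-stream marker

def subscriber(stream: queue.Queue, forwarded: list) -> None:
    """A subscriber: consume the stream and forward each alert."""
    while True:
        alert = stream.get()
        if alert is None:
            break
        forwarded.append(alert)  # stand-in for a backend write

stream: queue.Queue = queue.Queue()
forwarded: list = []
t = threading.Thread(target=subscriber, args=(stream, forwarded))
t.start()
engine(stream, 3)  # engine returns as soon as its alerts are enqueued
t.join()
print(len(forwarded))  # -> 3
```

Because subscribers only see the stream, they can live in separate processes and be written in any language with a gRPC client, which is the portability benefit mentioned in the update.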
Having this kind of gRPC-based streaming service is going to be really beneficial to the project. I was also running some numbers on how we've been performing in sandbox versus pre-sandbox. One of the interesting metrics: pre-sandbox, we had about 34 daily active users in our Slack channel; after sandbox, we have about 104. From a weekly-active-user perspective, it went from about 60 to 200. So the community is really thriving; we've got a lot of activity and a lot of stuff going on, and we're really excited about how the CNCF engagement is helping us out and benefiting the project. Any other questions I can answer? So on that note, I think Krishna was scheduled to give a demo of Falco next week, so it would be worthwhile for you to coordinate with her to add any of these stats to the demo presentation. Yes, and we wanted to push it off until next week; we wanted to have the gRPC code a little more finalized so we could actually show it off to the members of SIG Security. I think we're right about on time. Does anybody else have anything they want to bring up in the next two minutes? If not, it's a wrap. Thanks everyone. Bye.