Hi, can you hear me? Yes, yes. OK, let me quickly check screen sharing just to be sure. These slides, can you see my slides? Yes. All right, perfect. Thanks. Hey, everyone. Greetings. Give me a second, let me pull up the meeting notes. Hello. All right, so I think today's agenda is going to be the keynote presentation, right? Let's see. Yeah, it's JJ here. I think he said he was going to facilitate, but if not, then I can do it. All right, thank you to whoever copied over the template. OK, we need one or two scribes for the meeting. Scribing is not as much heavy lifting now, because we actually have access to the transcriptions; it's just a bit of action-item note-taking. So if anyone would like to help out with scribing, that would be great. I'll take the scribing. This is JJ. All right, JJ. I see you're trying not to facilitate, so I'm good. If you want to facilitate, I can scribe. No, no, no, go ahead and facilitate. OK, let's give it a couple more minutes for people to trickle in. All right, let's get started here, everyone. So today it's going to be a presentation for the Keycloak assessment that just recently wrapped up. I'm going to quickly go through check-ins; I think there are no check-ins for the most part. Mark, I see you posted a second message that summarizes some of the points we talked about at the last meeting, which I think is also a good list of topics that we would be interested to hear about. So if any of these topics are interesting to you, please feel free to comment on the thread, and if we see that particular topics are of interest to people, we can dedicate some time during a session to talk about them. If not, let's just go ahead with the presentation. Actually, hold on.
Emily, Vinay, JJ, do you want to give a quick update on the white paper, or shall we do it next week? Yeah, I think we are just going to take one minute to say that there is a presentation that will happen at the next meeting. There's a group that's been working to put together the white paper. And Emily, the link is available for people who ask for it. Yeah, the link is in the ticket. So if folks are interested in seeing where we're at, I don't have the issue number in front of me right this minute, but you can jump over to the white paper channel or go through the GitHub issue; everything is linked there. Right. And Vinay will be presenting a five-minute version of it at the next weekly meeting. So please feel free to pass the word around, and please feel free to show up and engage in the conversation and discussion. That's all. Go ahead, Vinay. Thanks, JJ, Emily. All right, let's go ahead with the Keycloak assessment. Brandon, I've got one other thing real quick. Go for it. So Security Day North America is going to happen. So if you're interested, and yeah, Justin's giving a thumbs up, and Vinay can vouch for it, keep an eye out. I'll be posting more information in the channel, and we should hopefully have the website updated within the next week or so. The program committee is solidified and firmed up. So keep an eye out, and if you have questions about it, please feel free to reach out to me. Awesome. Thanks, Emily. Cool. All right. Let's go ahead with the presentation. So, Bolek and, what is it, Stian? I'm not sure if I'm pronouncing that right. Will you be taking this one? Stian? OK. Great. So whenever you're ready, please feel free to share your screen. All right. Just a sec. Yeah, take it away. All right. Cool. Yeah. So I'm Bolek, and Stian, who is with me, is actually a project co-founder. We have both been with this project since the beginning. Yeah, we'll do a quick overview.
We'll focus mainly on the security degradation parts and typical deployment configuration mistakes. And just to highlight, this whole assessment document turned out to be pretty lengthy; it's over 30 pages in Google Docs. So we are skipping a lot of parts: internal design, customization, and several other topics I'm not addressing here. Feel free to go read them there. And I made a PR with a Markdown conversion. I know I need to fix some stuff in the PR, but it should be on its way to being merged into the repo as well. Stian? Yeah, can you hear me? I was having trouble finding the unmute button. Yeah. So, giving a very quick introduction to Keycloak. Keycloak is obviously an identity and access management solution, providing various different protocols and integration options. We do OpenID Connect, SAML 2.0, UMA, and OAuth 2.0 on the application integration side. The one point to make there is that the core of Keycloak is made protocol-agnostic, so we are able to add additional protocols in the future if we need to. Then we support integrating with external sources of identities. So we do LDAP and Active Directory, you can integrate with a relational database if you want, and we have various different integrations with social networks, as well as integration with OpenID Connect and SAML identity providers. Next slide. We aim to make it really easy to run Keycloak and have it ready out of the box with a single command. We have container images, of course, Kubernetes template examples, a Kubernetes operator, and so on. Next slide. Just to illustrate how easy it is to get started using Keycloak, this is showing our client-side JavaScript adapter and basically wiring login into your application. Next slide. Obviously, once you log in, you are redirected to Keycloak and its login pages to do the login. These are highly customizable and extensible; they can be changed to match your look and feel.
And you can see here, there are quite a lot of options enabled, but various things can be enabled and disabled as a configuration option. We do stronger authentication, of course: OTP, WebAuthn. We support custom flows, and we are continuously extending in this area to improve and make sure that we stay modern here. Next slide. We have various different options for authorization. We have roles and groups, basically anything that you need for RBAC-type rules. We also have a fully centralized authorization service, which has integration with UMA. Next slide. As I said, we have federation of users; I mentioned that before. Next slide. We have a pretty extensive admin console. It lets you manage and configure most aspects of a Keycloak server. So you can register your clients, manage your users, change configuration for your realm, manage authentication flows, manage sessions, view audit logs, and so on. Next slide. We also have a console for self-service, where end users can manage their own accounts: they can reset their credentials, enable OTP or WebAuthn or whatever, view what sessions they have logged in, remotely log out external sessions, and so on. Next slide. Since we have end-user-visible UIs, the login pages and the account console, we have a theming system that allows you to easily customize them and integrate them well into your existing applications or your organization's look and feel. We also have a huge number of extension points where you can add custom code to extend the server. A brief look at the current statistics around the project: on Docker Hub, we have about 68 million pulls, and alongside that comes quay.io. We have about 7,000 stargazers at the moment and 430 contributors, and about 80,000 or so unique visitors to the website each month.
And from the website traffic, we can see that interest in the project is increasing on a yearly basis. Was this yours, Bolek? Yeah, just to highlight, Keycloak doesn't aim to cover every single standard. We try to remain fairly slim and focus on the modern ones. Also, because we stick with OpenID Connect, we don't try to provide SDKs or integration libraries, but just rely on proven ones from each technology stack. We're certainly not implementing any crypto algorithms ourselves; we're not a Kerberos server, not an LDAP server, not a LAV, not a certificate authority. So we try to be fairly focused as well. So that's wrapping up the project overview, just to be quick. If you have any questions, please do interrupt. I think the slides are fairly self-explanatory, so if you have any more focused questions, those are also welcome. Just to mention, there was one more proper security audit, funded by one of the customers or users of Keycloak, which is Reva Digital. And there were a number of CVEs identified. I think two, one of an informational and one of a low nature, are still open, but everything else was addressed. And in essence, the picture they painted of the project was good. As part of this assessment, there was one CVE identified, which was patched, both on the community side and in the product distribution Red Hat is doing, last week. In essence, it was a DoS attack due to weak out-of-the-box settings. It was probably not affecting proper deployments where the configuration was tweaked or the server was properly fronted, but anyway, it was actually a fairly serious one. So we fixed it by changing some out-of-the-box server settings. Now, around security degradation: obviously, this is a critical piece of infrastructure. It's an identity provider and authorization server, so it's a tempting piece of infrastructure to attack, and compromising Keycloak potentially has fairly devastating effects.
So: taking over user identities, rendering applications unavailable, and pretty much all of the critical systems can fall apart if Keycloak is compromised. Some of the predisposing conditions: at each level of the technology stack, CVEs do happen. So the OS layer, the JVM layer, the application server we use as our runtime, and especially the various libraries we use for configuration parsing or for networking; those get patched once in a while. Lack of SSL, and I'll talk about this a bit more: it's a fundamental foundation of OAuth2, because OAuth2 is a redirect-based protocol, so SSL needs to be in place, and the redirect URIs need to be secured properly. So there are several best practices around OAuth2 you ought to follow and ought to be aware of. And at the same time, there's not securing the backend properly, or simple stuff like not keeping configuration up to date or not keeping up with server updates. We see some of those examples once in a while. So, rendering the server unavailable: that's potentially the easiest attack, and ironically, that's what was identified as a CVE as part of this assessment. The impact is users not being able to authenticate to applications, and depending on how applications integrate and handle authentication, potentially also impacting the availability of those applications themselves. To prevent it, the best thing would be a proper HA deployment with clustering, ideally with separate, geo-separated data centers, also fronting Keycloak with a proper rate-limiting solution, load balancer, et cetera, and especially properly tweaking the config and sizing of the deployment. On the server integrity part: obviously, it contains sensitive user data, so as for impact, at different levels of granularity, gaining access can be very impactful. User profile information could be leaked.
A user identity could be taken over: if tokens are leaked, potentially you can take over existing sessions the user has with various applications. And the most catastrophic possibility is gaining administrator access; that's probably, you know, the end of it. Especially if it's an administrator of the master realm, that can be devastating. On the mitigation side, there is a bit of a similarity with not using the root account in Linux: don't leverage the master realm admin account; instead have separate admin accounts with the more fine-grained admin permission constraints it's possible to configure. Key rotation, both regular and in case of a breach, is also possible. In case of a user session takeover, it's possible to enforce not trusting refresh tokens which were issued before a certain timestamp. TLS is a must, as is strong authentication, not only for applications and users, but also for Keycloak administrators themselves. Use good algorithms for password credential hashing; there is also a pluggable SPI there, so vaults and different sources can be used. Limit the token audience, use database-level encryption, and there are also several options in Keycloak itself: you can enable simple brute-force detection and so on. There is a whole section in the documentation called threat model mitigation, which tries to go over typical attack vectors and discuss what can be done, like proper hashing in case of the database being compromised, or compromised keys, and so on. Now, on to the typical deployment and configuration mistakes which would increase the risk of attack. When you provision a server, there is a restriction: the initial admin password can only be set via localhost, either via scripts on the command line, or in the browser situation it needs to be localhost access as well. Lack of TLS, like I said, that's a cornerstone for anything OAuth-related.
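The mitigation mentioned above, not trusting refresh tokens issued before a certain timestamp, boils down to a per-realm "not-before" cutoff. A minimal sketch, assuming JWT-style `iat` semantics; the realm name and cutoff values are made up for illustration, and a real server would persist the policy rather than keep it in a dict:

```python
# Sketch of a "not-before" revocation policy: after a suspected breach,
# refresh tokens minted before a per-realm cutoff are rejected outright.
NOT_BEFORE = {"example-realm": 1_700_000_000}  # realm -> cutoff (unix seconds)

def refresh_token_usable(realm: str, issued_at: int) -> bool:
    """A refresh token is only honored if issued at or after the cutoff."""
    return issued_at >= NOT_BEFORE.get(realm, 0)

print(refresh_token_usable("example-realm", 1_700_000_500))  # True: issued after the cutoff
print(refresh_token_usable("example-realm", 1_699_999_000))  # False: issued before, treated as revoked
```

The point of the design is that compromised tokens already in the wild become useless immediately, without tracking each token individually.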
But what often happens is that in R&D situations developers just go without TLS for experiments or while working on it, and then you need to remember that TLS is a must when you move to production. With admin passwords, there is a fairly sophisticated set of password policies you can enforce, but those are not enabled when you set the initial admin password, so this needs to be taken into consideration as well. Yes, I already talked about the master realm admin account. And yeah, it's a fairly deep software stack, and bugs and CVEs do happen at each layer, so keeping track of CVEs and updates, as with any software, is a necessity. What we also see: password hashing is key, but there is a trade-off to be made, and you can go to both extremes. If you go too strong, you can impact performance; if you go too weak, in case the database is leaked, there is not enough protection. Also, not maintaining a proper size of the token: if you store too much unnecessary information in a token, or in OAuth clients, it impacts memory consumption, and in such a case it will be way easier to do a denial-of-service attack. Then there's properly managing strong authentication, and limiting the token audience and the scope of applications. There are ways to prevent API misuse or limit the damage if access or refresh tokens are leaked, but again, developers need to be aware of them. Usually the biggest enemy of security in the case of a solution like Keycloak is convenience. Developers pick up Keycloak, they experiment with it, they don't configure TLS, they go the lazy path, like using public OAuth clients for everything because they don't require credentials, or using confidential OAuth clients for everything, but those secrets cannot be securely stored in client-side applications. Again, for niceness and convenience during R&D, you can configure redirect URIs with wildcards.
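The hashing trade-off described above can be illustrated with the standard library's PBKDF2; Keycloak's actual hashing providers and default iteration counts differ, so everything here is an illustrative sketch. A higher iteration count costs more CPU per login but makes an offline attack on a leaked database proportionally harder:

```python
import hashlib
import hmac
import os

def hash_password(password: bytes, iterations: int):
    """Derive a salted PBKDF2-HMAC-SHA256 digest; the iteration count is the
    tunable work factor the talk is warning about (too low = weak, too high = slow)."""
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

def verify_password(password: bytes, salt: bytes, digest: bytes, iterations: int) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    # Constant-time comparison to avoid leaking match length via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password(b"s3cret", iterations=210_000)
print(verify_password(b"s3cret", salt, digest, 210_000))  # True
print(verify_password(b"wrong", salt, digest, 210_000))   # False
```

Benchmarking `hash_password` at different iteration counts on your own hardware is a reasonable way to pick the strongest setting your login latency budget tolerates.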
But then there were actually a lot of CVEs, or pretty dangerous things, reported even against Google or Facebook, where redirect URIs were not verified properly enough. So this is also a huge attack vector. And then there are certain capabilities not enabled out of the box, which you can leverage in production, but you need to be aware of the need to enable them. And going to the end: OAuth2 and OIDC are constantly evolving standards. What was considered good practice when OAuth was released is not good practice anymore; for example, the implicit flow is not considered something to be used. At the same time, there are a number of newer RFCs, like PKCE, which are now considered standard best practice to use. There is a new effort, OAuth 2.1, to refresh things, include the newer RFCs, and remove the older, outdated mechanisms. But the point I'm trying to make is that you need to be monitoring the specs and keeping up with the current situation. Stian, you want to take this one? Yeah, so we have a bit of a focus going forward on ease of use and making it easier to scale Keycloak out of the box, and also making more decisions toward secure-by-default. By that, we are going to require HTTPS by default, for instance, and we're going to require developers to explicitly configure Keycloak into dev mode. And we're also looking at introducing a policies framework around clients that can allow you to have quite strict requirements on what client configuration is used, as well as how clients are using OIDC and SAML. And this can be leveraged then to, for instance, require all clients in your realm to use PKCE, or to allow requirements on what kinds of redirect URIs are used, and all that kind of stuff. So we do have quite a lot of plans in this area going forward. And that's trying to keep it brief. Any questions? I have a question, if I may.
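The PKCE mechanism mentioned above (RFC 7636) is essentially a hash commitment: the client sends only a challenge with the authorization request and proves knowledge of the matching verifier at token exchange, so an intercepted authorization code alone is useless. A minimal S256 sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Client side: pick a random verifier; its S256 challenge goes in the
    authorization request (RFC 7636 base64url encoding, padding stripped)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def challenge_matches(verifier: str, challenge: str) -> bool:
    """Server side: at token exchange, recompute the challenge from the
    presented verifier and compare it to the one stored with the code."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge

verifier, challenge = make_pkce_pair()
print(challenge_matches(verifier, challenge))          # True
print(challenge_matches("attacker-guess", challenge))  # False
```

This is why PKCE is now the recommended default for public clients, which cannot keep a client secret.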
I'm just wondering if you could compare and contrast Keycloak with a lot of the other native capabilities in Kubernetes, as well as with other service mesh technologies that are providing authorization and authentication frameworks. So maybe it might be helpful, especially for me, to see the specific use case, and also how does it compare to, or does it complement, SPIFFE and SPIRE? Does the question make sense? Yeah, in a way. So we focus on authenticating users and APIs and on delegated access, right? So OpenID Connect and all that kind of stuff. While SPIFFE/SPIRE focuses on application-to-application. So, you know, there is no overlap between those two; they are complementary rather than anything else. Did you have other things in mind? Got it, yeah. And then I'm just curious, do you get similar capabilities? Let's say you take Istio, right? Istio has an authorization and authentication framework. Does Keycloak have a lot of those capabilities? How do they position themselves? Are you complementary with those kinds of capabilities? Or is this essentially for different use cases, where, for example, if you're not using an Istio kind of service mesh, you can supplement with those kinds of use cases? Istio leverages OIDC for end-user authentication, but it kind of ends at the ingress level. So Istio has OIDC integration, and then you can map claims from the token it receives into their RBAC policies. Right, so you're saying it's a little different? I can probably try to explain it a little bit differently. So Keycloak is aiming to be your identity provider, right? It is managing your users and validating users, and then issuing information about the users. And once the user is authenticated, then you get into Istio, and that, based on the tokens from Keycloak, will verify whether or not you have access, right?
So we focus a lot on the server side, and on the client side we are more trying to integrate with what is already available, rather than trying to duplicate that. I think there's also a bit of a fundamental difference in approach. So Keycloak has more sophisticated authorization capabilities, like full ABAC, attribute-based access control, but then enforcement happens at the application level. With Istio, with the control plane and proxies, SPIFFE/SPIRE could approach it in a way where you don't do any security within those microservices or applications, and you apply it on top. Which, for some actors, is probably a great approach, because you take security out of the hands of developers, apply the policies on top of your graph of endpoints, and enforce them at the sidecar level with the proxy, and do policies with that. Totally makes sense. Thank you both. Thank you very much. Yeah, so I would like to add something. So in Istio, you have peer authentication and request authentication, right? Peer authentication is mainly used for the identity of the services inside Istio, which is handled by istiod, which is the identity provider, or certificate authority, inside Istio. But if you look into request authentication, that is basically based around JWTs, and that JWT can be issued by any IdP, including Keycloak, right? So Keycloak can act as an IdP which can issue the JWT, and that JWT can be verified by the Envoy sidecar inside the Istio cluster. So there are two types of authentication in Istio. Exactly, no, perfect. Thank you so much, Vinod. So I have a quick question about that, actually. I just want to clarify.
So you're authenticating identities based on the set of identity providers that you've registered. Are you just forwarding the tokens and the identity material on to the application, or are you also creating a token to say that Keycloak has authenticated this identity and these are some of the claims it has? So, if I understand the question correctly, Keycloak obviously issues two different tokens. One is the ID token, which is there to establish your authentication and is typically aimed at a front-end application, right? And then you have the access token, which is typically used to invoke a REST API or a service or whatever. So with Istio being a service mesh, there is a front-end application invoking Istio and passing on this access token, right? That can then contain whatever information you need in terms of the policies that you set up inside Istio with regard to authorization. So it can contain the subject of the user, it can contain roles or groups, if you want to do that, or any arbitrary claim. And then it's up to you how you set up the authorization in Istio: what claims you need and what is needed for the request to be permitted. Gotcha, all right. Thank you. That answers the question. I had one question in terms of account revocation and the SLA or quality of service around that. Obviously, for maintained sessions, is there a guarantee for the time between revocation and the time it gets enforced, or is that documented somewhere? I don't quite understand the question. So let's say you are brokering with multiple different identity providers, and then I revoke access, or I shut off access, to somebody. You're going to have a long-lived session for a user, a user session, right? And what is the session TTL? How do you deal with it? Okay, so with OIDC, for instance, you have the front-channel and back-channel logout specifications. So you are in some way reliant on the external identity provider.
So obviously, Keycloak is registered as a client in the external identity provider. And when the session is invalidated at the external identity provider, that identity provider is then responsible for letting Keycloak know that the session is invalidated, right? So it is somewhat weak in that regard, but this is really what is available in OIDC; there isn't any sort of polling or additional pushing mechanism on top of this. So you are basically relying on the external identity provider calling you and letting you know that the session is no longer valid. But obviously, we have mechanisms on top of that. So you can manage your sessions inside of Keycloak, and we have session-idle capabilities where, if a user doesn't use a session for X number of minutes or hours or whatever, that session will be invalidated. And you can also limit the individual sessions to individual clients, or how long the refresh tokens are valid. You can also require re-authentication after a certain amount of time. And we are going to put a little bit more into this going forward; we do have some more thoughts on how to narrow this down a little bit more. But yeah, managing sessions is a continuous thing. We try, though, because at the very base level you play with the lifetime of the refresh token and access token, and you can make them very short-lived, but then the chit-chat back and forth between the application, the server, and the IdP will be way more frequent. So there's a different TTL between request tokens and the session token, is what you're saying. So yeah, there is a user session at the top, then there are refresh tokens, and then there are individual access tokens. All three have different lifespans, right? And this enables you to do things like, when I say a client expiration time, the client session time, that's then the refresh token, right?
So you can have a sensitive application with hour-long refresh tokens, which means that it constantly has to go back to Keycloak. It can also ask for the user to be re-authenticated after a certain amount of time, right? While you have other, less sensitive applications that may have six-month-long sessions and be perfectly happy with that, right? So obviously, when I say that there are areas we want to improve on here, that's around the fact that these days people expect their sessions to just last, right? On a mobile phone, you want to be just logged in, and on your desktop you want the same experience; you don't want to start your working day by having to log in everywhere. So it's finding that trade-off of how secure your applications need to be and how often you have to bother your users for things, right? So this is an area we will be doing quite a lot in over the next year or so, where we'll be working on, for instance, step-up authentication and adaptive authentication, where you can have certain rules saying, you know, ask for the OTP again, but you don't need to ask for the password, right? You have these kinds of concepts coming in. Yeah, there's also an extreme case: if you want to leverage OAuth for service-to-service, there's what's called a service token, and then, you know, you can say that for a backend system the session lasts for a year, on the extreme end, right? And it's kind of the same as if you use corporate G Suite, the Google stuff: you probably need to re-authenticate maybe every two weeks or once a month, and then, on a more adaptive, smarter authentication scheme, if you switch browsers or come from a different IP, it pretty much always asks you to re-authenticate, because it wants to confirm, right? There is a lot of flexibility. No, it is, yeah, it is, like Stian was saying, an open problem; it needs a lot of work.
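The three-level lifetime hierarchy described above, the user (SSO) session at the top, a per-client session bounded by the refresh token, and short-lived individual access tokens, can be sketched as follows. The durations are illustrative, not Keycloak defaults:

```python
from dataclasses import dataclass

@dataclass
class Lifespans:
    sso_session: int     # seconds the overall user (SSO) session may live
    client_session: int  # seconds a client's refresh token stays usable
    access_token: int    # seconds an individual access token is valid

def still_valid(kind: str, age_seconds: int, spans: Lifespans) -> bool:
    """Each token kind expires independently; a short access-token lifetime
    limits damage from a leak at the cost of more frequent refresh round-trips."""
    limit = {"sso": spans.sso_session,
             "refresh": spans.client_session,
             "access": spans.access_token}[kind]
    return age_seconds < limit

spans = Lifespans(sso_session=14 * 24 * 3600, client_session=3600, access_token=300)
print(still_valid("access", 100, spans))   # True
print(still_valid("access", 400, spans))   # False: expired, but...
print(still_valid("refresh", 400, spans))  # True: the refresh token can mint a new one
```

The trade-off from the discussion falls out directly: shrinking `access_token` tightens the revocation window, while `sso_session` is what gives users the "stay logged in" experience.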
I just want to add a few things. In OIDC, or OAuth in general, you can make the access token short-lived and keep your refresh token long-lived. The trade-off will be more frequent requests for the token exchange, but there is a possibility to do that, and under PSD2, or in FAPI compliance, there is a 90-day re-authentication requirement for refresh tokens and things like that. So it is possible to make short-lived tokens, but it's a custom set of things, maybe. Yeah, thanks, thanks James, thanks for that. This is Chen; I have a quick question regarding the authz part. I was impressed with the authz capability provided by Keycloak, but I wonder, for the authz part, is there any way I can use, for example, a plug-in model? If I already have, say, OPA for authz, is it possible to use authn from Keycloak but do authz with OPA? I'm not fully, because I'm not sure I'm understanding; when you say authz, I don't quite know what you're saying. What I think Chen is asking is: could you layer authentication and authorization as two separate layers? And for authorization, could you make it pluggable, to be able to use OPA or any other authz provider? Yeah, so obviously the authentication part is step one, you know, in Keycloak, so the client, or the application, is free to use OPA if they want to use that, right? No, I guess, Stian, the question is: you said you have an RBAC-based authorization model currently, so is there a way to add something more pluggable, more powerful, instead of the RBAC model? That's the question. Okay, so what we have, basically, is the RBAC, so we have the attributes, right? You can manage your roles inside Keycloak, basically adding roles to your users, but there is nothing assigned to them, right?
A role has no meaning until you have encapsulated that meaning in your policies and your authorization on the client side, right? So yes, roles are just metadata, and it's up to your application to interpret those roles, unless you're using our centralized authorization services, where you can utilize roles and define policies on access and that kind of stuff. But that would be an alternative approach to OPA, right? Yeah, so you don't lose any information; if you're just propagating out the attributes, then you can use a policy decision point after that. So I wanted to take a moment to make sure we have enough time at the end to look at the recommendations. Ash, do you have something that you'd like to say about that? So yeah, we essentially went through a long process with Bolek and the team, so thank you very much. We gave them some initial recommendations for the self-assessment, which they've addressed in the document. So after this presentation is done, the review team will sync up again, and we will provide some final recommendations to the project. That's the next step, I believe. Okay, great. A few things I would suggest, based on the questions that we got: clarification around user-based authentication versus service-to-service authentication, and clarity on the separation there. Although it's there, it might be useful to highlight, because of the questions that we got. The second one I had, which Brandon and I think I as well were sort of asking about, was the tokens: the different tokens and token life cycles and how they are managed; make that more explicit. The third one is the layering of authentication and authorization, and whether or not there is the ability to make those things pluggable. Those things came up as highlights. So I'll just mark that in the notes, and we'll probably address them.
JJ, can you add them as comments to the assessment doc? That'll be more helpful. I'll do that. Yeah, if they're in the chat, they'll be linked to the assessment doc. Got it. Thank you. Yeah, that's all I had to add, Brandon. All right, cool. So, any more questions about the assessment or Keycloak? Maybe more from my end. Sorry, Robert, go ahead. Okay, just a quick one, more on the process of the assessment itself: any kind of metrics of how much time and effort it took, any suggestions for improving the process? Is that a question for the reviewers or the project leads, or both? It would be interesting to know how much effort was spent, both on the project end and of course on the assessment end. I would let Bolek and the team answer that first. I was a bit surprised when I realized it's over 30 pages in Google Docs, somehow. So between initial drafts and then incorporating feedback, I think at the end of the day, if I combine everything, I spent a few days' worth of work writing this down. Can I ask, this is Michelle Traborca, and to that note: so you're on the project itself, and you did your own assessment? Well, it's a conversation. There were a lot of comments and feedback, and it was kind of a living, working document. So some of the changes were applied by the reviewers as we discussed, and some I wrote myself based on the feedback, and then it went back and forth: does it look better, does this clarify it, and so on. Yeah, I guess my feedback on that is that that's typically not how a security assessment is done, in my experience. I find some of the security assessments to be more threat-model-ish, and I find some of them to be more like a rapid threat assessment, but it's just not clear to me what the rules of the road are, so it's a little fuzzy. I'm concerned about, I mean, I thought it was a good job.
Actually, you called out a lot of the areas of concern that I probably would have called out as well, but I just wonder about the separation of duties in that, and how that could appear. Thanks.

Maybe I can make a quick comment there that might be helpful. We have done quite a few of these assessments, and there is a ticket open to hopefully have a single document somewhere that captures the objectives, the goals, and the non-goals of these assessments. At one point Justin Kappos, I don't know if he's on the call, did a great job of articulating the goals: it's not a penetration-test kind of approach, it's more of a best-practices assessment. I'm probably not doing justice to it, which is why the ticket is there so we can capture it, but one of the non-goals is that it's not a red-team pentest kind of approach, and that's how this assessment was done.

Yeah, I don't think that was my point, Vinay. I've worked in security for more than a decade and I'm pretty familiar with the difference between a pentest, a threat model, and an assessment. I'm just concerned that it is misleading to use the term security assessment given that the roles and responsibilities aren't clear. I mean, maybe I sound like some pissy finance person, and guilty as charged. I just think when you use a term like that, you should be clearer than what I've seen, because you're saying you're doing a security assessment and, you know, I'm not trying to be a dick here, but if you're going to say that you're going to do it, then we should have it clearly defined. I feel a little uncomfortable with it as well.

Yes, Michelle, these are all really good points that you brought up, both at the last meeting and here.
And I saw that you had submitted issue 415 about this. So we actually have this whole assessment reevaluation coming up in our September 23rd meeting. I think that'll be an excellent opportunity to dive deeper into a lot of this, because we've been doing these assessments for a while, as was previously mentioned, and a lot of these concerns have come up in the past. We want to make sure we're getting everybody's feedback on the same page. So the next meeting where we'll talk about that is September 23rd.

In the meantime, I'd be interested to hear more from the Keycloak team: how did they feel about the responsiveness of the assessment team, the volunteers working with them on creating the documentation? And does the Keycloak team feel that the work that was done, and the product that was produced as a result of what we're currently calling the assessments, was beneficial to them, to their customers, or to the security understanding of the cloud native community?

To wrap up on my experience: on the responsiveness part, no complaints, it was very good. Ironically, the core part of this presentation is this whole security degradation section, kind of like what are the typical scenarios and what are the mitigations. If I recall correctly, initially that wasn't there that much, and I was forced, well, forced by your comments, to do it this way and to structure it this way. It was a bit of work, but it was actually a good thinking process to go through: okay, what do we do for this scenario? So on the project side, it was a good exercise to be asked, and pointed, to consider how we address each scenario and how the project fits this frame.

Thanks, Boleslaw. Stian or Ash, any comments on the assessment?
I don't have anything to add; it was Boleslaw from our side who did most of it. Yeah, I'd just like to add that Boleslaw and team have been super responsive to all the suggestions we've provided, and there has been a lot of back and forth between the two teams. So on behalf of the review team, I really appreciate the effort by your team, Boleslaw, thank you very much for that. I think that's pretty much my assessment of this.

Well, I also know a couple of other reviewers are on the call, so feel free to give a few words if you have anything in mind. All right, if there's nothing else for Keycloak, I'd like to thank Boleslaw, Stian, and the Keycloak assessment team, Ash, Krishnan, Irina, and Emily, for the hard work here. I think it's a great assessment, and I look forward to doing a deep dive on this.

That hits on a few other things that are coming up. We have the white paper discussion happening next week, and also a discussion on the assessments themselves that touches on some of the things we started talking about around security assessments. We have reached our initial target of the five initial assessments that we wanted to do, so now we can take a look back at them and see what worked well and what we want to do better in the future. That discussion will be on the 23rd of September.

Any other ideas or questions that we want to bring up in future meetings? If not, I think we can give five minutes back. So thanks, Brandon. Thanks for joining. Thanks. All right, thank you everyone. Thank you all. See you. Have a good one. Thank you all. Bye-bye. Thanks, guys.