I should be back and I can participate, but my audio has some background noise. Okay, I'll just kick us off real quick then. So everyone, please go ahead and add yourself in, and we'll just go down the list of names in attendance and do some quick updates. Sarah, since you just talked, do you wanna go ahead and do so again? Yes, so I've been traveling, so no real updates from me. Sarah Allen, normally facilitating this meeting, one of the co-chairs of the working group. And thanks, Justin, for helping out. Okay, great. So I'm Justin Cappos. I guess something moderately exciting is that Uptane, which is the automotive version of the TUF project, has now finalized all the legal parts and we're now part of the Linux Foundation's Joint Development Foundation. And we also officially have the IEEE-ISTO standard out for the current version of what it is we've done. So yeah, that's been one of the major things I've done. All right, why don't we have... Sorry, just one question. Which part of the Linux Foundation is Uptane in? So there's something called the Joint Development Foundation that, as recently as maybe a month or so ago, was a separate effort, but was kind of absorbed into the Linux Foundation in some way. I think maybe the question that isn't being asked here is why isn't it part of AGL. But AGL is not a home for specs, and Uptane is a spec with a reference implementation that's already in AGL called Aktualizr. So as a result, the JDF was a very natural, perfect home for us, and it has a path to things like ISO standardization. I think that TUF and other projects like this will retain their current homes, but may also benefit from some resources that the JDF provides, like ISO standardization, in the future. And I'm happy to talk more about that if that's of interest to anyone else.
But maybe we can either do that after the check-in or take it offline, either one. Okay, great. So Brandon. Hi, I'm Brandon from IBM. So I've been working on the image encryption stuff. Because of the recent acquisition of Red Hat, we are now moving to OpenShift, so we're moving a lot of the stack to the Red Hat related technologies. We have implementations of the encrypted container images work in the Red Hat stack as well, and we are going to collaborate with them to push it upstream for them too. So if all goes well, hopefully we'll see this in both containerd and the Red Hat stack within the next couple of months. On top of that, the key folks for the security assessment said that they'll be back around August, so probably next week or so I'll give them a ping and see whether they can start working on the outline. Great, great. Yeah, and we'll need to talk more after the check-ins about where we're at with the assessments and what to do and how to push things a little further along. Craig. Hi, I'm Craig Ingram. I'm with Heroku, part of the Kubernetes Security Audit Working Group, which has been wrapping up and is almost ready for release. And I'm excited that we have some folks here today from the Trail of Bits team that did the actual work to give an update on that as well. Great, great. Yeah, and we'll hear from Dan in just a moment, but first, Justin Cormack. I haven't got much to report this week, so... Okay. Oh, wait, Ray. Hi, my name's Ray. I'm kind of new to the group; I joined last week. It's my second meeting, so I'm just here to check it out. And I'm kind of new to the cognitive space; I work for a cognitive consulting company. And so, yeah. Awesome. Dan, good to see you here. Yeah, hey. So I'm Dan Guido from Trail of Bits. I brought Stefan Edwards along with me.
We just wanted to drop in, introduce ourselves, share a little bit about what happened during the Kubernetes security review, and offer to help with other projects and problems that you're trying to solve. Awesome. Thank you very much. Mark Underwood. Great. I'm at Synchrony, representing myself and also the NIST Big Data Working Group. So much exciting stuff that I'm not gonna say anything. All right, well, that opens us up for Gareth to say something exciting. I'm not sure about exciting, but I can always be guaranteed to say something. It's probably, I guess, related to this: people might not have come across it, but there is a new SIG in the process of applying for a charter, another CNCF SIG, specifically around application delivery. The intention is somewhere to house, I guess, guides and papers that cut across lots of different projects. There's a draft charter going out to the TOC today, tomorrow, this week. I think it's interesting. There hasn't been a lot of conversation there about security yet; it's very new anyway. But I think there's an interesting relationship between this group and that one, in that if there are guidance, white-paper-type things going out about how to do cloud native applications, having a security review component for them would be great, versus not having one and people head-desking because the guidance in them leads people down an insecure path. That's basically a problem we can get well ahead of, and I hope we are by me bringing it up today. Awesome, yes. Thank you. Christian. So I'm finally making some progress on issue 165, the platform implementer persona. I tried to chase down somebody here at Google that apparently wrote use cases for that persona, but those use cases weren't quite usable.
So yeah, I'm making some progress on that, and I have another lead here at Google: I want to query some of our own corporate people that basically try to make Google's cloud platform work for our corporate needs, which is a pretty similar role. So I'm hopefully making progress on that by next week. Okay, thank you. Michael. Yep, working on release planning for Falco, to hopefully have a release around the October/November timeframe. And then I've also been working on the security day, which is now called Cloud Native Security Day at KubeCon. We have a call to discuss that after this meeting. Great, yeah, I think that's gonna be a terrific event. I've definitely been watching that closely and looking to see how that all turns out. And I think we also have Michael Hausenblas. The other Michael, exactly. Hi, my name is Michael Hausenblas. I'm a developer on the container service team at AWS looking after container security. And I still owe a bunch of work around the microsite. Launches keep me busy. So I will do that, definitely, very soon. Okay, great. And did we miss anyone? Yes, I'm Martin. I am from VMware, in their open source team. And it's my second time here. And yeah, I hope that I will be able to help you guys in the future with something. Oh, we've got Stefan here. I don't think he said anything, but I... Yeah, I was just about to intro. My name is Stefan Edwards. I worked with Craig on the Kubernetes working group, actually auditing both the technical side as well as the threat side. And then obviously I work with Dan at Trail of Bits, where I am a principal consultant. I do a lot of our threat modeling, some vCISO work and other things here. So I'll be giving some of the talk today. I have some slides prepared, nothing super heavyweight.
But I just wanted to talk about what we did for threat modeling, how we ran it, and what we were looking at, so that you all can have some insight into where that report went and what you'll be seeing soon. So. Great. That sounds terrific. So now is the point when I think Sarah, who knows best the check-in from partners, SIGs and working groups, is gonna step in and take over; otherwise I'm gonna show my ignorance about what's going on. Thank you, Justin. The background noise has now gone into another room, so I'm back. So I just captured a couple of things for the agenda. I'm not sure; I'm actually gonna skip or move things to the end. I mean, it does say as needed. Since we do have the folks from Trail of Bits here, I'd like to invite you to be on deck first, so we have plenty of time for that. Because it's come up in conversation as you've been working on it, and I think everybody's curious about it. So if there's stuff that you are prepared to share today, I think that'd be great. If that's something that you wanna queue up for another time, that's cool too. Now we're ready. Thanks for giving us the floor first. Stefan put together like five short slides just to help visualize what happened, and I'll let him prep his desktop for a recorded screen share. I'll do you one better, Dan: I will share only the slides that I intended to. Yeah, I just really wanted to walk through this. We don't need to do an introduction to us, since we did the round table. But I wanted to walk through how we ran the threat model, because I think threat modeling open source software, especially of the size of Kubernetes and other similar components, can be very difficult.
We had people from companies that are technically competitors working with us, and trying to represent everyone's understanding of Kubernetes and everyone's different viewpoints, from component teams, from competitors, those sorts of things, was an interesting challenge here. Just one thing I want to note beforehand: technically, some of the information that we are discussing hasn't been released publicly yet. Craig and I are working to get this all wrapped up, but some folks on this call have already seen the report. I just wanted to scroll through some sections, but there technically could be things in here that are unpatched vulnerabilities in Kubernetes at a design level. So I just ask that you please don't share this too widely. Obviously this is a recorded call and there are people from the community on here, so just don't crow about any of the things that you've seen here, but there will be some vulnerabilities that are not public at this time. Loose tweets sink fleets. Yes, exactly. So the methodology was very interesting. I think, as most people have seen with Kubernetes, it's a very networked, very intricate system. There are many different state machines; there are, for example, multiple public key infrastructures within Kubernetes itself. So having a data flow for the restricted set of components that we were interested in was fairly interesting. I'll talk about what components we had when I switch over to the report itself, but actually having something documented from Kubernetes, from the Linux Foundation's viewpoint, as a more canonical data flow was the first step here. So during the technical assessment, and talking with Craig and other folks on the team, we came up with a rough data flow model for this. I actually used PyTM, if people are familiar with that.
It is a Pythonic threat modeling framework. I used it to get the initial pass out so that we could iterate on it programmatically instead of having to redraw slides and diagrams. From there, we went on to understand the connections between components at a logical level: the trust zones, the threat actors therein, and what concern points we had between those. So for example, we wanted to understand what an external user meant, what an internal user meant, what a malicious internal administrator meant, and what components and what items they could touch within Kubernetes, both from a technical component side as well as a trust zone side. So we undertook that analysis; I will show you some of it shortly. And then we situated each control family within the larger Kubernetes system itself. And when I say control families, I mean this in sort of a NIST 800-series style of control: are we talking about the audit family? Are we talking about multi-tenancy? Those sorts of things. So we went through each of the components, understanding what controls they implemented, how strongly they were implemented, what they were attempting to protect against, and where those controls were implemented. We then went through and situated those as best as possible to pre-game for all of the meetings that we held with SIGs. Craig sat in on some of these meetings. Jay Beale, one of our counterparts on the working group, Aaron from Google, we all sat in on meetings with the SIGs, and I went through and asked people from SIG API Machinery: how do you handle multi-tenancy? How do you handle various other things like authentication and authorization? What do you do for this? What control failures do we see? And I documented that entire process. We captured the output of those meetings in what are called rapid risk assessment documents. I specifically designed some rapid risk assessment documents for this assessment.
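The kind of model described here, components assigned to trust boundaries with data flows between them, is exactly what makes a programmatic tool like PyTM attractive: the model can be regenerated and queried instead of redrawn. A minimal plain-Python sketch of the idea (component names and boundaries are illustrative placeholders, not the audit's actual model, and this is hand-rolled rather than PyTM's own API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    boundary: str  # trust boundary the component lives in

@dataclass(frozen=True)
class Dataflow:
    src: Component
    dst: Component
    protocol: str

# Illustrative components only; the real model covered many more.
apiserver = Component("kube-apiserver", "control-plane")
kubelet = Component("kubelet", "worker-node")
etcd = Component("etcd", "control-plane")

flows = [
    Dataflow(kubelet, apiserver, "HTTPS"),
    Dataflow(apiserver, etcd, "HTTPS"),
]

# Flows that cross a trust boundary deserve extra scrutiny.
crossing = [f for f in flows if f.src.boundary != f.dst.boundary]
print([f"{f.src.name}->{f.dst.name}" for f in crossing])
# -> ['kubelet->kube-apiserver']
```

The advantage of keeping the model as code is that each review iteration only touches the data, while diagrams and boundary-crossing reports are regenerated mechanically.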
I'll show you one in just a few moments. And then I collected the output from those, all of the findings, all of the really great notes, all of the how-does-this-service-work sort of things, and I threw them into a threat model, added a bunch of findings, added a bunch of risks and concerns that we had, and then worked with Craig and other folks to actually contextualize those risks and come up with an overarching design critique, I guess you could say, of where we can go and what we can do better. And some of these things have already been fixed in more recent releases of Kubernetes, but it was a very interesting process because there are so many components. So in terms of control families here, the audit working group asked us to focus on roughly six control families. Now, this doesn't mean that we weren't interested in something like, say, logging. There were several times where we talked about logging, we talked about non-repudiation, we talked about other control families, but the main focus of our assessment was these six: we were interested in authorization, authentication, cryptography, et cetera. So for each of the SIGs, I came up with a series of questions, I worked with Craig and others to review those, and then we sent them out to members of the open source community and polled them for answers. So if you go back through the archives of some of the SIGs, you'll actually see me, Craig and other folks reaching out to many people in the open source community and literally asking them for time to just sit in on a meeting and discuss some of these things, or to look over the questions that I was asking and provide any feedback. So we received some feedback from the community directly; folks had some corrections, they had some comments and things like that.
And then we also went to some of our constituent organizations and asked members of the community who worked at those companies to sit in on meetings with us and to actually review each of these control families. Now, we did rapid risk assessments. If you're not familiar with this, it's a process that Mozilla actually came up with. Rapid risk assessments are a very quick way of understanding your CIA triad for a threat model. So if you have a data flow and you want to understand what the impact of a certain set of risks is and come up with recommendations quickly, Mozilla has worked on this process. Now, it's interesting when you're doing this for normal threat modeling, but it's not necessarily the most useful for what we wanted to do. We were control-focused; we were interested in a specific set of components. So I actually modified this process, and I can switch to that right now. One second while I find it. Hang on one second. Chrome does not want to show it. Ah, here we go. Perfect. So we actually came up with these RRA documents. If you look for Mozilla rapid risk assessment, you will see their document. They have a very nice Google doc that you can use, but it was not as useful for us within the working group. We had to work with many people from the community, and we coordinated many of our activities via Git and GitHub, so we wanted a very simple markdown document. So basically, for each one of the RRAs and each one of the SIGs, we went through and asked them the normal threat modeling questions that you would ask, such as: how does the service work? What sort of data does it operate on? Why do I care, as an attacker and as a threat modeler, about this component? What am I looking for in this thing? And then I collected each one of those answers. For example, as everyone knows, there are subcomponents that exist within the same host but are technically within different trust boundaries.
So a kubelet would be on a worker node, but then there are also maybe pods or other things on that worker node that transit trust boundaries there. So we wanted to capture the answers for each one of these situations and then go through and ask about each and every location: what data is stored, where does it store it, what's the sensitivity of that data, et cetera, as you would normally do. And then I captured meeting notes and any findings that were there within the system itself. They're quite extensive. Jonathan has actually seen some of this already, and Craig obviously has lived through this, but there's quite a number of things that we asked, captured and went through with the teams that we will be making public relatively shortly. And the reason why I'm showing you these things is that you may be able to build off of the RRA notes that we have, and off of the data flow diagrams and other things that we already created, to help speed up your own threat modeling processes. So a number of notes were collected. We went through and analyzed all of the control families. We did talk through some threat scenarios, we did talk through findings, et cetera. But all in all, we captured a large number of notes across the entirety of the system. If I go back here, you can see that we actually had a rapid risk assessment document for each item within the system. So we were only focused on a certain number of components within Kubernetes, not the entire system itself, but we did capture an RRA for each of those. Now, does anyone have questions before I actually go on to the report itself? Well, I have just a question about what you mean by the entire system versus the components. Sure, so Kubernetes actually has quite a few components that we did not look at. So for example, when Kubernetes talks about networking, there are a number of components within the system that handle networking.
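The per-location questions just described (what data is stored, where it lives, how sensitive it is) fit naturally into one simple record per component. A hypothetical sketch of such a record, with all field values invented for illustration; this is not the working group's actual RRA schema:

```python
from dataclasses import dataclass, field

@dataclass
class DataItem:
    name: str
    location: str
    sensitivity: str  # e.g. "public", "internal", "secret"

@dataclass
class RRA:
    component: str
    trust_boundary: str
    purpose: str
    data: list = field(default_factory=list)
    findings: list = field(default_factory=list)

# Hypothetical example entry, not a real audit record.
rra = RRA(
    component="kubelet",
    trust_boundary="worker-node",
    purpose="Manages pod lifecycle on a node",
)
rra.data.append(DataItem("pod specs", "node memory", "internal"))
rra.data.append(DataItem("serving certificate", "node disk", "secret"))

# Once captured, questions like "which secrets does this component hold?"
# become simple queries over the records.
secrets = [d.name for d in rra.data if d.sensitivity == "secret"]
print(secrets)
# -> ['serving certificate']
```

Keeping these records in plain markdown or code, as the speaker describes, is what let the answers be version-controlled alongside the rest of the audit material.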
So you'd see something like kube-proxy and the kubelet and other items actually interacting with the Linux routing table or other commands at the network level. So we would focus on kube-proxy and the kubelet, but we wouldn't necessarily look into the various CNIs that are out there, the actual container network interfaces. So Calico was not in scope, Flannel was not in scope, those sorts of things were not there. So any vulnerabilities or any design issues that we've seen with those were not in scope for our assessment. Does that make sense? Well, I guess the question is, in a normal deployment, like I know there are a lot of components that are optional, right, that you don't need, but with this assessment, I'm trying to figure out what we're supposed to take away from it. Does it mean that I could use Kubernetes with only these components and my own code, and then I would know something about this, so this audit would be meaningful? Or is this a baseline, which by itself doesn't really tell you anything about the system? So it tells you about the most critical components within the system. We were interested in those components that Kubernetes itself controls. So, like, is there an issue with kube-scheduler? Is there an issue with any of the controller managers? But where we did not dive in is something such as Docker. So we were interested in how Kubernetes interacts with Docker, or how Kubernetes interacts with Calico and Flannel for networking, but we were not reviewing Docker, and we were not reviewing Calico and Flannel themselves. So to answer your question, it's really more that we had a baseline pulse of design concerns within Kubernetes itself, but there are other, wider concerns based on your choices there.
So there may be issues that we've seen with Calico that don't exist in Flannel, and vice versa. If you're using these components in your organization, you still have to understand the risks that are there on top of the Kubernetes risks themselves. Does that distinction make sense? Yeah, so is this the complete set of components that are in the Kubernetes org? I'm just trying to get at that scope, and then we can move on; I want to let other people ask questions. No, that's fine. No, these are not the complete set of components within the Kubernetes organization. These are the most critical ones that we wanted to get some lens on in order to understand technical concerns as well as design concerns within these components themselves. Okay, so this is really more like a foundation for a company doing their own audit. Absolutely, it's a foundation for Kubernetes and the CNCF to review some design decisions that they've made within Kubernetes. And then also, absolutely, a company could go in and look at these sorts of design concerns and then design around them. One of the mandates that we had from the audit team was to not use previously existing vulnerabilities, so we weren't dinging them for things that were known issues within Kubernetes itself. Yeah, I do want to jump in for a second too. A lot of the mandate was also to provide a kind of foundational set of information that we could use to drive further security review inside Kubernetes, since even with the vast amount of time that we had, on a subjective basis, compared to the size of the rest of our projects, the level of complexity, the number of components and the sheer amount of code involved in Kubernetes mean you're not going to get to the whole thing in that timeframe.
So we really wanted to make sure that we're lifting all boats in the community and helping them understand where else they should search next and what a good, prioritized, outcome-oriented list of tasks to work on would be. And this threat model document that Stefan's got up now represents about one third of the total deliverables that we made for this project. I think the threat model is particularly interesting to this SIG, but we also have a list of specific security issues that we found, about 30 or 40 of them, as well as a white paper document that summarizes a lot of what we learned doing this project, which we hope other people can use to repeat and then advance upon our own work. So this is one of many outcomes. I have a quick question. I'm curious, have you looked at the kube-controller-manager? Because I would assume that's one of the main components in Kubernetes, right? So we did KCM and CCM, the kube-controller-manager and the cloud-controller-manager. Okay, okay, cool. Yeah, and our interest in KCM and CCM, and I can dive into that, was intriguing because they're very focused components, right? But the interesting thing about KCM and CCM is that they actually violate the principle of least authority in some ways. There are components within KCM and CCM that are more privileged than others, and the only thing that's really stopping a component from calling another subcomponent is trust, currently. So there were very interesting discussions that we had, and this is the sort of thing that I mentioned, please don't tweet about this just yet. But there are very interesting discussions going on within the SIGs as to how to design around some of the concerns that we came up with during our discussions.
To be honest, as much as I'm very happy with the work that I did on the threat model, and there's a bunch of cool tables and I did all sorts of really neat stuff, I think the most interesting output from this was actually having folks from the community sit down on a call similar to this one and talk about what KCM and CCM do for authorization. And that was extremely interesting, because different folks had different perspectives on the various components, and getting them to talk about it in one single space was extremely enlightening. Sure, definitely. So just flipping back to where we were: as we're all aware, Kubernetes is quite large in its number of components. So I had to situate the components themselves, the ones that we were tasked with reviewing, within planes, and then situate those planes within trust zones. If you've done threat models before, you've seen trust zones: there's a database layer, system administrators may have access to that, what components sit within those zones, what planes are they in, those sorts of things. So those sorts of analyses are here, and going forward, you may want to lift some of these trust zones, because they've been vetted by the community and by the working group on the Kubernetes side. There may be some interesting things that you can build off of from these trust zones in your own threat model as well. And then there's obviously a connection analysis. This was actually fairly interesting. We did find a number of locations that had weak connections: they may use HTTP by default, or they may have the option of using authenticated TLS connections but do not, those sorts of things. Sussing out all of those connection types and all of the various zones, and how an attacker can transit those zones, was extremely fascinating whilst talking with the various SIGs themselves. And then we have threat actors, obviously. This then drove all of our findings.
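A connection analysis of the kind just described, flagging links that default to plain HTTP or skip authenticated TLS, can be mechanized once the connections are enumerated. A toy sketch with invented entries (these are not the audit's actual findings):

```python
# Each tuple: (source, destination, protocol, authenticated).
# Entries are illustrative placeholders, not real audit data.
connections = [
    ("kubectl", "kube-apiserver", "https", True),
    ("kube-apiserver", "etcd", "https", True),
    ("component-a", "component-b", "http", False),  # hypothetical weak link
]

# A connection is "weak" if it is unencrypted or unauthenticated.
weak = [
    (src, dst)
    for src, dst, proto, authed in connections
    if proto != "https" or not authed
]
print(weak)
# -> [('component-a', 'component-b')]
```

Running a check like this over every enumerated connection is one way to keep the "weak connection" list current as defaults change between releases.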
So when you go through the findings once this is released, when I talk about who can undertake an attack, I use just these users; there are no other users used throughout. So you have a standard set of folks who are actually attacking the system. And then I wanted to talk about where they come from and where they want to attack throughout the system. And then I talk through the controls: which ones we focused on, which other ones we had, and then an actual analysis of these. So if you go through each one of the components, you'll see that it has a control family and a subjective strength category that tells you whether it is satisfactory (what satisfactory means is defined up above), and then gives you a description of each area. So there's quite a bit of data in the threat model that you may want to look at, either in your own individual threat modeling activities within your organizations, or as you carry out a CNCF threat model here. And obviously, we would be happy to answer any questions on that. And then we actually get into the individual component-level findings. What I actually did was break down the findings by component. So rather than just have one giant list of findings, each finding is situated within the section for its component, and then there's a general architectural summary of the component and of the findings that we have there. So there's quite a bit of data in here. I think it's 56 pages currently; there are a few other changes that we've made, so roughly 60 pages' worth of information is captured here. The other thing that we will be doing, once we've released this report, is almost certainly releasing the raw data. So all the RRAs, all the PyTM threat model files, those sorts of things will also be released.
So you can build off of those, either for your threat models internally or for the CNCF threat modeling activities that you may be undertaking yourselves. So that was a lot of information for everyone. But does anyone have questions on what the output was? Or is there anything that Dan and I can answer about how we ran these sorts of things, or anything we can do to accelerate your own reviews? Yeah, I'd like to ask a question and make a mention. So first of all, when we do reviews of our own in the security SIG, they're open for anyone to participate. We'd love to have folks from your group go and take a look at this. After we do about three or four more of these, then we're gonna take a step back and try to look at how we change our process. And I'm sure a lot of your invaluable experience with doing this will be, well, invaluable in helping us adjust that process. And with that comment, I also wanted to ask a more focused question. Sure. So when doing security assessments for SPIFFE and SPIRE and projects like that, similarly we had lots of components that interacted with each other in interesting ways. And one thing that came out of that process is we also had to look quite a bit at how an attacker that got access to one component could often then use that to fairly easily take control of, or gain access to, other components in the system. And I'm wondering whether the way in which you did things with Kubernetes modeled that in a straightforward way, whether that was difficult to do, whether that was out of scope, or what your thoughts are. So without getting too much into the technical side of it, we actually have a finding, let me pull up that table, wherein an internal attacker is actually able to parlay their access into wider cluster access. So we have a technical finding for that sort of thing.
In terms of what we were looking for when discussing the authorization and authentication components, I was always really interested in hearing the SIGs' thoughts on who should be accessing this and what they should be doing. So for example, the API server is the real heart, besides the kubelet, of the Kubernetes system. A large number of people may have to interact with it: developers may need access to it, administrators may need access to it. So when we're trying to understand who can do what to this thing, there's the coercion aspect, there's the malicious aspect, those sorts of things. So yes, absolutely, we did talk about who can do what from where, but we also had the technical side behind us, because there is a whole other report with about 40 findings in it, and we were able to leverage the two. When we had technical findings that became larger design issues, we made them into architectural findings. And as we saw things in the threat model that looked a little interesting, that looked like they could maybe be a technical vulnerability as well, we fed that into the other process. If you're familiar with NIST SP 800-115, NIST SP 800-37, those sorts of things, we really followed those sorts of OODA loops: we see something in the threat model, we see something on the technical side, we kick that back up as a discovery item and then feed it into the other process. So everything was feeding everything else, so that we could look at who can parlay their access. And we also looked to see if there were technical or design issues from each of the other two assessments. So, a very long-winded way of saying: yes, we did look at how an attacker can move laterally within the system, both at a design level and at a technical level. Does that answer your question? Yeah, that was great. Thank you. Of course.
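The lateral-movement question discussed above, given a foothold in one component, what else becomes reachable, reduces to reachability over a "can access" graph. A toy breadth-first sketch with an entirely invented edge set (not the audit's actual access relationships):

```python
from collections import deque

# Directed "can access" edges; purely illustrative, not real findings.
access = {
    "compromised-pod": ["kubelet"],
    "kubelet": ["kube-apiserver"],
    "kube-apiserver": ["etcd", "kube-scheduler"],
    "etcd": [],
    "kube-scheduler": [],
}

def reachable(start):
    """Breadth-first search over the access graph from a foothold."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in access.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(reachable("compromised-pod")))
# -> ['etcd', 'kube-apiserver', 'kube-scheduler', 'kubelet']
```

Feeding design-level findings and technical findings into one graph like this is one way to make the "parlay their access" analysis repeatable across assessments.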
And I think Sarah wanted you or someone else to post the NIST references in the chat. Yeah, absolutely. Dan has heard me do this, and Craig has heard me do this: I will very frequently sit on phone calls with clients, because of my government background, and say things like, oh, you do NIST SP 800-12, which leads to NIST SP 800-30, 800-37, 800-60, 800-61, 800-115, 800-53, 800-171, et cetera. But yes, I can certainly post all of those for folks. That's not a problem. And we've had some people from the US government in this group, and there are other governments in the world, and I used to work for the US government too. So it's always intriguing to cross-fertilize the industry norms with the government norms and share internationally, because I think we can all learn from the different types of documentation, especially as more of the cloud gets into the highly regulated space; all that stuff becomes pretty applicable to the industry. Absolutely. There's actually a great paper from the Swedish Police Authority Academy. I can find that reference; we reference it in this document. They actually talk about how they used Kubernetes in a highly policy-driven, highly regulated environment and what the implications of doing that were. It's a super fascinating paper, but I will get that reference for everyone. Are there any other questions or comments that I can address for folks on the call? Thank you very much for having Dan and me. I do think we will continue on with this. I think it's very interesting, especially for me, because I worked with Craig on these sorts of things from the Kubernetes side. I think we'd be happy to continue working on these sorts of things. I will get you the references that I made during this talk. But if there's anything else, please feel free to reach out to Dan and me via email; we'd be happy to pick it up. We're also on the Kubernetes Slack if you're there, so you can reach out to us there and say hello. I will include our handles in an email after this, along with the references.
And then we will also be joining the CNCF Slack as well. But if no one else has any other questions, I'd be happy to turn it back to the moderators. Yeah, I'll just say, for Justin, that we're happy to take a look at the process that you're using. Absolutely, there are some folks I could throw your way. Great, yeah. We have the in-toto assessment already up, and we have an assessment for OPA that is basically done; we're just waiting on a bit of discussion with that community, which has been a little off and on, and then we're ready to push that one over the finish line. So I think reopening the in-toto one maybe isn't the right thing to do, but if there are minor changes to suggest to what we've done with OPA, that would be great, and more substantial tweaks to upcoming assessments would be very, very welcome. Yeah, I think that'd be fabulous. And I don't know if we mentioned it, but our plan is to do five of them and then do a real retrospective on the process, both looking back at what we've done and forward-looking. We have an issue you can chime in on, or email, or whatever is good for you; we just really welcome your participation. Thanks. Yeah, we'd be happy to participate. Cool, so we'll be seeing each other again. Excellent. Is there a last chance for anyone to chime in with questions? Super. So you should see all of these go out sometime soon. I guess Craig will know better than we do, but all the documentation should be out; I think we were originally aiming for before Black Hat, but I have no idea what the timeline looks like now. Yeah, I think everybody would be eager to read it whenever you're ready. So Craig, if you're on the security Slack, or if you emailed me... actually, email is terrible, but I think it'd be great if somebody dropped it in Slack when it's ready.
Yeah, I think the plan is still sometime next week, last I heard, and I'm on the Slack, so I'll drop it in there as well. I think the CNCF is going to have a blog post and such for it too. Thanks, Craig. All right, I think there was a question or comment, Justin Cappos, on the upcoming assessments and prioritization that you mentioned, and other people are interested too; or maybe Brandon mentioned it. I think we both said something quick. I said most of what I wanted to say; maybe Brandon can step in on what's up with Keycloak. Basically, if you could give a little update: Justin just talked about OPA, but of the next assessments, what is coming up and what's the status on things? Yeah, so I think Robert put together a pretty nice table on that. Let me just bring it up. Yeah, if you could screen-share it, that would be great. Just saw it before the meeting; thank you, Robert. Okay, got it. This is difficult to read, and I don't know how to make it nicer. Oh, there we go. So Robert put together this table, which shows the next reviews coming up and the people who signed up, basically by commenting on the issue. For Keycloak we had quite a lot of volunteers early, so it seems like we have a group; I'll be contacting those folks again next week, since they were all away on vacation this past month. On Keycloak, though, I think we need one more of our original team for continuity: it needs to be me, or Justin Cormack, or Justin Cappos. Isn't lumjjb Brandon? It's Brandon. I thought we said we were going to try to have two of those. Yeah, that's right: a lead who's done one before, and then another person who's newer to the process. I just wanted to mention that.
I think having one of us look it over would be good. I would be happy to be somewhat involved with that, but I don't think we should put too fine a point on it. I don't think there's a conflict of interest, given that Brandon works for IBM and it's an IBM project now. Well, I think that might be viewed as a conflict of interest. Ah, so you're looking for a volunteer to lead. Well, in that case, I think Wolfhasha and I wouldn't be able to participate in this at all. I think it's fine to participate; I just think leading is a little bit much. So Justin Cormack, since you were part of our original assessments, will you take the lead on Keycloak? Yeah, I can do that. And then Brandon, do you want to move to one of the others? Yeah; did we decide which ones we want to look at next? Are we having a conversation with the TOC on this as well? So the process for hearing from the TOC on their prioritization is ongoing, and we are just moving forward with our best guess. So Justin Cappos, do you want to chime in on what you would pick next? Basically, we're balancing readiness against what's useful for our process. Cappos, go ahead, sorry. I think we don't have to look too far ahead, since we don't want to do these too much in parallel anyway, but Keycloak seems ready. We should get someone on the TOC to give us a nod and say yes, just so that later on, when we get a message from, oh, I don't know, some random person, call him Mr. Q, about why we picked that, we can say, well, Liz told us, or Michelle told us, or whoever told us to do that. All right, I'll ping Liz or Joe and make sure that's cool. And maybe Brandon, you could be responsible for figuring out who's ready after Keycloak. Okay, yeah, that makes sense. And then we know. We know that Falco are ready. Oh, are they?
They're right on this call, I think. All right, because they were going to be next up after Keycloak, actually; they were going to be first, but then they needed time. Michael Ducey, do you know? Michael, yeah, he's on the call. I believe we're ready; I need to confirm with Leo and Lorenzo, just based on the time commitment. Yeah, actually, so let's make the assumption that Falco's after Keycloak, and Brandon, or Michael, whoever, maybe you could reach out. Yeah, we could also technically do Falco first. Let me check with the Keycloak folks and see whether they're ready, because last time they said, let's wait till next month. So if they need a bit more time, maybe we go ahead with Falco. Sounds good. Yeah. Oh, I had a question on NSM and Strimzi; I don't think I know what projects those are. NSM is Network Service Mesh, and I've been in brief contact on Slack with Ed and Frederick, basically just answering some questions about the process and pointing them to the docs in the repo here so they can get familiar with it. I don't know if anyone from that team is on the call, but my general impression is that they're in the getting-familiar stage and not really ready to engage yet. Okay. Yeah, I had conversations with Ed about this too, and I think there's some eagerness there, which is good, but they're definitely not ready today. Yeah, maybe we can add a notes column to this table where we can say "waiting on project" or "in queue." And I'll just comment on Strimzi: it came up at a TOC meeting where they gave a presentation, and at that time, at least I was under the impression that the TOC guidance was to do assessments for projects they were considering bringing in as CNCF projects. It turns out that was apparently not general TOC guidance but the opinion of a single member.
So we're still figuring out that guidance, but we do want one of our first five assessments to be a project that isn't specifically designed to deliver a security outcome. And Strimzi was kind of interesting because, and I can't remember exactly what it does, it's essentially a pattern for how you deploy a particular type of software on Kubernetes, so I thought it might be establishing some security conventions for a category of projects. But that's just an example. So after we do the next two, which we think will be Keycloak and Falco, we want a fifth that is ideally a CNCF project worth our time: of all the CNCF projects that aren't on our list of security projects, which one would it be most valuable to the community for us to assess? It's Mark here. Hey, I was going to propose Prometheus, because there's a dependency here with logging: a breach in Prometheus is one of the high-risk things, because we depend on logging and monitoring to do a lot of the detection and forensics. Yeah, I was going to say Fluentd for the same reason: if you're using it, and lots of people are, it will be on all your hosts, so if there's a problem, you're screwed everywhere. Prometheus is politically difficult. Fluentd, I see. Yeah, and it's also already graduated, right. Well, I think another way to say that, Justin, is that Prometheus has already been through an audit, and we were going to deprioritize things that have been audited. We actually looked at Prometheus as an example because, when some of us read the audit, we had different opinions about the way it was handled.
And so we were going to go through the thought experiment, potentially with the Prometheus team or some of the people who participated in that audit, of how we would handle having a different opinion, either from each other or from the project, about a security assessment. How would we reason about that? How would we capture it in the report? We have established among ourselves that it's okay for different security reviewers, who are all volunteers, to have a difference of opinion; we would capture that and inform the TOC that there were differences of opinion and this is what we came to, and then they could tell us how they want it resolved, or whatever. We are not arbiters of truth; we are reporting our findings to the TOC and to the community as people who are knowledgeable in this area and who, as we go through all these projects, have visibility across a number of them. So because Prometheus has already been through that audit, we don't want to prioritize assessing it, but we should prioritize queuing up that discussion at some point. Yeah, and I think Fluentd is also a good suggestion; maybe we can write it down and see how that goes. Yeah, let's capture it in the notes. Okay, and since we're on the topic, I'd like to end this discussion with an urge for those who have been involved, or are interested in looking at the assessment, to make one last pass over the OPA docs, which I'm posting here. Let's do what we can to get at least our side of this done, and I will push, or I guess politely ask, Ash and the others to please close this out so we can finalize it on their side as well. Adding them to the notes.
And I know that I have added comments and need to add more, or at least give it one full read-through, since it has changed from the beginning. Wonderful. So in our remaining four minutes, are there any announcements or burning questions for the group? Yeah, I have one: what's our take on the Capital One breach and encryption? That's a very spicy thing to end the call with. Three minutes. Yeah. Anybody want to take this one? Everyone's making a big deal over the fact that, once again, the way somebody got in was bad permissions. Yeah, I'm not interested in that. I want to know how you have 30 gigs of unencrypted data that even an engineer who's an insider can get to. Then the question becomes: how do you efficiently encrypt and decrypt 30 gigs' worth of information on the fly? Yeah, and this is related to our work here. It's part of key management and encryption in transit and at rest, the kinds of controls in the framework that sometimes get left out, so it's a good use case. People seem to have some confusion about what encryption at rest means in a cloud context. Often people seem to think that encryption at rest is when something is switched off, but an S3 bucket is never switched off; it's always effectively online. Oh, that's interesting. We always took "on the wire" to mean in motion, but... That's encryption in transit, not encryption at rest. Right, right, and the opposite is at rest. Historically, people have had an application running on an encrypted disk volume: when you shut the computer down, the data is indeed encrypted, but while the application is running, everything's decrypted, and they still count that as encryption at rest.
But that's not necessarily useful, versus actually having the data encrypted until immediately the point at which it's going to be used. Yeah, this is actually an interesting topic, because it's cloud versus not-cloud. If you have a desktop application that needs to be encrypted at rest, and you're keeping data in memory and you don't have any network interfaces, it's a different world from a cloud-native application. Well, it's interesting. Yeah, we could talk about it sometime. It's a secondary interest to the group, I suppose, but you'd like to think about decryption happening at the transaction level, not on a 30-gig dataset. Yeah, well, yes. Unfortunately, I'm going to have to drop off, and I know others may have to leave. Yeah, it's two o'clock. Great topic; thanks, everybody. Let's park some of this as a discussion for the future. Thanks for bringing it up, Mark. Bye. Bye-bye. Thanks, everyone, bye-bye.
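The point about decrypting at the transaction level rather than across the whole dataset can be sketched as per-record encryption: each record is stored as its own ciphertext, so a transaction only ever decrypts the one record it touches. This is a toy Python illustration with a hand-rolled SHA-256 keystream standing in for a real cipher; all names are hypothetical, and a production system would use a vetted AEAD (for example AES-GCM) from a maintained crypto library, with per-record data keys wrapped by a KMS rather than one shared key.

```python
import hashlib
import os


def _toy_stream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only; do
    # not hand-roll primitives like this for real data.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]


def encrypt_record(key: bytes, plaintext: bytes) -> bytes:
    # Fresh random nonce per record, prepended to the ciphertext.
    nonce = os.urandom(16)
    ks = _toy_stream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))


def decrypt_record(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = _toy_stream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))


# Each record is a separate ciphertext in the store; a transaction
# decrypts only the record it needs, never the whole dataset.
store = {rid: encrypt_record(b"master-key", data)
         for rid, data in {"r1": b"alice", "r2": b"bob"}.items()}
```

With this layout, an insider who copies the store gets only ciphertext, and the "30 gigs decrypted on the fly" problem never arises because nothing outside the active transaction is ever in the clear.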