So this is the name of our presentation: Insights from the Cloud Native Security Slam. I will be giving you insights, and I'll tell you what the Cloud Native Security Slam is by the end of this. What you're not here to hear about is me, but just so you know I'm not a psychopath who walked in off the street to tell you random lines: I am a father of a nine-and-a-half-month-old child. I'm a recently published fiction author, and a recently published author of several Linux Foundation training and certification courses, which we were joking about with Probe just a second ago. I'm a reviewer for the OpenSSF Security Insights specification. I organize the event that we're going to talk about today. And I am a maintainer on the Compliant Financial Infrastructure project for FINOS, which is pretty fun; I've been doing that for a few years. I am also the OSPO technical program manager for Sonatype, which is just a lot of words strung together to describe the work I've been doing for them: engagements with all of these foundations and helping elevate the community as a whole, because that's a really strong vision that we have. That slide doesn't have any of my social media profiles on there, so you can't hate-tweet me. But if you want to drop me a line, I'm on any of the Slacks for any of our open source foundations. You can also just shoot me an email if you have questions about the stats we're talking about today. Connect with me on LinkedIn. And please don't visit me in Austin. My company, Sonatype: if you're a user of Sonatype, sometimes you might actually forget that we are servicing your company. What we do is provide dependency management solutions, but what we're most known for is our research, which is very frequently quoted. It's been quoted in three different talks at this particular conference. You quoted it, you quoted it. I don't see anybody else that's quoted it.
But this is the eighth annual report pictured up here; this one is from 2022. The next one is coming out in six weeks. Very soon we're coming out with the ninth annual State of the Software Supply Chain report. Sonatype is really, really big on research: gathering data, analyzing the data, yelling at each other about the data before we write it up and present it with pretty visuals. That's something we really, really enjoy doing. And so I decided, hey, let's do that with our work with the CNCF. Part of that is that we're going to be looking at lines, so you should know a thing about lines. Some lines your family and your kids will beg you for; other lines, they will beg for you to get them away from. Some lines are just suggestions. Other lines you really want to get right. Some lines have a multitude of purposes. Some lines are very, very specific. Some... oh, I didn't do my animation on here. Yeah. Some lines are work that nobody gets to see. Some lines will make you a star. Some lines arbitrarily describe the world around us; others holistically define what we experience, such as jet lag right now. Some lines are... my animations are messed up again. Some lines are just critical for society, and other lines are just for fun. Some lines tend to self-organize. Other lines... you get the point. So we're going to be talking about work that we've done within the CNCF. Over the last couple of years, there are a couple of things that we've noticed inside of the Cloud Native Computing Foundation. One is that hygiene needs are really, really complex: statistically, we see that it's not easy for maintainers to achieve the necessary security hygiene. I'm going to pause to reference the security report that we did before our annual reports: we found out that there are ways to predict the presence of a CVE.
I'm going to talk about that more later on, but the things necessary to make yourself statistically less likely to have CVEs are difficult. They're complicated. It takes focus to actually get those in place. So inside the CNCF, there has been an effort to create tooling that runs a series of automated hygiene checks and makes them available for any projects that are part of the CNCF ecosystem. Some of those checks are going to require non-dev knowledge to make those improvements. And so last year (well, we've been doing things similar to this for a couple of years, but last year) we had the first official Cloud Native Security Slam. This is an event designed specifically for collaboration, so that everybody's making the same improvements at the same time. We're going to talk a lot about that and the results we've seen from it. But another problem is that there hasn't been a lot of knowledge about where we've been. There's been a lot of point-in-time knowledge, but not historical knowledge about where we've been and how we got to where we are now, to help us predict where we're going in the future and capture lessons learned. And so last year, during this event, we added historical score tracking, which seems super intuitive: if you're doing checks, why would you not just drop the results into a database? And once we said that out loud, we were just like, yeah, yeah, let's do that. So we've got 11 months of scorecard results inside of CLOMonitor, but these are limited. If you haven't been to one of the talks this week about Scorecard: there was a talk that explained exactly what OpenSSF Scorecard is, all the value it provides, and how it fits together with the other tools within the OpenSSF ecosystem that are provided for open source maintainers to help increase your security hygiene. If you didn't go to that talk, Google it, figure it out.
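For readers who want to try Scorecard on their own repository, here is a minimal sketch of a scheduled GitHub Actions workflow using the public ossf/scorecard-action. The pinned versions, schedule, and permission scopes are illustrative assumptions; check the action's README for the exact requirements before copying.

```yaml
name: Scorecard analysis
on:
  schedule:
    - cron: '30 2 * * 1'        # run weekly
permissions: read-all
jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write     # upload SARIF results
      id-token: write            # needed when publishing results
    steps:
      - uses: actions/checkout@v4
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
```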
It's actually a really cool tool for open source maintainers, and for consumers, to know what you're getting from your open source projects. CLOMonitor runs a subset of checks from... oh, I didn't even mention, there's another talk that broke down for maintainers exactly how you can bring your score from zero to hero. They spent a lot of time getting it to zero and then brought it back up to 9.6 or something, which is about all that is practical for a lot of folks. CLOMonitor takes a select subset of checks from the OpenSSF Scorecard and then does a bunch of other stuff. Hygiene is the goal, not perfection. And we now have 11 months of visibility from those checks being run. I did about 8,000 API calls to get all of that data and format it. This is for 163 code repositories that are tracked and have been tracked. And as you can see, things are great. This is going really well. But we didn't have this knowledge before. We have visibility now to see that it's not just where you are today: over time, without intention, without a thorough eye on what you're doing for your security hygiene, you're going to be all over the place. I'm going to draw your eye to the bottom, where I threw in some little pie charts. That is the percentage of lines that are not visible for that particular month. This is 11 months; I just grabbed five of them. They're not visible because they're all the way at the top, at 100%. Those are the projects... remember, we're doing a subset of the Scorecard. A Scorecard 10 is amazing. If you got a 10, you are probably the first person to ever do it; that's perfect security hygiene. CLOMonitor is not looking for perfect. CLOMonitor has a standard that is expected, and we do predict that all projects can be in that green section. But this is how many are right now, at each of the points in time. So let's scope it down a bit and get some of those lines off: the sandbox projects.
These are the sandbox projects. They're the most numerous, because there is a process to get elevated from sandbox, to incubating, to graduated. There are three different levels. Sandbox is the first level, where the CNCF says, we believe in you, but you're not there yet. Incubating is the next one. We kind of expect this from sandbox: they're not yet incubating, so they're going to be all over the place. Incubating is, hey, you've been hanging out with us. We've seen your project grow. We've seen adoption increase. We know you. And as expected, the security trends kind of go up. There's one big outlier there. I put it in black so that we can't call them out too specifically, because these are incubating projects; we don't hold them to the same standard as graduated projects. So this is roughly what we would expect to see. So when we go to graduated projects, as you would expect... no, that's not what we would expect to see from graduated projects. We would expect to see sandbox all over the place, incubating better, and graduated holding the highest standard, right? Now, I don't have a really effective laser pointer here, but if you squint, let me wiggle it right about here. Actually, it's right about here. Kubernetes is right here. It's green. It's green. 65 to 75. 65 to 75. Yeah, I was hoping I had a red laser pointer today. I don't. 65 to 75 is where Kubernetes is hovering, and that's on security hygiene that is basic expectations. This means that according to Sonatype's research, Kubernetes is statistically more likely to have vulnerabilities than many incubating projects. Now, Kubernetes has a massive code base, an overwhelmed contributor base, and it's a huge target. It's understandable that it's going to be hard for them to keep up with everything, but it's also important to keep track of it.
And so this data is really, really important to have and to visualize, so we can swing back around and I can tell the Kubernetes contributors: hey, in May you failed the code review checks. Why? Why did that happen? We can have those conversations now. But let's look specifically at those projects I mentioned earlier. When we started doing this data tracking, it's because several projects came together for the event to collaborate on improving their security scores. Google was amazing: your team ended up donating about twenty-seven and a half thousand dollars toward the CNCF diversity and inclusion fund. I think, Sal, you helped secure some of those processes. So it was really cool to have that happen. And it was all donated on behalf of these projects. So these projects have their name on a website somewhere saying that, in their name, donations have been made to the fund inside the CNCF, because these projects worked to elevate their security hygiene and Google wanted to give them credit, attention, and awareness for that. These are their scores over the last 11 months. That is so much better, right? There's still variation; there are still some things we need to talk about, and because we have this data, we can have these conversations. But that is so much better. I think right now two, maybe three of those are graduated, and the majority of them are sandbox or incubating. And yet they're performing significantly higher on average than any graduated project inside the CNCF. So what we're going to do now is look at the things we increased, the things we've been focusing on, and then break down the actual results for each individual one, such as binaries in the source code. This is why it took me 8,000-some-odd API requests to get all this data.
Because from here through the rest of the presentation, we're going to be talking about each individual check and how the different maturity levels performed. The first one is binaries in the source code. This one is terrifying if it fails. If you're going to attack a project, if you compromise a project or a container, the best way to slip your attack through without it getting noticed is to include an obfuscated file that executes during build time. It's going to be a binary file. It is not a code file that is reviewed and visible, where all of us can see that it says "steal the crypto." It's just opaque gibberish, a mess all over the place, and you don't want to see it in there. So it's good that we are over 75% passing on this check. It is good that our graduated projects are almost universally passing it. We're going to have to have a conversation with somebody, because one of the projects un-passed it in May, but they fixed it. So conversations are possible for these kinds of things. Concerning, though: this blue line right here is our sandbox projects, level one of three; level two is the red one going right across there. So it's concerning that there are more sandbox projects... I didn't make myself a Post-it note with the exact number, but there are... oh, it's on the bottom. This is how many sandbox projects are passing that check, in the blue graph right there. The red is how many incubating projects. There are just so many more sandbox projects in existence. And so the fact that they are statistically more likely to have proper hygiene on the scariest check possible is kind of weird. That's a weird thing for incubating projects to be aware of. The purple line up there is our slam participants. People who focused on their security hygiene last year have not had a single dip the entire time. Green is the average for the entire ecosystem. Right? So let's go to the next one: securing your CI/CD pipeline.
This is a check that the CNCF cares about, because if it fails, that means there is a practice in your build or release process that could subject you to some really scary stuff. Here's an example of how you could exploit an insecure CI/CD pipeline. This is not code. This is not fancy. This does not require special permissions. If your project is printing out the title of a pull request, which is a common thing to do (here's my pull request; in my delivery pipeline I'm going to print out the name of the PR that's being checked or merged), and you are not properly filtering it, somebody can dump out all of your secrets. That's freaking alarming. So this check makes sure that you're not doing that type of thing. There are a few other things lumped into it; that's one example. So we're really happy to be above 75%, but again, we're going to need a conversation with these incubating projects, because this is another terrifying check and between 20 and 25% of incubating projects are failing it. The graduated projects we would expect to be at 100%, and they're not. Somebody from the security slam drifted recently, around June, but because we have this data, we can have that conversation. Here's the next one. I didn't animate this properly, so I'm going to have to pause... oh, I'm not going to succeed at pausing, because I messed that up. This is the thing they tell you: never try to fix it on the fly. But I've got information behind there that I want you to see. So let's click the animate button and restart it. Let's see how this goes. Oh, it's so convincing. All right, don't start from the beginning. Let's present from here. Oh, this is exactly what I wanted to happen. That's what we were going for. That is what we wanted. Yeah: no kids, no animals, no live demos. This is live edits, so I think that might actually be a combination of an animal, a child, and a demo. All right, I'm mentioning the dependencies. There's just some background information.
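Before moving on to dependencies: the PR-title exploit described above is a known script-injection pattern in GitHub Actions workflows. Here is a hedged sketch of the vulnerable step and the common mitigation, passing untrusted input through an environment variable so the shell treats it as data rather than script text; the step names are illustrative.

```yaml
# VULNERABLE: the expression is expanded into the script before the
# shell runs, so a crafted PR title can inject arbitrary commands.
- name: Print PR title (unsafe)
  run: echo "${{ github.event.pull_request.title }}"

# SAFER: the title arrives as an environment variable and is never
# interpreted as shell syntax.
- name: Print PR title (safer)
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: echo "$PR_TITLE"
```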
What I didn't mention before is that on these slides I'm mostly quoting myself. I told you earlier that Probe and some others helped me produce some training courses on this material; those are where the quotes on these slides come from. Managing your dependencies: this is Dependabot, this is Renovate Bot. If you're a maintainer of open source projects, these are things you're very familiar with, and you shouldn't be scared of implementing them. It just takes a little bit of focus, a little bit of time, and you're going to get them implemented. A really cool thing that I was saving for a second: Sebastian has a pull request right now to make a back-reference to OpenSSF Scorecard, so that will be a configurable option you can add to your Renovate outputs once that pull request is merged. And it was merged yesterday. Huge, huge. Things like that are what elevate the ecosystem, because we don't all know everything all the time, so we need to cross-reference as much as possible. When we're inside Scorecard, we're reminded that we need to use Dependabot; when we're inside Renovate, we're reminded that we need to use Scorecard; when we're using GUAC, we're reminded that we need to use SLSA, et cetera, et cetera. These are the things we need to do, because there are so many different ways we can improve our security and reduce the opportunities for bad actors. Dependencies are what we're talking about on this slide. Dependency management is just: do you have a tool in your ecosystem telling you when updates are available? This is trending upward. The thing on the bottom, the one that I really, really like, is that our sandbox projects are really becoming aware of this and incorporating these tools. And again, they're staying, percentage-wise, way above our incubating projects, which is a conversation that we need to have.
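Enabling Dependabot, for example, comes down to one checked-in config file. This sketch assumes a Go project, so swap `gomod` for your project's ecosystem; the weekly cadence is just an illustration.

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "gomod"            # your project's ecosystem
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"   # keep workflow actions fresh too
    directory: "/"
    schedule:
      interval: "weekly"
```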
Having these dependency updates gives us the knowledge we need to make the right decisions to secure our projects. Here are our slam participants compared to the average in the ecosystem. Can't do a lot of applause for this, because it's just expected at this point, right? But with that last one, I do want to call out that the green line going up is really important. Percentage-wise, it's low. We don't have a lot of momentum percentage-wise, but the number of projects realizing that they need to implement something like Dependabot or Renovate Bot is rising. And that is something to actually celebrate. It means the ecosystem is having the right conversations and being exposed to the right information. We just need to do it faster. Permissions in the pipeline. This is another one that affects our CI and our CD. It's something that is important, and we want to keep an eye on it. And we definitely don't want to see it hovering at 20%. Fortunately, our graduated projects are twice as likely to be securing the permissions in their pipeline. But the ones that don't, the 40% of graduated projects that don't, are going to be more... ooh, I lost the word I was going to say. If there is a compromise at any stage in their pipeline, there is a chance that their entire pipeline could be compromised. Their entire delivery process, their entire codebase, could be compromised, because they have not properly implemented the permissions across their entire CI/CD pipeline. We are not perfect among our slam participants, but that line is very far away from that green line. CI/CD permissions are something that is annoying to maintain. Every time you update your pipeline, you need to keep an eye on how you are declaring your permissions in there. It's annoying. It's not easy to get perfect every time. But we see that the people who have been made aware of this are generally maintaining it. Next one.
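The pipeline-permissions check largely comes down to declaring a least-privilege token. A minimal GitHub Actions sketch: set a read-only default at the top of the workflow and widen it only for the one job that needs more (the `release` job here is illustrative).

```yaml
permissions:
  contents: read          # default for every job in this workflow

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write     # only this job may publish releases
    steps:
      - uses: actions/checkout@v4
      # ... build and publish steps ...
```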
We're on five of nine, so we're almost through all of these checks; we're past halfway. Communicating security policies. This one is not actually programmatic; it's community-wise. We've got people in our community who are bug hunters and security researchers. A third of our employees are security researchers, I think. These are folks who are going to be finding issues in code. And then you've got clever end users who are going to say: that didn't behave like I expected, let me figure out why. It would be a disaster if somebody took advantage of that. There are people finding things in your community across the board, and you need to give them information about how to report that. The SECURITY.md is a standardized way to do that. It's also where you're going to put other security information that's just going to be helpful for your team, your project, and your end users. But the biggest, biggest thing is that the SECURITY.md, when formatted properly and used properly in your code base, gives projects the ability to receive those vulnerability reports from the community. I did expect this: our graduated projects are almost universally making sure that vulnerability disclosure processes are communicated to their communities. I didn't expect sandbox to be as high as it is, so this is actually a pretty optimistic stat. But a year from now, I want all those lines to be at the top. That's what I want, because we know it's possible. Protecting your source code. Does that sound important? Yeah, it's important. Branch protection is one of the ways we are securing our core source code. And April was rough. Not because everybody just completely dropped the ball, but because the insights available to us became more accurate, and we found out that there were places where the branch protection was not as robust as it had otherwise appeared to be. And this is another really scary statistic.
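Going back to the SECURITY.md check for a moment, a minimal template looks something like the following; the contact address, response window, and version table are placeholders to replace with your project's own details.

```markdown
# Security Policy

## Reporting a Vulnerability
Please do NOT open a public issue for security problems. Use GitHub's
private vulnerability reporting ("Report a vulnerability" on the
Security tab) or email security@example.org. We aim to acknowledge
reports within three business days.

## Supported Versions
| Version | Supported |
| ------- | --------- |
| 1.x     | yes       |
| < 1.0   | no        |
```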
The fact that it was hovering around 75% was scary. Learning that a lot of our incubating projects are barely above 50%... the likelihood of an incubating project passing this check being barely above 50% is not okay. That's a weird stat to have, and we need to be talking about this. I'm not proud that our slam participants are not performing significantly higher. They're performing higher, because they're thinking about these things, they're addressing these things, the thoughts have entered their heads. But it's a difficult thing, apparently, and we need to be paying closer attention to it. Maintenance. I think we were talking about this last night, Billy. Maintenance is not an easy check to pass, because it shouldn't be. We should be holding a high standard for how often our code is touched, because that means somebody is getting in there and thinking about it. Not just looking at "do I need to do a dependency update," though that's still something. When we actually get in and look at our code base, we are more likely to bring in knowledge we've gained over the last month, and more likely to be having critical conversations about potential vulnerabilities. Pretty good. That's pretty good. Most of our projects are active. I like those numbers. I like those numbers more. SBOMs. The biggest debate and hottest topic. Do you have a software bill of materials published? Pretty good... wait, that graph is zoomed in. We are not supplying information to our end users about how our software is being built and what dependencies it is bringing in, which means we are not giving them critical information about the actual code they are running on their machines. And if you're doing federal work, it's going to be even more important for your future. Look what happens when we zoom out. It's a big gap. Look at the bottom. Half of the SBOMs in the CNCF ecosystem... the green bar is the total ecosystem; the purple bar is our slam participants.
Half of the SBOMs in the CNCF ecosystem were produced by projects that participated in the security slam. Nine of nine: do something for provenance. It's not like, hey guys, get to SLSA level three, have GUAC integrated into your ecosystem, and publish everything. It's: do you have a signature in place? Are you signing your releases? That's what we're checking here. And that graph is zoomed in again, you guys. That's not cool. So my biggest question is this: if we tell end users and ourselves that proper security hygiene when consuming a product is to validate that it originated in the correct place, how can we build a habit of validation across our end users if we're not providing them signatures to validate? If less than half of graduated CNCF projects are supplying signatures, how are end users of technical projects across the world going to build a habit of consuming those signatures? Less than a quarter of the total ecosystem is providing them. But it's not impossible. It's annoying. Our slam participants were actually delayed in getting that line up. It's annoying. Yeah. But it needs to happen. And it's a really, really annoying number to see that green line hovering under 25%, because it means we cannot create a worldwide habit of validating the origin of our software, our provenance. But bad news first, good news second; I tried to do that on every single slide. The bad news is the ecosystem's not doing well. The good news is we've got the purple line: we can show that things are trending upward. This is the total summary score. Earlier, I gave you that massive graph of everybody bouncing around all over the place. If we summarize that and then just put colors on it per ecosystem, 75% of our projects are... oh, sorry, this graph's confusing me. It's good. It's good.
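On the signing check from a moment ago: one common low-friction approach is Sigstore's cosign in a release workflow. A sketch using keyless (OIDC) signing follows; the artifact name and pinned action versions are illustrative assumptions.

```yaml
jobs:
  sign:
    runs-on: ubuntu-latest
    permissions:
      id-token: write        # required for keyless signing
      contents: read
    steps:
      - uses: sigstore/cosign-installer@v3
      - name: Sign the release tarball
        run: |
          cosign sign-blob --yes myproject_1.0.0.tar.gz \
            --output-signature myproject_1.0.0.tar.gz.sig
```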
But it's possible to get it way, way better. So the big thing to tackle, to think about, is: how do we get it better? And something I will preach until I tip over is that when we're figuring out what to do, especially with that whole long list of different security hygiene improvements... it's like, well, you're just giving me nine things, and that's a subset? There are like 20 more you're not even talking about yet? That's really annoying. And if you look at CLOMonitor, there's also best practices, and legal, and licenses, and documentation, and these all indirectly contribute to your security. It's really annoying to triage and prioritize this work. This right here is in one of the courses about Scorecard, about how to prioritize which checks you're evaluating. But at the end of the day, there are two key things. For any work, whether it's security or software features, you want to think about how important it is. If it's features: how many end users are we solving problems for? If it's security: how many bad guys are we stopping? How bad are the things we're stopping? What would be the problem if this wasn't done? That raises the importance of something. But ability is also something to consider when we're evaluating priority, when we're deciding which order to do things in. Say Billy just had a baby and he's out for the next two months, but he's the only person on our team who has the knowledge, the historical insight, the skill set to address this critical security problem. Even though it's the most important one we have, does that mean we don't do the second most important one? "Well, Billy's out. We can't get to number one. So let's work on product features for a while, guys." No. This is kind of a no-brainer, right? But oftentimes, what we do is look at the biggest, most important, hardest things.
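The importance-times-ability triage idea can be sketched in a few lines; the scoring scale and the example backlog here are made up purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    importance: int  # 1-5: how bad is it if this stays undone?
    ability: int     # 1-5: can the people we have right now do it?

def triage(tasks):
    # Rank by importance AND ability together, so a critical task
    # nobody can tackle today falls below an important task we CAN do.
    return sorted(tasks, key=lambda t: t.importance * t.ability,
                  reverse=True)

backlog = [
    Task("fix auth bug only Billy understands", importance=5, ability=1),
    Task("enable branch protection", importance=4, ability=5),
    Task("automate SBOM generation", importance=3, ability=4),
]
for task in triage(backlog):
    print(task.name)
```

Run as-is, the branch-protection task ranks first even though the auth bug is nominally more important, which is exactly the point: don't stall the whole security backlog on the one task you can't staff.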
We say we don't have the people at the table right now, so we'll do security later. And that's just not the way it should be done. And it doesn't have to be done that way, if you just slightly shift your thinking to involve ability and importance in every single triaging conversation. And if you want to talk for 30 minutes about that, I will. What I can't put on a slide right now, but I have been given permission to say aloud to you, is that in the 2023 report that's coming out, Sonatype has found that end user secure consumption of open source has not improved. There's been zero improvement in the last year in secure open source consumption. Yeah. Should I bring you a microphone, Sal? I feel like this is a lot of really good insight. Yeah, the fact that our end users are not using the information... I'm actually going to bring a microphone next time you talk. The fact that our end users are not using the tools we're providing to them, and the fact that we're not always providing those tools, is important too. But the fact that end users are not managing their dependencies, that end users are not validating their provenance: things like this are examples, and it hasn't improved. So this year, going into the security slam that's starting next month in October, if you're part of a CNCF project, or would like to harass your maintainers into letting you make contributions, one of the problems we're trying to solve is that end user awareness. And so we're going to be incentivizing, with prizes and awards and little badges and patches that you can put on your sweater or backpack if you want, the creation of documentation so that end users can know the best way to consume this. And we've got some really good ideas for how those improvements are going to get distributed to end users.
The second problem we're trying to make sure we solve is that dip we saw in SBOM documentation. Providing SBOMs to end users has dipped, which means it's not being properly automated. And so we're going to incentivize proper automation of your SBOM creation. Whoa. Whoa. Let's go to that one. So the third thing, right there in the middle, that we're seeing as an issue is all the other indirect stuff. I mentioned it, I kind of rattled it off: there's documentation, there's licensing, there's just general best practices. Legal statements that are made available so that end users in enterprise settings can better consume this need to be considered. And so instead of only focusing on the Scorecard checks this next year, we are going to incentivize and reward projects that make general hygiene improvements. And this one was added especially for that purple line: all the people who participated last year and already got their security stuff in. Hey, you guys, do the indirect stuff too. We've got a prize for you. And this year we are focusing on giving awards to the actual maintainers who are doing the work. Fourth one: not all projects are actually set up so that they show up in our stats. Billy, I think you have a couple that are not showing up in our stats; we were talking about this earlier. So we just want to make sure that they're actually tracked, and then that they are brought up to proper security hygiene. Then self-assessments. We're incentivizing doing a self-assessment, and I changed this one from "problem" to "hypothesis": the fifth one down is a hypothesis. We believe that simply thinking about the security state of your project will result in you making better security decisions in the long run, and we want to test that out. This year we're going to incentivize projects doing a self-assessment, so that we get that historical data.
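For the SBOM-automation problem, one sketch is to generate the SBOM in CI on every release. This example assumes the anchore/sbom-action wrapper around Syft; the format choice and file name are illustrative.

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: anchore/sbom-action@v0
    with:
      format: spdx-json
      output-file: sbom.spdx.json
  # then attach sbom.spdx.json to the release alongside the artifacts
```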
A year from now, we could look back and say: hey, the ones who stopped and documented their own state of security performed this way over the next calendar year. So if you are an end user of a CNCF project, a maintainer of a CNCF project, or if you just want to harass your maintainers and get an opportunity to get involved with elevating your project community: we have an event page. We've got a kickoff webinar where we spend an hour going through all of this; a couple of people in the room are going to be speaking at that webinar to help make sure we're all aware of the opportunities. And we have a library of curated resources, so that when folks are choosing and triaging which of the problems they want to tackle, we have a good knowledge base available. So in that 30-day event that's coming up, October 10th to November 7th, I think it is (that's not 30 days, is it? that month-long event), we're going to make sure that we have community channels for conversation and slam resources for research. And we want to see as many projects as possible participating. If you're an end user, which I mentioned a minute ago, we have a survey on there; just pop in and say that you care. That way we can spin around to your project and say: hey, I have five of your end users who cared about these three metrics specifically, right? That can help you in your prioritization as you're choosing what to do. My next slide says Q&A, but I'm just going to leave that QR code up there. How much time do we have left? I lost track; we started at 11:55, so that means we have 10 minutes now for conversation. Is there anything you would like to talk about, considering all of this? I just vomited a bunch of stats on you. Some of those stats are concerning. Some of them are kind of encouraging if you look at them in the right light. Yeah, please. I saw you first.
Sorry, I'm supposed to do this. Mute. Could you tell me your name too? That'll help me out a lot. Programmers don't do end user devices.

So my name is Tom Hennan. I was curious: the way this is presented, it's measuring the individual projects, which is certainly how it needs to be done, and certainly some of those things are potentially failings of the projects themselves for not putting in the work. But do you have any guidance for specific things that could be done better that would actually make this easier? Like you mentioned, hey, we're not generating SBOMs in an automated fashion. If we just added this tool to this default publication workflow that everyone on GitHub Actions is already using, then it would just happen and they wouldn't need to do anything different.

Yes. So we have a course that just went out yesterday that's going to be part of that Security Slam library. It's a Linux Foundation training and certification course, an hour long, that touches on every single thing to consider related to SBOM production, signatures, and provenance. It's really cool, it's really fun; OpenSSF helped contribute to reviews and such for it. But the key thing is that if your only goal is to automate your SBOM creation, there are tools that do just that, and it's a matter of finding the tools appropriate for your language.

Do you mind a follow-up question from him? Is that okay? Yeah. Well, I guess what I was wondering is, to what extent can we bake these best practices into the defaults of the tools that these projects and all the other users are using? As an example, GoReleaser in their most recent release has added SBOM generation as an option. So it's totally possible, and we are seeing the community go in that direction.

Please. Hey, Frenchie from Syncnia. Two questions. One: you mentioned regressions. When a regression is detected, that's the opportunity to have a conversation. Is that a conversation? Can that be automated?
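[Editor's note: as a concrete aside to the exchange above, here is a rough sketch of what "just add the tool to the workflow" looks like in practice. Both tool choices are illustrative, not endorsements from the talk: Anchore's sbom-action is one drop-in GitHub Actions step, and GoReleaser's `sboms` section is the opt-in feature the questioner mentions (assumes GoReleaser v1.9+ with syft available on the release machine; check the current docs for exact fields).]

```yaml
# Sketch 1 -- a hypothetical step in an existing GitHub Actions
# release workflow, using anchore/sbom-action as one example tool:
#   steps:
#     - uses: anchore/sbom-action@v0
#       with:
#         format: spdx-json   # attach an SPDX SBOM to the release

# Sketch 2 -- GoReleaser's opt-in SBOM generation, which shells out
# to syft by default. In .goreleaser.yaml:
sboms:
  - artifacts: archive        # emit one SBOM per published archive
```

Either way, the SBOM is produced as part of the publication pipeline rather than as a separate manual chore, which is the kind of automation the Slam is trying to incentivize.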
And the second one: there obviously seems to be a very clear correlation between involvement in the Security Slam and positive outcomes. What are the constraints on more involvement?

I don't do well with two questions, sorry. The first one was regressions: is that a conversation? I think it's a conversation. It's, hey, was this automated? If it was, then why did it dip? Did you change your automation, maybe? In some cases, it came right back up. Code review for Qboarden went down and then came right back up. As we saw earlier, there's a page where just everybody bombed on code review. Qboarden wasn't even watching their stats; they were just part of that dip and then quickly resolved it on their own through best practices. So the conversation with Qboarden would be, hey guys, what did you do that you recovered so quickly when other people didn't? And with the folks who we thought had automated it, the question is, well, then why did it dip? Right. So I think a conversation is very often the first place to start. Yeah. Kara, you raised your hand first earlier.

Oh, cool. Okay, great. The same question: the correlation between involvement and the Security Slam, and how can we help get more people involved?

Awareness. I haven't talked to a single project that said this sounds dumb. It hasn't happened. But there's also just a limited number of us who are really championing this and saying, hey, this is an event. Argo, for example, they're not going around screaming that everybody needs to join it, but Intuit is blocking off engineering hours to make this happen this year. Right. So there's a lot of excitement, but not a lot of championing.

Could you talk a little more about the role of documentation? You were saying that projects that documented their security practices seemed to have fewer regressions. Is there any other data on that, or on other non-engineering actions that support it?
Yes. So the indirect stuff right now is part of a hypothesis. There are some places where we are asserting that increased documentation is going to result in better outcomes, and we're making that assertion. But as a general trend, it's a hypothesis right now. We firmly believe that increasing the health of your documentation will create those security outcomes, but right now we want to use the Security Slam to prove that hypothesis this year.

Do you mind passing the microphone? If you have a question on the way, just steal the microphone. I think a light means it's muted. Is it? Okay. Love the power I've got now. So I think you're going to get statistics out of this that cut across languages as well, which I think is going to be interesting.

So what Sal just referenced is that we are intending, and hoping, and trying to make sure that next year's statistical analysis includes a breakdown not just by project maturity, but by project language. That's something we want to make sure we do.

But I think in terms of making this work and making sense, these are common-sense interventions. I think the reason this is not having the effect that I would like it to have is that we're effectively coming in as volunteers and doing drive-by security interventions, and not getting the buy-in from those developers to start applying security-first principles, right? The reason Intuit and Argo CD are doing this is because we've all worked with them, right? We've worked with them to make that a security-first space. So the thing you can do to make sure the Security Slam works is make sure you have that focus on security in your team. Set that time aside next month. Yep.
If you're listening and you're an end user, and I don't care if you're a maintainer of one project, if you're an end user of a different project, fill this out. Let your projects know that you do care about this to help elevate its importance, right? Our reputation with our end users will increase by doing this. We have a survey published on the other side of that QR code that you can fill out, and we're going to let projects know: this is what we're hearing from your end users.

Thanks, Sal. We are past time, but I don't think anybody's going to yell at us if we keep doing Q&A. But please, if you're tapped out on Q&A, feel free to step out and leave. This was really, really pleasant, guys. Thank you so much.