 Also, thank you, Candice, appreciate that. Yes. So coming off the heels of the recently released State of Open Source Security report, we're happy to take this next hour or so and share some of the more interesting statistics and data that come out of it. And I'm actually excited to be joined by both Matt and Steve to get some of their insight and industry expertise: we've seen a lot of the statistics and the survey results, but what do we feel the underlying back story is, and some of the reasoning behind it? So we'll start by actually addressing some of these cybersecurity challenges that have come out of this open source software security report. To start, one of the things that I think makes sense is to give a little bit of background: how did this security report come about, and what were some of the steps associated with it? Steve is one of the co-authors, along with Snyk, and the research behind it started back in March 2022. This State of Open Source Security report is an annual report that's been going on since 2018 here at Snyk. We started the research in March, and as part of the background for that research we went across the industry and did 15 different interviews with open source maintainers and cybersecurity experts, to make sure that we were asking the right questions in order to extract the data that we would see as useful. We launched the survey in April, and that brought us to where we are today. As part of that survey, we did target a few different audiences across different types of organizations, sizes, and scopes: open source software maintainers, contributors, occasional contributors, even some of the developers, as well as consumers of open source software within the software supply chain. As part of that there were over 550 responses.
And then again, they went across many different ecosystems and backgrounds. So without further ado, let's actually jump into some of the details of what we saw. What we have seen, and one of the things we were able to derive, is that open source security is still a significant challenge. The software supply chain seems to be growing in complexity, not only in controlling and adapting to what you have, but also, with the growth and adoption of applications, we're seeing it become a much bigger focus as well as a little bit more difficult to tackle. One way to make this session a little more interactive: I think we'll pop up a poll real quick and get everyone's insight, as you look within your organizations, on what your confidence level is in understanding the risk associated with direct dependencies. These are the dependencies that your developers bring in directly, listing them specifically, saying we're building this application with these components. And in conjunction with that, we're also asking what your confidence level is as far as the security of what are called indirect dependencies. For those of you that may not be as familiar with the development process: when developers bring in an open source package, inside of that package very often are nested references to other open source projects. So you get this package A that might be leveraging another open source package B, which then might be leveraging C and D, and you get this hierarchy associated with it. Those lower-level ones are very often referred to as indirect dependencies or transitive dependencies. Understanding the security risk associated with them is what we're looking for as feedback in this poll. Excellent.
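The package A, B, C, D nesting described above can be sketched in a few lines of Python. The package names and the graph here are invented purely to illustrate how direct dependencies fan out into transitive ones; real tooling would read this graph from a lockfile rather than an in-memory dict.

```python
# Hypothetical dependency graph: the application declares one direct
# dependency, which silently pulls in several more underneath it.
DEPENDENCY_GRAPH = {
    "my-app": ["package-a"],                   # direct dependency
    "package-a": ["package-b"],                # indirect, 1 level deep
    "package-b": ["package-c", "package-d"],   # indirect, 2 levels deep
    "package-c": [],
    "package-d": [],
}

def transitive_dependencies(root):
    """Split a project's dependency tree into direct and indirect sets."""
    direct = set(DEPENDENCY_GRAPH[root])
    seen, stack = set(), list(direct)
    while stack:
        pkg = stack.pop()
        for child in DEPENDENCY_GRAPH.get(pkg, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return direct, seen - direct

direct, indirect = transitive_dependencies("my-app")
print("direct:", sorted(direct))      # only what the developer listed
print("indirect:", sorted(indirect))  # everything pulled in underneath
```

Even in this toy case the indirect set outnumbers the direct one, which is exactly the ratio the report's statistics keep surfacing.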
So as everybody fills that out I'm gonna transition, and we will start talking about some of the insight that was captured as part of the open source security report. In understanding those direct dependencies, one of the things we were able to gather from the survey results is that the average open source software project has about 49 current vulnerabilities, spanning about 79 direct dependencies. Now that varies across ecosystems, and what you're seeing are some of the survey results on the far right. And interestingly enough, when it says 49 vulnerabilities across those 79 direct dependencies, another stat that was surfaced inside this report mentions that 40% of those 49 actually happen to be inside of the indirect dependencies, those transitive ones that we talked about. So knowing that there is this wide spectrum of direct and indirect dependencies, Steve, I'd love to get some insight: as organizations grow applications, and as digital transformation is a big focus across the ecosystem, do you see this growing in use, staying about the same? What are some of the important takeaways from these stats as far as the prominence of these components? So I mean, I think it's probably an important point to make here that dependencies aren't bad, right? By definition, this is what has enabled modern software development. We don't reinvent the wheel all the time; we have this gigantic resource pool of free and open source software that we can build on top of. And that's really what's enabled open source software to become ubiquitous. So dependencies are not necessarily a bad thing.
And I mean, you know, I've fielded a couple of questions around this particular set of stats, like, should I be considering not developing anything in JavaScript, right? Which is clearly not a thing that I think this report is saying. So I think you have to take some of this stuff in the context of what these individual ecosystems look like when you work within them. JavaScript tends towards a model of packages that are very small, much smaller scoped, but a lot more of them; there tends to be a lot of choice and a lot of packages that do one particular very small thing. Whereas in Python, for each function there will tend to be one leading package, and that package may have a much bigger scope. So I think that's kind of important to understand. The indirect dependencies issue is really the critical piece here. Organizations need to understand that by using this model, yes, you can develop software very quickly, but you are also potentially pulling in code that you weren't aware of. Whilst you may have been aware of those direct dependencies, the indirects, down along the chain, as we know from what we do at Snyk every day, can be very deeply nested, four or five plus levels deep. And often that's where some of these issues in the software supply chain can come from. I mean, one of the things that occurs to me looking at this slide is that it's really a great advertisement for why software bills of materials are important. And the reality is that we need knowledge about what's in a component. We need to understand how usable it is, whether or not we're licensed to use it, and we need to be able to trust it.
So we need to have ways for the actual component, and the information about the component, to be non-falsifiable so we can trust it. When you get into dependencies of dependencies, the transitive or the indirect ones, things can get very complicated. It's very hard to always have information at your fingertips that will tell you about the usability, the trust, and the actual metadata describing the components. So this is a great reason to take SBOMs very seriously, because that is probably one of the best ways to address the complexity that, as Matt was saying, is part of modern application development. Totally agree, and I'm actually glad that you mentioned the software bill of materials, because I think that's a core component that goes hand in hand with a lot of this: understanding and knowing where these risks are, and being able to then take action on them, right? It's not only for sharing externally, in a lot of circumstances where we're seeing some governance being dictated, but also for internal use and consumption. I think it's a critical element in making sure that you understand the presence of the risk. I think that's actually a good transition to start to talk about what was also captured in the survey results. As we've seen this focus, and there are some great examples we'll talk about in a little bit around open source risks and larger consumption, and as we just saw the number of projects being used and leveraged and where some of those vulnerabilities are, one of the things that I think was fairly apparent inside these security results is that there's still a struggle within a lot of organizations to prioritize open source security. Like, how do we make it a core component?
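As a rough illustration of the kind of question an SBOM answers, here is a deliberately trimmed-down, CycloneDX-flavored document and a lookup against it. This is a sketch only: a real SBOM carries far more metadata (hashes, licenses, suppliers, signatures) and is produced by tooling, not written by hand, and the component versions below are invented.

```python
# Minimal CycloneDX-style SBOM, reduced to the fields needed for the demo.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"name": "log4j-core", "version": "2.14.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
        {"name": "jackson-databind", "version": "2.13.2",
         "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.13.2"},
    ],
}

def find_component(sbom, name):
    """Answer the basic SBOM question: do we ship this, and at what version?"""
    return [c["version"] for c in sbom["components"] if c["name"] == name]

print(find_component(sbom, "log4j-core"))  # the "are we affected?" lookup
print(find_component(sbom, "left-pad"))    # empty list: not in this inventory
```

When an incident like Log4Shell hits, this lookup across every application's SBOM is precisely the query that turns days of panic into minutes of inventory.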
And when we started to look at one of these themes and some of the details behind it, what came out was that, looking at these real results, only 49% of organizations currently have a security policy that addresses open source software. And looking at both the size of the companies as well as who said yes we do, no we don't, or don't know, it's intriguing to see. Again, you can kind of understand it for smaller organizations, right? Because there's not as big a presence of security teams and the ability to address all of that. But as you start to look at larger enterprises, to see that close to 30% of some of those very, very large organizations, and 20% elsewhere, still don't have open source security policies in place, it's interesting to see that some of these organizations are still struggling with that or on the way there. And so this is another one where I think it's great to get some insight, Matt, Steve, on the impact of that, right? For some of these organizations that either don't have one currently or are still struggling to get one in place, what might the impact inside the organization be, and what else should be considered as part of a security policy? Yeah, this question and the responses to it were one of the bigger disappointments to me when I went through and analyzed the data. On an overall level the right-hand side of the screen is absolutely correct: only 49% of organizations had an OSS security policy. We also had, let's see, 34% that didn't, and then we had 17% that didn't know.
And if you take out the don't knows and not sures at this point, which can be done from a standpoint of doing analysis, it ends up being about a 60/40 split: 60% have a policy, 40% don't. And 40% is a really large number. As far as what the impact of this can be, I think it all comes down to governance, risk and compliance. Let me back up just to say that not having an open source software security policy is a little bit different than not having a software security policy. We didn't ask the higher-level question, which was, do you have a software security policy in place? I'm going to do that next time around. This was specifically open source. In some ways I would think that if you had an overall software security policy in place, the incremental step to add an open source one wouldn't be too much of an effort, but this is still being debated quite a bit inside the Linux Foundation, so we can't presume anything at this point. But the reality here is that with 40% that don't have an open source security policy in place, how do you address governance, risk and compliance issues? You can't effectively manage risk. You don't really know enough about what you're going to do when it comes to vulnerabilities, so at a minimum your security posture suffers, and the quality of what you're producing can suffer. And this shines a bright light on the fact that the Linux Foundation already has courses, training and certification on best practices for secure software development. Notice we didn't even say open source secure software development. It's about 150 best practices, and you don't have to eat them all at once; they're tiered into levels, silver, gold and platinum I think. But the reality is that if you don't have a policy in place, you're probably addressing best practices for secure software development in a very ad hoc, kind of random way.
And policy is one of the best ways to not only get organized, from the standpoint of understanding what's important about security and then making sure you're addressing it, but also to take the next steps to build on that with things like scalability and automation. So I co-opted the platform here, but Matt, do you want to add anything to that? I mean, I think there's a couple of interesting things to me in this stat. Clearly you go into a question like that thinking that small organizations are going to struggle more with policy-driven stuff around the more challenging aspects of security, so that split is somewhat unsurprising. I think the fairly large cohort of larger organizations there is the one that's really quite surprising. There's probably a couple of different things going on. Most larger enterprises have some challenges around making change; it can be difficult from a cultural perspective. Clearly, dealing with open source software development is a different kind of security approach to the sort of things that we've seen over the last 20 years. So you've got that inertia in larger organizations that can be a factor. And I guess it probably says something about some of the perceived challenges of putting a policy in place: what should be in a policy about open source, right? Unless you've got a fairly good understanding of the things that you need to be thinking about around open source software, formulating a policy becomes more difficult. As Steve has pointed out, there are a lot of resources out there, templated policies on this kind of stuff, available from the Linux Foundation and from others. But I think it definitely highlights that organizations are finding this a challenge. And it's not just about the code when we talk about risk around open source software either, right?
Because we've got to think about governance models: is that software well-maintained? Is there a single maintainer who's going to go rogue, as we've seen in some of these recent ransomware things? How do we identify the positive elements that we should be looking for within a particular piece of software? And that's not just about function, it's a wider thing. So I think there are a lot of challenges that organizations face when starting to think about this. But having a policy, even if the policy is a one-liner, has got to be better than zero policy, because at least it proves that you've started to think about what the issues might be. Whereas if you don't have any kind of policy in place, you're effectively flying a plane blind. And one of the other things that I think you see in the wider report is that there's a pretty strong correlation between having an open source policy and your confidence in the security around your open source software. So whilst having a policy is not a proxy for security maturity, I mean, who's to say what's in a policy in a survey? But there's certainly a strong correlation there in sentiment, right Steve? Yeah, absolutely true. Yeah, I would agree. And I think that's one of the things in the report; we're obviously not going through everything that's in it, because there's quite a bit of information covered in there. But I do know, Steve, that there's a lot of cross-referencing as far as having security policies in place and looking at security postures and confidence in those security postures. And so it's interesting to see some of the correlations that can be derived when you actually start to cross-reference those.
Yeah, I mean, just to explain that a little bit further: throughout the report, since I did most of the analysis behind the survey data, one of the things that I did was to display most questions segmented by whether you have an open source security policy or not. And the differences typically are quite striking. So that really makes a strong case for having an open source security policy. Agreed. Awesome. And as we start to look at those with open source security policies, or even organizations that have started to take some sort of proactive approach to the security posture of those open source packages, there was some interesting information and data shared about the approaches we're seeing across organizations. Like, what are they actually putting into place as far as the ability to look at that security risk, right? Some of the things that are being utilized today. Now, I know some of these buckets potentially grouped together and packaged some of those details. But what we did see is that at least 44% of companies have some way to examine source code to look for a lot of those risks, and we see a couple of different approaches in here across the spectrum. With that said, obviously, again, knowing Steve and Matt, your backgrounds, the interactions you have with all of these organizations, and the heavy involvement in the developer community, I'd love to get some additional insight on what you're seeing and how it aligns with the information that we, or Steve, were able to derive through the security report, and how that maps back to some of this data here. Well, I think, just starting from the left, there's two things I think are important that I'm gonna discuss. The first is on the left here, the 44% who use tools to examine source code.
I counted about 10 or so different tool categories when it came to tools for helping address security, and this would be across the lifecycle of the application. Now, most of these tool categories had to do with CI/CD-type activities, so very much focused on development, but not all of them. I think the 44% doesn't mean developers are out there using IDEs to comb through their code, maybe a four-eyes kind of approach to check it, because that's just a grossly inefficient way to do some sort of evaluation of your code base. But there are a tremendous number of very good tools out there that need to be used. And I think we'll probably have a slide on this later; the popular ones, of course, are SAST tools for static application security testing, and then SCA tools for looking at license compliance and vulnerabilities. So those are two great categories, but I think we shouldn't lose sight of the fact that in the survey results, infrastructure as code, IaC, played incredibly well from the standpoint of people using it to help deal with security. And you're probably wondering, well, how does IaC do that? It's primarily the fact that IaC is very good at automation. And if you're automating activities across CI/CD, that's removing manual touch points. And those manual touch points are where shortcuts can be taken and all sorts of security risks introduced. So that was, I think, the storyline behind IaC. And then the DAST tools, for dynamic application security testing, did not play as well as I had hoped. And I think organizations shouldn't lose sight of DAST, mostly because that's very much a runtime focus, and that's an important way to make sure that you have better coverage across the life cycle.
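At its core, the SCA category mentioned above matches resolved dependency versions against a vulnerability database. Here is a toy sketch of that matching step; the advisory table is hand-rolled for illustration (though the two CVE identifiers are real ones), and the naive tuple comparison stands in for the proper version-range parsing a real tool would use.

```python
# Toy advisory database: package -> list of (first fixed version, advisory id).
# Any installed version below the fixed one is flagged.
ADVISORIES = {
    "minimist": [((1, 2, 6), "CVE-2021-44906")],
    "lodash": [((4, 17, 21), "CVE-2021-23337")],
}

def parse(version):
    """Naive semver-ish parsing; real SCA tools handle ranges and pre-releases."""
    return tuple(int(p) for p in version.split("."))

def scan(resolved_deps):
    """Compare every resolved dependency against the advisory table."""
    findings = []
    for name, version in resolved_deps.items():
        for fixed_in, advisory in ADVISORIES.get(name, []):
            if parse(version) < fixed_in:
                findings.append((name, version, advisory))
    return findings

print(scan({"minimist": "1.2.5", "lodash": "4.17.21", "express": "4.18.1"}))
```

The value of the real tools is not this comparison, which is trivial, but the curated advisory data behind it and the ability to resolve the full transitive tree being scanned.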
I think one of the things that was positive to me about these numbers, particularly some of these categories slightly down from the top, checking that the project has an active community, looking at the frequency of commits and releases, is that this clearly indicates that people answering this particular question do have an understanding of those things that are important to consider above and beyond the actual code within open source projects. So I think that was a good thing to see in this particular graph. Yeah, when it comes to the adoption of components for use in code, I think many of the responses here are essentially saying, listen, we need to look at the community that's actually responsible for the component and have a good sense of how they function and how they operate. And I think many of the responses here are exactly that. Yeah, and I think what we're starting to see across the industry is new ways of providing that information to consumers of open source software. We started out with things like GitHub stars, but I think now we're starting to get a lot better at it; if we look at the OpenSSF Scorecard project, and Snyk Advisor, obviously a free service that we provide, these give potential users much more detailed insights about not just the vulnerabilities that exist in that code, but about these other factors. Does that project have good governance? Does it have the right things that need to exist in that repository to give you confidence that the maintainers are actively considering security as part of their software development process? Yeah, I'm glad both of you mentioned that, because the one thing I think I derived from this was more about the approaches to identify trust, right? Like building a full framework around how we actually leverage this open source. It's a big, prominent part of organizations building any sort of modern application.
And in order to do so, there's got to be some measurement of trust in order to understand what that is, using multi-pronged approaches to validate it. And you both mentioned this, right? Looking at the community, understanding if it's active, looking at the reputation of the maintainers and some of the details, how active it is, what changed last, and understanding a broader spectrum associated with it, I think, becomes a critical part of that security posture, at least from a proactive perspective. Yeah, I mean, it's interesting how this has changed over the last 20 years. This idea of maintainer trust is something that's been talked about for a very long time in open source communities. And as communities have scaled, things changed: there was a time when, even in large projects, every maintainer knew every other contributor, right? I can certainly remember bigger projects where that might have been the case 15, 20 years ago, but clearly that's not the case anymore. And as we started off saying, we tend to find these vulnerabilities in smaller projects anyway, where you may not know who the maintainer of that particular project is. But yeah, trust is what it comes down to at the end of the day, right? Yeah, excellent. So now let's transition over into, I'm sure, everybody's favorite topic, which became a very prominent discussion at the end of last year: the Log4Shell vulnerability, the one that originated in open source.
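One of the signals mentioned here, how recently a project has seen activity, can be reduced to a crude check like the following. The 90-day threshold and the dates are invented for illustration, and Scorecard-style tools combine many such signals (governance files, review practices, release cadence) rather than relying on any single one.

```python
from datetime import date, timedelta

def looks_actively_maintained(last_commit, today, max_age_days=90):
    """Recency of the last commit: one trust signal among many, not proof of health."""
    return (today - last_commit) <= timedelta(days=max_age_days)

# A project committed to six weeks ago vs. one untouched for over a year.
print(looks_actively_maintained(date(2022, 5, 1), date(2022, 6, 15)))
print(looks_actively_maintained(date(2021, 1, 1), date(2022, 6, 15)))
```

The point of automating even a crude signal like this is that it can be run across an entire dependency tree, which no reviewer can do by hand for hundreds of transitive packages.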
And so looking at some of the impact and understanding some of the relationships, a lot of this reiterates, I think, points we've talked about, which are also very prominent inside the report. There are a couple of interesting statistics and data points that were shared. When we looked at Log4Shell, and again, some of this is based upon Snyk being in this market, being able to see massive amounts of projects and understand the implications for customers that were impacted, 79% of projects were actually affected by Log4Shell. And Matt, I think you even mentioned earlier the ubiquity associated with open source and how these are becoming very common components inside a lot of major applications, this being a perfect example. Take that and compound it, hand in hand, with the fact that when we looked at the Log4j component, 60% of the instances of those open source packages were actually indirect. I mean, they were transitive, which could be several layers deep, being used and consumed by other core components. And so I'm interested again to hear, Matt and Steve, your perspective on what this meant. What did you see as far as the impact? And I know a lot of us were involved very closely in talking with organizations, walking them through, helping them address and remediate a lot of these in a very, very fast timeframe. But knowing all of this, the state of how prominent it was, and also the fact that it existed multiple times and was sometimes buried several layers deep, what did organizations have to deal with at the time of this making the headlines? I mean, this is kind of the perfect storm, right? And this is the kind of story that happens every couple of years and brings this right to the forefront of people's minds.
Log4j, an incredibly well-used piece of utility software to add logging functions to Java programs. And therefore it was being used by an enormous spectrum of all of the Java code that exists in the world. On the one hand, it's brilliant that such a piece of software exists; it's a fantastic utility, and it means that people don't have to reinvent the wheel. But when you do have a vulnerability in something that is this widely used, then clearly the impact of it is very widespread. And the actual vulnerability had been in Log4j for several years, right? It was a very small programming mistake, and it was only when someone worked out how you could actually exploit it that it even became an issue. So yeah, this kind of illustrates, I guess, what we're talking about, this idea that open source projects can be a victim of their own success. And I think that when you have infrastructure software, scaffolding software, that's in an enormously wide range of things, the impact of issues being found in it can be gigantic. And this is still going on, right? I know we were talking before we came on air about the amount of Java programs in the embedded space, where it's incredibly hard to change; it's flashed into the hardware, and that's all still going on. So I think it definitely sharpened people's minds about the need for software composition analysis scanners. We certainly saw a very large uptick in usage of Snyk tied directly to organizations trying to solve this problem, because people just didn't know whether they were impacted by it or not. You know, big enterprises, Java's an incredibly popular language, I've got 10,000 business applications running inside my organization. I mean, that's panic time, right? Yep, I think it also, go ahead, Steve, sorry.
I was going to say, realistically, SCA tools are the only way that you can crawl across the entire portfolio in an automated way to understand what the impact is going to be when a vulnerability like this is found. Yep, and kind of beating the same drum as before, it starts to reiterate the importance of some of those open source security policies. Because without some sort of plan in place when something like this does roll along, you lack the ability to actually react, right, to take action, to have the components in place for some structured mechanism to say, okay, what do we do? All right, what's the next step? We need to know where we have this, what we need to do to go down the path of remediation, and the steps to actually fix all of those issues when they're discovered. And you've got absolutely no way of knowing if it's down your tree of indirect dependencies. I mean, it's okay if it's in your direct dependencies, you can do a version bump, but how do I deal with this thing that's like five levels deep? You've got all of these dependencies that all need a version bump in order to get rid of that one. So, yeah. Yeah. The chain reaction definitely was caused by that discovery. Yeah, and this is probably a slightly controversial statement, but I think sometimes you need things like that to happen in order to drive change, right? That definitely, like I said, sharpened people's minds: you should have something in place to be thinking about this and to be concerned about it. And without that, you may not have had as much change as I think it's driven in the last few months. Yeah, it's a valid point. The best way to know how to prepare for a storm is to live through the storm.
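The "five levels deep" problem above can be made concrete: the first remediation step is finding every chain that pulls in the vulnerable package, because each chain may need its own version bump or override. A sketch over an invented graph (the package names other than log4j-core are made up):

```python
# Hypothetical resolved graph: the vulnerable package is reachable both
# directly and via a nested chain, so a single bump isn't enough.
GRAPH = {
    "my-service": ["web-framework", "log4j-core"],
    "web-framework": ["logging-adapter"],
    "logging-adapter": ["log4j-core"],
    "log4j-core": [],
}

def paths_to(graph, root, target, path=None):
    """Enumerate every dependency chain from root down to the target package."""
    path = (path or []) + [root]
    if root == target:
        return [path]
    return [p
            for child in graph.get(root, [])
            for p in paths_to(graph, child, target, path)]

for chain in paths_to(GRAPH, "my-service", "log4j-core"):
    print(" -> ".join(chain))
```

The direct chain is a one-line version bump; the nested chain needs either updated releases of every intermediate package or an ecosystem-level override mechanism, which is exactly why the transitive instances took longer to clear out.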
So, I don't necessarily disagree with that. So, building on this, as we start to discuss a lot of these statistics that were captured, one of the other themes that we saw was the need to find a solution for this, right? A complex solution for a complex problem was one of the other key takeaways shared inside this report. And one of the more interesting statistics that came out of the survey results, and I know we were discussing this earlier, is that the time to fix vulnerabilities, since we go back a few years to when we started originally publishing these reports, has gone from 49 days to now 110 days. So a fairly substantial increase over a span of three years. And then one of the other things that was kind of intriguing is the fact that fixing vulnerabilities in open source takes almost 20% longer than fixing them in organizations' own first-party proprietary code. I know some of this, and I'm gonna turn it over to you, Steve and Matt, I know you definitely have some insight into the stories behind it, but there was also an interesting statistic shared last week at the Open Source Security Foundation conference: with this expansion and growth of open source packages, currently upwards of about 30% of those packages have only a single maintainer, and in some cases none. So I'm curious about your insight into why we're seeing this increase in the amount of time it actually takes to get some of these vulnerabilities fixed once they are discovered. Well, I guess one of the things to note here is that 49 days was back in 2018 and we're up to 110 now, so over a span of three years we've more than doubled the amount of time it takes to fix.
Over that same period of time, two things have happened. One, software use and development has been growing at a tremendous rate. And at the same time, when it comes to open source software, security has become a far more important topic now than it was back in 2018. It's not even close; back in 2018 it was a shadow of what it is today. So I think both of those things are significant factors here. We're paying much more attention to security, and as a consequence, more time is being spent on security. We're much more attuned to the concerns about vulnerabilities, and there's lots more that potentially has to be fixed these days because there's so much more software. And we have resourcing issues when it comes to actual software development. So I think all of these things combined are helping push the time window out. But one piece of good news is that if you look at the criticality of the different kinds of fixes being put in place, the really critical fixes are happening very quickly. With an increased number of fixes that need to be addressed, though, some of those low-priority ones just aren't being addressed in a timely fashion because of the resourcing issues. So I think that's part of the explanation behind what's going on here. Yeah, I would have said basically the same thing. Ironically, this actually exposes somewhat that we're getting better at detecting vulnerabilities. But clearly, we've got a resourcing issue in terms of managing to solve some of the lower-priority things. And open source is a broad church, right? For every huge, well-funded project like the Linux kernel or Kubernetes, there are 10,000 projects being run by one person in their spare time.
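That pattern of critical fixes landing fast while low-priority ones slip can be expressed as a simple severity-based triage policy. The sketch below is an illustrative assumption, not anything prescribed by the report: the severity names and the per-severity SLA day counts are made up for the example.

```python
# Sketch of severity-based triage: order findings so critical issues
# come first, and flag anything that has outlived its SLA window.
# SLA_DAYS values are illustrative assumptions, not from the report.

SLA_DAYS = {"critical": 2, "high": 30, "medium": 90, "low": 180}
RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings):
    """Sort by severity (critical first), then by age (oldest first)."""
    return sorted(findings, key=lambda f: (RANK[f["severity"]], -f["age_days"]))

def overdue(findings):
    """Findings older than the SLA window for their severity."""
    return [f for f in findings if f["age_days"] > SLA_DAYS[f["severity"]]]

findings = [
    {"id": "CVE-1", "severity": "low", "age_days": 200},
    {"id": "CVE-2", "severity": "critical", "age_days": 1},
    {"id": "CVE-3", "severity": "high", "age_days": 45},
]
print([f["id"] for f in triage(findings)])   # critical-severity finding first
print([f["id"] for f in overdue(findings)])  # findings past their SLA window
```

Under a policy like this, the aggregate "time to fix" average can climb even while the critical queue stays short, which is exactly the dynamic described above.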
And prioritizing security fixes, particularly for things that may be perceived as less important, versus feature velocity and support and all the rest of what open source maintainers have to do, is a challenge. I suspect if we looked at this across some of the biggest projects, we would actually find that the timescale to fix vulnerabilities is very short. But when we look at these things in aggregate, across a huge range of projects of different sizes, then we're clearly going to see that resourcing issue that's at the base of some of these numbers. In terms of fixing vulnerabilities in the open source portion of applications versus the homegrown portion, in most modern applications the open source portion is actually far larger. So, in some ways, that's always going to skew that number somewhat, because an 80-20 split ends up being typical for people using packages in language ecosystems. So, yeah. Yeah, and that's a very good point: there's probably more open source than first-party code in a lot of modern applications, and probably fewer resources behind it as well. And I know a big focus of the Open Source Security Foundation is to promote more community awareness, sharing, collaborating, and giving back. And I think those go hand in hand with addressing and fixing some of these issues as we see more and more use, right? The ability to actually handle those appropriately. But inside of what you both just shared about how the critical ones are being handled, and again the disparity between some of the larger projects and the ones that are used a lot more often, this was an interesting finding.
And as we start to talk about the open source, the first-party code, and some of the other components in a lot of these applications, looking at some of the survey results on what organizations were using and what approaches they were taking: one of the interesting ones to see, and I think this is just consistent with what we've seen over the last few years, is that SAST (static application security testing) and software composition analysis tools are still ranked number one and number two in the ability to address a lot of these security concerns. But there also seems to be a wide variety of different approaches for how organizations are starting to tackle these, either augmenting those with very complementary approaches or with additional techniques of looking for those vulnerabilities. So I'm curious, given the industry interaction and the organizations you've talked to, whether this is in line with what you're seeing, and any other thoughts you might have on approaches to help address, and what we'll talk about in a little bit, automating some of these approaches to help organizations address these risks. One of the things that comes to my mind here is the whole theory behind doing SCA, which is that doing it on a time-based kind of principle is probably not the best approach. And so one of the challenges that I think exists in the industry, and Matt or Mick, maybe you have insight into this, is how to deal with SCA on more of a real-time or near-real-time basis, because when vulnerabilities become known, there needs to be immediate potential to take action on them. And a time-based approach that has a lot of latency built into it is probably not your friend when it comes to trying to get ahead of some of these vulnerabilities.
Yeah, I mean, when we talk about this whole landscape, it's not just one challenge, right? We've got this challenge of the open source supply chain having become a fertile place for attackers to look for exploits. But we've also got, in general, the fact that organizations need to move towards what we call the developer-first security model, where you've got to change the way you do security from being this kind of gatekeeper function at the end to something that's integrated all the way through your software development lifecycle. And that's because velocity is absolutely key to success in the modern era, and we can't treat security the same way we used to. So I think seeing organizations who've managed to build in those multiple integration points all the way through their software development lifecycle is a real key success metric for people who are starting to get how we really need to do this stuff. And to Steve's point, once we've got those regular tests on source code management systems, we're testing on every PR, we're testing in production; all these integration points in the SDLC have subtly different reasons to exist, right? And you kind of need to be thinking about doing all of them, because they're detecting different things. So giving developers access to tooling where they can immediately see, before the code has even been checked in, that there are issues with a particular dependency or issues with lines of code they've just written is absolutely critical. The cheapest place to fix anything is before it's even been checked in, right?
But then, having those integration points into your source code management, into your CI/CD pipeline, you need to be doing those things as well so that you have real-time updates, because none of this is static: new vulnerabilities are being found and new exploits are appearing every day. So I think being able to integrate that stuff is where you start to get the most benefit from a security perspective. Yeah, this goes back to something I said earlier, which is that there are probably about 10 fairly popular tool categories when it comes to addressing security across the lifecycle. For instance, I mentioned IaC tools and the value they have from the standpoint of automation. Well, there are IaC scanning tools to help you ensure that the IaC scripts you're writing don't end up creating additional headaches. And there are scanning tools and policy tools for cloud service providers, to check on the resources you're getting from them and whether or not, from a security standpoint, they're where they need to be. So I think one of the things to take away from this, beyond what we see here on the screen, is this notion of: take a look at the report to get a sense of what those additional tool categories are, and begin to think about how some of them may add significant value to what you're already doing. Yeah, and I think the automation piece of this is really key as well.
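One concrete form those CI/CD integration points take is a severity gate that runs on every PR or pipeline stage and fails the build when scan results cross a policy threshold. Here is a minimal sketch of the idea; the scan-result shape and field names are made-up examples, not any particular vendor's output format.

```python
# Sketch of a CI/CD severity gate: fail the build when any scan finding
# is at or above the configured severity threshold. The result records
# below are a hypothetical format for illustration only.

LEVELS = ["low", "medium", "high", "critical"]

def gate(scan_results, fail_at="high"):
    """Return (passed, offenders): passed is False when any finding
    is at or above the fail_at severity."""
    threshold = LEVELS.index(fail_at)
    offenders = [r for r in scan_results
                 if LEVELS.index(r["severity"]) >= threshold]
    return (len(offenders) == 0, offenders)

results = [{"pkg": "left-pad", "severity": "medium"},
           {"pkg": "some-lib", "severity": "critical"}]
passed, offenders = gate(results, fail_at="high")
print("build passed" if passed else
      "build failed: " + ", ".join(o["pkg"] for o in offenders))
```

Wiring a check like this into the PR stage is what turns scanning from a periodic report into the real-time feedback loop described above: the developer sees the failure before the change merges, not weeks later.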
I mean, we did some work last year as part of our cloud native security report where we looked at how automated organizations' deployment pipelines were, as a kind of proxy for how far along they were on their journey towards cloud native, because that's a very strong indicator: if you are doing end-to-end, fully automated deployments, there's a whole set of things that have to be in line to get there. So it's a fairly good proxy. And we saw a very strong correlation between folks with high levels of deployment pipeline automation and how easy it was for them to implement security scanning tools, because with fully automated CI/CD you've got lots of hook points; it's a kind of self-fulfilling prophecy, in a sense, that it becomes much easier to integrate automation in there. And then a very strong correlation between that and time to fix: a dramatic drop in how long it was taking people to find and fix vulnerabilities when they had that automation in place all the way through the SDLC. Yeah, the automation I think is key, which is a good transition to where we're going with this. The automation and being able to take more of a proactive approach in a lot of these circumstances, not disregarding, Steve, what you just mentioned, which is still the ability to have the reactive component, because those are still very critical when you start looking at open source and the dynamic nature with which the vulnerabilities and risks change. As Matt mentioned earlier, right? The vulnerability in Log4j had been in there for quite a few years. It just happened that it was discovered at a very specific time, and then it was remediated, and everybody had to take action to ensure that they were in line with those changes based upon that timing.
Being proactive gives you more of the efficiencies, but always having a plan in place ensures that you're able to react when the situation does change. With that said, these were some of the core components that you could easily derive out of that report. And there is, again, a lot more extensive information, as Steve just alluded to; there are a lot of data points in there that are very interesting, which probably have stories behind them, and some intriguing elements that I think are useful. We'll share the link for the full report here in just a little bit. From a takeaways perspective, I think one of the more prominent themes we've seen, and these go hand in hand both from a Snyk perspective and from a Linux Foundation perspective, is encouraging developers to improve their security knowledge. Especially with the larger adoption and creation of applications, the explosion of applications, and the drive for rapid development, the more you can take that security expertise and bundle it into some sort of format within the workflow of the developers, the more it starts to empower them, right? Having those secure coding best practices and visibility into risks early in the decision-making process becomes one of the key approaches for addressing a lot of these security issues as early as possible. Hand in hand with that, as we were just discussing, there are a lot of approaches and solutions that you can start to leverage in order to take that capability and actually apply it within the pipeline, within the PR checks, within the IDE.
And so, leveraging solutions: one of the other themes that we haven't spent as much time on, which I know is embedded inside the report as well, was the data point that a lot of the individuals involved with the survey also asked vendors to embed more security insight and expertise into controls and to take more ownership of that, so that there weren't all of these pockets of individual knowledge, and so that the industry starts to take on more of this responsibility in order to help promote that within organizations. Steve, I don't know if you want to... Yeah, no, that's absolutely correct. That was one of the key takeaways: users were looking for more intelligent tooling from the vendor community. And it's not that the users are trying to abdicate responsibility and say, well, this security issue is best solved by somebody else, because there's lots that the user community should be doing themselves. But it's still a point well taken: more intelligent tooling, and integration between the different kinds of tools, would go a long way toward a better approach to how we deal with security, and probably not at the expense of more time on the part of developers, if the tools were more intelligent and better integrated. Yeah, and that was an interesting one to see. And then the last key takeaway we've seen goes hand in hand with the first two: the more you can empower and share, and then start to embed that in a very proactive way, but doing so in an automated fashion, right?
Automated, automated, automated, built into those checks. As Matt discussed earlier, one of the things we looked at in that cloud native application security report last year was organizations that had reached that level of maturity, as the mechanism or bar to ask: are they more mature because they have some of these approaches that allow them to do way more with less? So the more you can embed that capability in there, the more it allows you to have that speed of innovation, right? This is probably one of the core components, because from a business perspective that's what you're trying to achieve. Being able to achieve that speed of innovation and create these applications in a rapid fashion, while still ensuring that the security risk of the organization is kept in check, is a critical component that we're seeing as a common theme here. Excellent. So with that said, as we mentioned early on, there is a link to the full report that you can download. There's, like I said, a lot of additional insight you can derive from that. There's some summarization early on in the report, but then there are a lot of the details and, again, some of the cross-referencing that Steve shared earlier: what are some of the differences between organizations with policies and ones without, how did that impact some of the other survey results, and how did they map out? So there's a lot of really great insight in there, and Snyk, and I'm sure the Linux Foundation as well, would love to entertain any questions. In fact, I don't know if we want to check the Q&A and see if there are any questions queued up; if so, we can address those, otherwise we can turn it back over to Candice. Let's see if there's one in there. I see one question: a centralized API registry on IBM z16 may increase software cybersecurity; may we have your view on this?
Matt or Steve? I don't have the expertise on that one. I'm not sure what that's referring to either. All right, yeah, we'll have to do some research. I'm assuming this is something in IBM's portfolio. Yeah. I mean, it is true that the Z-series has a lot of unique capabilities that go well beyond what other platforms have been able to accomplish. So I don't know specifically about the security side, but I'm not surprised, given the nature of the question, that IBM is doing something relatively unique here that probably has some significant added value for the mainframe customer. Excellent. So if there are no further questions, Candice, I will turn it back over to you to wrap up. Thank you so much, Mick and Steve and Matt, today. And thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.