Hello and welcome, everyone. Thanks for joining us today. We are very excited to have you on. In this webcast, we will be discussing key findings from a five-year research study on AppSec. My name is Agnes and I work on the campaigns team here at GitLab. We'd love to hear where everyone is tuning in from, so please use the chat function to say hi and tell us where you are in the world. Before we get started, I'm going to cover a couple of housekeeping items. First, feel free to ask questions throughout the presentation. You can use the Q&A function at the bottom of your screen for that. We'll have dedicated time for questions at the end of the webcast, but you can go ahead and send in your questions as you think of them. And if we run out of time to answer your questions today, we will follow up with you after the webcast. If you're experiencing any technical difficulties, you can use the chat function to get in touch with me, the moderator, for help. Lastly, just so everyone in attendance is aware, we will be recording today's presentation. The recording and slides will be delivered to all registrants in the next few days. Our presenters today are Cindy Blake from GitLab, our senior security evangelist, and our guest speaker, Daniel Kennedy, senior research analyst from 451 Research. With that, I am going to kick off our webcast today with a poll, so we get to know you a little better. Looks like we have a lot of DevOps folks on the call. I'd be curious to know if people are still drawing a line between DevOps and Dev or if they're becoming one and the same. I guess that's something we can chat about later. There are two questions in the poll, just in case people don't see that they need to scroll down. All right, I'm going to end the poll now. Thank you so much for participating. I'll share the results so everyone in attendance can see them. And now I'm going to pass it over to Cindy for the rest of the webcast. Thank you, Agnes.
So I'm excited that we have Daniel here with us today. He's done a lot of research in the DevSecOps space and has some really cool insight to share. Just a little bit about me: I've been with GitLab for about three and a half years, pretty much since the beginning of our security capabilities. But full disclosure, I was also at Fortify for a while. So I understand the more traditional application security testing methods and how they compare and contrast with a more modern DevSecOps approach. And I'm eager for some of the questions and discussion. The format for this is going to be: I'm going to ask Daniel some questions, he's going to share some of his insight, and we'll kind of go back and forth. But feel free to jump in at any time with questions. We've allowed some time at the end for questions as well, but don't be shy. If you have a question, I'll try to keep an eye on the Q&A section. So Daniel, do you want to say a little bit about yourself and your background before we jump into the questions? Sure. So to lead off, I was a Fortify customer. My background does inform my perspective on this particular area, AppSec. I started out as an application developer. I worked on some of the first online brokerage websites, the PC Financial Network product that became DLJdirect. And I like to joke that when I made enough mistakes in the code, they switched me over to being the first AppSec lead at Bank of New York. It was a great environment to immerse yourself in the space. If you think about it, that was a pioneering company. There were no application servers back then. We built everything ourselves in C, C++, and Java. And so every sort of application issue you can think of would eventually need to be revisited and cleaned up as we learned more.
So we had insane projects where we were sometimes correcting tens of thousands of vulnerabilities that scanners were finding. And it wasn't a very efficient process. You know, run SAST and then stand up a project to resolve 97,000 findings. Developers don't want to see you after that. So we started very early on searching for ways to federate some of this activity into the day-to-day work of developers. And the tools were not quite there at the time. Fortify, as an example, did some good work around plugging into the IDE where you could run a scan, but it was still: you wrote code, you hit scan, you waited a long time, results came back. It really wasn't a quick double-check on what you'd just written. So we had to build a lot of custom tools to handle process efficacy, validation, and all that other stuff. AppSec tools have come a long way since then. So yeah, after Pershing, I was CISO at a Midtown New York hedge fund called D.B. Zwirn & Co. I ran a security consulting firm, and my current turn is as an analyst doing market research. A lot of the data you're going to see is from that research. We go out and survey end users four times a year for security, and we do live interviews four times a year. So we have a really long-running security study, and based on my background in AppSec, we do have some focus there. I'll have some slides to share here and there as we talk through the different questions. That's great. I'm excited for the participants to see what you've got. The first question I think that comes to mind is: what do you see as kind of the defining trend of application security? You've been in it for a while, I've been in it for a while, and it has certainly evolved. What insight can you share about how it's trending over the last few years? Well, I'm going to bring some slides up along with what I'm saying, and hopefully it pops right up like it's supposed to. But for me, the defining trend is a long-term one. I alluded to it earlier.
The real question is who should be doing the work. In the early days, it was the security team, and it was unusual to find an application-focused security skill set. It's rare today; it was even rarer back then. A lot of security folks earned their stripes in sysadmin, network, or infrastructure roles. So coming from the developer world into security was a little unusual at the time for me. Quick side note: when I became a CISO, I found out that I had to have a huge ramp-up on the network side. I had almost total focus on applications, and I found that I needed to go out and seek third-party knowledge to become more broad-based when I got into strategic management in security. But the defining piece is that those early processes didn't work. You can't run a scan every couple of months and go back and fix all these issues that are potentially sitting in production. And then the next step was, well, we'll just run it on projects. So at the end of the project, we'll do a scan and see what happens. Well, developers and project managers have given their estimates. They've committed to timelines. And then you're the security team rolling in right at the end of testing and saying, yeah, we're just going to run a little scan. And then: 600 issues to fix, let us know when you're done with those. I just added weeks to your project, and security gets a really bad name through that activity. And there's a lot of risk acceptance, things getting into production, and so forth. So one of the key things I wanted to track, once I had a way to measure people's perceptions of this, was who's running these application security testing tools, these AST tools. And really, this is a survey of security leaders, so I wanted to understand where they sat. So I asked in 2015, and the answer was: if we're going to allocate AST usage, 71% goes to information security and 29% to application development.
And, you know, that doesn't seem ideal. If you think about the state we all want to get to, it's developers checking their code as close to code construction and creation as possible, so we can correct the issues there. And the reasons for that are kind of obvious at one level. One is that it's cheaper to fix it then. You know, I'm a dev, I've got everything open, it tells me there's a problem, I go, okay, I fix it. When it gets to testers, well, now two people have looked at it, an issue has been written, and I've got to close the issue. If it gets to production and a bad actor finds it or a customer finds it, now I've got a whole situation, and if it caused a breach, I've got even bigger problems. So, you know, fix it early. It's cheaper. Fewer people are involved. That's the right way to do it. And that's a simple statement; everyone kind of agrees with it in principle. But I've got all these tool sets that were designed to run like vulnerability scanners in production. And so we've been talking about shift left, and we've been talking about it for a long time: shift left into the developer lifecycle and the pipeline. It's working, but we're still getting there. For the devs listening here, and I saw a number of DevOps folks on, I'd be interested in their perspective a little bit, because some of this is still pretty clunky. We're getting better about context-based scanning and stuff like that, but it's still a little clunky. You're getting a lot of false positives, getting results that might be outside the project you're working on, getting results that just aren't a priority to fix. And so that's the stage we're at. The biggest change you can see right there is that we've reached parity. People are saying, you know, half the time devs are running it, half the time security is running it, and we're working together on the results.
I would predict application development may take an even larger day-to-day role in this. I think it has to, to scale. There will always be more developers than there are security people. You know, we do an annual survey at GitLab, and our data aligns with this well in terms of security really becoming more of a shared responsibility and accountability. I did a focus group with some developers when I first came on board at GitLab, and I wanted to understand whether it's a carrot or a stick that incents them to create secure code. What is it that the developer thinks about when they're thinking about application security? And every single person in the group said they didn't want to be the one that brought their company down. They felt like they had the accountability and the responsibility and all of that burden, but they didn't have the tools or the insight to be successful. So I think that this shift is really important there. You know, one thing you mentioned: you were a Fortify customer, I was at Fortify. You said you found tens of thousands of vulnerabilities. That was not uncommon, by the way. I mean, it will find everything. The question I've always had, though, is if you found them all and you can't fix them all right away, what are the liability implications of that? As application security has gotten more awareness, with some pretty prominent attacks in the news and that sort of thing, do you think that when you find them, if you don't make the time to fix them, and fixing them is the harder part, right, what does that do from a liability standpoint? Kind of food for thought, maybe an academic question. But I mean, it's an absolutely legitimate question. And I've sat across from enough lawyers who are talking about creating discoverable assets. But as a security person, I kind of throw my hands up and walk out of the room, like, we're going to do what we need to do. That's not something I can manage by.
When I was running a consulting company, I had one client, a client we dropped, who asked me to deliver all the results of pen testing verbally, and then let them pass through and agree to the issues they were going to fix, but not include in writing anything they weren't going to fix. And I basically said, that's unethical. I can't really do that. But that was the attitude. It was a lawyer in the room: yeah, no, if you put it in writing and we don't correct it... And I was explaining, well, if you don't correct it, there should be some logic behind that: document the logic, document your risk acceptance and who signed off on it. But as you say, no one wants that liability. And risk acceptance sign-offs are kind of a trick in security in that sense, because when we elevated it to, say, a head of a division (at the hedge fund, it was the COO who had to sign off), well, that was the moment where the COO would go, what is this? Yeah, no, go fix it, fix it, fix it, and fix it now. And so that was the security trick to elevate an issue. And at Bank of New York, we raised things to the board of directors, and if it was going to be raised to the board, you kind of wanted to correct it. But that's a stick. You talk about carrots and sticks; that's a stick. I never really had issues with developers beyond timing. It was, you know, there are only so many hours in a day, usually 12 for developers, maybe more, and you have to slide in there. And so those were the discussions. A desire to maintain high code quality was never really the issue. So when I used to roll out AppSec, in my first position with it and even the second, I brought in outside training as the first step. I just brought in hardcore AppSec folks at the time.
It was folks like Kenneth van Wyk and John Viega, folks like that, who could really speak the language and come in and deliver a 45-minute presentation to the hardcore developers in place, usually the middleware guys and the front-end and back-end guys. They would challenge them; they'd ask 100 questions. And what I was doing was moving people from a development mindset, which is use cases, to abuse cases. That was the biggest transition I had to make. It sounds very simple. But, you know, when you're writing code, my last code comment was always something like, build it and send it. Like, let's go. Once it works, get it out of my face and I move on to the next thing. And so you think about all the ways to make this thing work, and once it works, it's good. You don't want to come back to it later, and so you maintain it in a way where you don't have to correct a lot of things. But abuse cases are not about making it work; they're all the ways someone can make it do things that it's not supposed to, all the things you didn't account for. You start to stare at that, and you see what some of the security folks, for me as a dev at the time, were able to do with my code, getting it to spit back all kinds of stuff: cookies, database results, whatever. It was kind of an eye-opening moment. It was a long time ago now, but to think, oh wow, all these folks can sit here and malform input in all these ways. I have to start thinking about all the ways to constrain what people can do beyond normal use. You handle all the error values for normal users; you don't think about where the security people live: a hundred ways to break things. So, yeah, it is a little bit carrot and stick.
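To make the use-case versus abuse-case shift concrete, here is a minimal sketch of the constraining idea he describes: validate input against an allowlist of what it should look like, rather than trying to enumerate every way it could be malformed. The account-ID format here is invented purely for illustration.

```python
import re

# Hypothetical format for this example: two uppercase letters followed by
# six digits, e.g. "AB123456". A real system would use its own spec.
ACCOUNT_ID = re.compile(r"[A-Z]{2}\d{6}")

def is_valid_account_id(value: str) -> bool:
    """Accept only strings that exactly match the expected shape.

    Everything else, including SQL metacharacters, oversized input,
    or trailing garbage, is rejected by construction; we never try
    to list the 'hundred ways to break things'."""
    return ACCOUNT_ID.fullmatch(value) is not None

print(is_valid_account_id("AB123456"))              # well-formed input
print(is_valid_account_id("AB123456' OR '1'='1"))   # malformed input, rejected
```

The design point is the mindset change from the transcript: the use case says "make AB123456 work," while the abuse case asks what happens to every other possible string, and the allowlist answers all of those at once.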
But with devs, generally, you're tapping into a perfectionist streak anyway. And so it's really just a lot around awareness. Think about comp-sci degrees. My son is going through one right now. There's no course on abuse cases. There's very little on security, and what there is, is high level. I took a master's course in security years ago. It was basically the domains of the CISSP and stuff like that. They never get too deep into these issues. And when you think about it, you could teach a whole course on buffer overflows and race conditions, malformed input and API abuse and all these other pieces. So folks are kind of on their own to learn it. And so if you can put it in front of them and give them the tools to do it, generally you're going to be pretty successful. And speaking of that, you talk about shifting left and people are doing that. Do you have any insight into where they're running their application security testing? How is that shaping up? Yeah, let me actually jump to that slide, if my computer will behave. So, you know, the second big trend: where AST is applied. So who's running it? Increasingly, half and half, devs and security. Where is it being run? And the big one is the middle one: after code is introduced. You see a five-year increase in that category, with 69% last year saying that's where we're running these tools. You can't run everything there, theoretically. If you're still doing sort of user-driven DAST and stuff like that, you're probably in a QA or staging region or something like that. There are variations on that, like IAST, and DAST that you can run in a dev setup; GitLab has opinions on that as well. But that middle piece of the chart is really where to concentrate: 34% five years ago, 69% today, after code is introduced. The other one, "we only run it against production": 32% down to 10%. That's the legacy classic scanning, right before production or in production.
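In practice, "after code is introduced" means wiring the scanners into the CI pipeline itself. As a sketch of what that looks like on the platform being discussed, GitLab ships managed job templates that can simply be included in a project's pipeline definition (the exact jobs a given project gets depend on its languages and GitLab tier):

```yaml
# .gitlab-ci.yml sketch: pull in GitLab's bundled scanner jobs so every
# push is scanned and findings surface on the pipeline and merge request.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
```

The point of the templates is the frictionless integration the discussion keeps returning to: no separate scan step for the developer to remember, because checking in code is what triggers the check.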
That's quickly becoming a diminishing part of the population with AST tool usage. The quote at the right is from one of our interviews, and it sort of makes the point. You know, CI/CD, there are probably a lot of opinions on that in the audience; DevOps folks have some pretty strong opinions on it. I don't know if continuous deployment, based on my measurement, is all it's reputed to be, but continuous integration is absolutely happening in all of these places that we talk to. And again, with code being checked in and integrated multiple times a day, we need to make this triggered by the dev process and context-specific to what that person is working on. We'll get into that, I guess, a little more. What is the main driver in terms of shifting left? You mentioned in the beginning that people have been at it for a while; this isn't a new concept. What is behind the movement? What do they hope to get out of shifting left? Well, what I said earlier: being able to catch up with the pace of code release is the biggest piece. I'm just going to jump to the slide. It's this understanding that if we don't do this differently, we're going to miss 18-wheelers' worth of stuff getting into production that has not been scanned. The second piece is this idea that you can't make all your projects unstable by adding a security component at the end of every project. It's not going to work. It's just not going to keep up. This chart's interesting. It shows release cycles: the frequency with which organizations deploy software apps to production. And that blue line is the average. So it looks a little bit like a left-weighted bell curve; right in the middle, monthly, is around where people are in general. But if you look at that orange bar, companies less than 10 years old.
So companies with less technical debt, less legacy investment in technology, able to do things the way they want to do them. All their releases are on this side of the chart: hourly, daily, weekly. They're pushing in that direction. And to an extent they're a bellwether for where everyone else wants to go. So the heat in this process isn't going away. People want to release code more frequently, and that's the reality. It's happening. Newer companies are ahead of older companies in this regard, and everyone is going to keep pushing, not to say "push left" again, but we're pushing left on release cycles. We want to do them more often. So for security, the pressure's already there. They already know they're missing things. You know, we look at the Verizon DBIR, the Data Breach Investigations Report, annually, and I think they had north of 80% of breaches being caused by web-app-type vulnerabilities. That's a little eye-popping, given some of these leaks and stuff like that. But that's what's being attacked, and yet it's also being changed very often. And so security people are already behind the eight ball, they know they need to catch up, and meanwhile the goalposts are moving because releases are more frequent. So how do you see the security and development teams aligning in terms of funding this shift? I mean, if the old tools aren't capable of doing this, you need new tooling, and that costs money. Change management costs money, maybe retooling people in terms of education. And it doesn't just happen overnight, right? It's people, processes, and technologies. It's a heavy lift to shift left. So how do you see the teams aligning budgets when the purpose is velocity and reducing risk, but it's part of the development process? Who funds that, dev or sec or both? That's completely an "it depends" question.
It's very much related to the setup and personalities within a particular enterprise. You know, when I was a CISO, security funded it. We knew it had to be done, but we federated the tasks out to developers. But we knew at the end of the day, we were going to be responsible for breaches, and no one was going to do any complicated mental calculus figuring out, well, really the nuance is that this dev didn't do this. They're just going to say: we have a security department, we have a head of security, we just had a breach. And that's about as much as business leadership is going to think about it. Right? So, you know, CISOs are generally present at organizations with more than a thousand employees. You're responsible. It doesn't matter, and you can't sit there and say you didn't understand it. So for that reason, I see a lot of projects funded by security folks. Does that matter? I'm not sure it does. I've been saying for a long time: if you're a vendor in this space and you're across the table from folks, make sure some aspect of dev leadership and security are both in the room, if they exist at that enterprise. And you may be the first person to bring them together and have them talk through it, and say, listen, this is when you need to get into this process, this is when you need to get out. These are the use cases we've found to be successful. I'm not telling you what to do, but this is the way it's worked for other people. Because when you only have one in the room, when you've only got the security person, you end up with these surprises landing on the dev team. And that's when you start to get devs saying, we're not being enabled properly, and security really doesn't understand what we're doing over here.
When you have the development or DevOps folks running on their own with no input into security, you end up in a situation where, A, the security team can't speak to the risk posture of the apps, and B, when a breach inevitably happens, that's when you get to meet the security folks. And that's not when you want to meet them. You want to meet them ahead of any of these things, so you know how to react and respond to situations. Now, I've got a chart up here that's interesting. In our organizational dynamics survey, we asked security folks: what are the most important skill sets for security today, and what's inadequately addressed? I've mapped them together and ordered it by "inadequately addressed." What's interesting is where the bars are big in both areas. We talked in the bio intro about that network legacy for security people, and you see folks, not surprisingly, saying, yeah, it's really important, but also it's pretty well addressed. Only 18% are saying we're inadequately addressing the network side of the equation. So you start to look at the top of the chart, at what's not being addressed: cloud platform expertise. That's kind of obvious. There are some real discussions around who owns cloud workloads, deployments, the rules around that. We've been wrestling with it for a long time, and yet you talk to enterprises, and we ask this fundamental question of security folks: how would you know your cloud data had been breached? And the answers have changed over the years. Years ago, we'd get answers like, the cloud provider should tell us, right? And then you'd start to get into what shared responsibility models look like and how the cloud provider really isn't responsible for telling you when your infrastructure in the cloud has a problem.
You know, the AWSes of the world will take steps to try to prevent you from hurting yourself. They'll try to have, or now have, S3 buckets secure by default in terms of permissioning and stuff like that. But they give you a lot of flexibility, and you have the flexibility to really hurt yourself if you don't know what you're doing. Forensics and incident response: one of the more complex areas of security, a specialized skill set. Machine learning: again, you and I talk to vendors every day, and I'm not always sure what they're saying when they reference black-box machine learning. I certainly understand what machine learning is; I just don't know exactly how they're implementing it. And then AppSec, fourth, probably the only area on the list that's not new; it's a pillar of security. You've got endpoint security, network security, application security. We've been addressing it for a long time, and it's still at the top of the chart for being inadequately addressed. Some of that is the skill sets; again, there aren't a ton of security people who come from a coding background. And some of it is following where the threats are. So I've been talking about AppSec for a while, but we're certainly reaching critical mass where that's the weak part of the overall infrastructure. Well, it seems like the threat landscape is changing a lot as well. It used to be enough, I think, to worry about the OWASP Top 10, right? What are those threat vectors in your code that you need to watch for? Now it seems like with cloud, we've got all kinds of other attack surfaces. We've got the cloud service itself. We've got containers. We've got orchestrators. We've got APIs. There are all of these other things that represent an attack surface in addition to the code itself. Do you put all of that in the cloud platform expertise bucket, or do you think there are areas that maybe we haven't even scratched the surface of yet and maybe need to?
We're in early days on things like container security. So take the whole infrastructure-as-code piece: the idea that the way you construct infrastructure moves from configuration to being highly customizable via code. That still sort of maps to when you used to scan web servers and the like for known vulnerabilities. But the difference is you can introduce a lot of configuration mistakes pretty easily, along with the flexibility that encapsulation technologies like containers provide. So specific to your question, have we scratched the surface yet? There are tools out there. We're maybe, to borrow Churchill's phrase, at the end of the beginning, but we're still in some sort of beginning phase with this. Do I count that as cloud platform expertise? I think it fits into it. Containers and cloud aren't necessarily joined at the hip; the whole idea of containers is portability, you can run them anywhere. But they are kind of a cloud-native technology. So I don't know that it's an exact mapping, but it certainly fits under that umbrella. Do you think that we're doing enough in terms of insight into what those potential misconfigurations are? I mean, I think good security hygiene kind of goes back to passwords and patches. And if you go back: Verizon does a threat report, AT&T does a threat report, I don't know why the telecoms are particularly in there, Fortify used to do one. Anybody that has a long history in this business has looked at what the most common threats are, and it's always the same. It's hygiene, because that's the easiest to circumvent. People are going to get sloppy, and those exploits work. I think misconfiguration is going to be as important as patching. Do you think that we as an industry have thought through how to get that insight? Who needs it? What's the best way to apply it?
Or do you think that's kind of the next step? I think we're in the midst of it. There are a number of container scanning technologies out there, as an example. Some of them have already been acquired by platform players. There are AST tool vendors that have gotten into the container scanning space. So I think there are answers emerging. I'm oversimplifying it into the container example; there are other elements of it. But as I said, we have tools out there, and we're at a really early stage on what the constituencies are that need to get results out of them, exactly how it's going to work, exactly what sort of grouping it's going to be a part of. At 451, we're throwing a lot of this stuff into "cloud native security." Some of that is just a lack of better categorization at the moment as the market defines itself. You start to look at all these cloud acronyms for security tools, and it's just an explosion at the moment. So we'll see tool consolidation, but it'll happen over time. Where should people start? What do you think they should look at if they want to shift left? What's the low-hanging fruit, or what should they consider in their approach? A lot of it's going to be company-specific, but in terms of benchmarking what other people are doing, I think that's a decent place to start. So you see static analysis is right at the top still. Some version of DAST, and a lot of times DAST is integrated into vulnerability scanners and stuff like that, is a part of it. Open source protection is a part of it. And that market is changing as well. The Git players, for lack of a better term, yourselves included, have commoditized huge portions of that when people use your platform. The platform will tell you when your open source libraries are out of date. So now the competitive space is above that.
How do we integrate into developer pipelines? How do we alert developers in the most frictionless way possible? "Do you want to fix this?" Yes; click, updated. So SCA is a market in motion, but some sort of consideration of open source risk matters, since almost all modern applications likely have a significant composition of open source code. And then we're seeing new things emerge and old things emerge anew. IAST has been around for a while, but there are challenges around DAST, so we see some folks look at IAST as a way to automate what you're getting out of DAST, or some percentage of it. Some companies do both: some people run IAST in the background and then throw a DAST tool at it and see what results come out. We also see RASP, which you touched on, in the emergence of these microservices architectures. What does that mean? Well, RASP was a component that was pushed back on years ago when it first came out, because it introduced behavior into the application that developers didn't control. And as a former dev, we didn't love that, because if there was a problem, we started to look for a tool to blame, something outside of what we coded that was causing the problem. That attitude is shifting a little bit. We're seeing some of the observability plays and others be willing to look at RASP again and implement some version of it, because in these architectures each little component is responsible for its own piece. So there's less resistance to having a security layer that looks at the inputs and outputs of the application, and potentially even acts on them, because it's easy to isolate the behavior and where it's happening. So, a lot of different flavors out there. I would say SCA and SAST are where most folks with a serious application development discipline in an enterprise would want to get started.
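The frictionless SCA flow he describes, flag an out-of-date library and offer the fix, can be sketched in a few lines. This is an illustrative toy, not a real tool: the advisory data and package names are invented, whereas a real SCA product consumes a live vulnerability feed.

```python
# Hypothetical advisory data for illustration only:
# package name -> (versions below this are vulnerable, recommended fix).
ADVISORIES = {
    "examplelib": ("2.4.1", "2.4.1"),
}

def parse_requirements(lines):
    """Collect 'name==version' pins from requirements-style lines."""
    pins = {}
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            pins[name.lower()] = version
    return pins

def version_tuple(v):
    """Turn '2.3.0' into (2, 3, 0) for simple numeric comparison."""
    return tuple(int(part) for part in v.split("."))

def findings(pins):
    """Yield (package, pinned_version, fixed_version) for vulnerable pins."""
    for name, (fixed_below, fixed) in ADVISORIES.items():
        if name in pins and version_tuple(pins[name]) < version_tuple(fixed_below):
            yield (name, pins[name], fixed)

reqs = ["examplelib==2.3.0", "otherlib==1.0.0"]
for name, pinned, fixed in findings(parse_requirements(reqs)):
    print(f"{name} {pinned} is vulnerable; update to {fixed}")
```

The "do you want to fix this? yes, updated" experience is just this check wired into the push event, with the recommended version turned into an automatic dependency-bump change.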
I'd also say, sorry, I was going to say it's a valid point. There are a lot of DevOps folks on the phone, and, you know, we've been wrestling with this for a while, so I'm freely referencing things from over a decade ago and what's changed, because I'm showing the last five years and clearly something has happened in the last five years. And a lot of it is what folks have accomplished in DevOps. There's this organic standardization that's gone on in enterprises where, you know, if I say, well, I'm going to develop a hook into Jenkins, there are going to be a lot of enterprises using that. If I'm going to develop something in, you know, what do you guys call it, GitLab, and I can just slide testing into the developer pipeline activities, you know, on code push, this just got a whole lot easier. And so some of the penetration is simply based on that organic standardization of what I'll call the dev tool stack. Why do you think SCA is so low? I've seen numbers in various places that some astronomical percent of applications use third-party code. So if everybody's using third-party dependencies, why are only a third of people testing those? Some of the early versions of SCA were tough in the sense that they also worked like SAST, scanning entire applications. I'm a developer working on a project; I kind of want to know whether I can use an open source library, or whether the libraries I'm using are up to date. And the SCA tool spits a huge amount of results out, where it's telling me about everybody else's projects, the entire application, every place it uses open source. And I can't fit that into the, you know, 24-hour estimate for the change that I said I was going to make. So the tools didn't work great early on.
I would say a couple of years ago, they started to pick up on the fact that when I go to push code, it should tell me what all the issues are, and it should give me a very clear path to a recommendation of what to do. When those started to come out, we started seeing an uptick in usage. But SCA is funny. It sort of follows problems. So we saw the Heartbleed bug years ago, and then we saw attention on SCA. Suddenly teams were not able to turn around and tell the C-suite all the places that certain libraries were used. And people were demanding end-of-the-day answers: are we susceptible to this thing I saw in the news? Oh, well, I'll tell you in a week? That doesn't really work. So we saw an uptick there. And then, like a classic disaster recovery use case, as there wasn't a big problem for years, we just saw usage kind of tick down. And then the Equifax breach happened, and we saw it swing right back up. So it's funny: you look at it and say, well, it's low. And I look at it and say the number was 11% a couple of years ago, so I'm like, no, it's really, really taking off. So it's all a matter of perspective. And again, with all these AST tools, when you look at penetration, you have to assume, okay, in a survey I'm asking every enterprise, and not every enterprise has a serious application development team in place. So really, when you start to cut by that, you see higher numbers for everything. But it seems like one of the nuggets I heard you say was that the results can be overwhelming, right? You might bite off more than you can chew if you scan everything and you get, wow, here are all of the vulnerabilities that you have. Do you think that with GitLab's method of doing the scanning and showing the vulnerabilities on the diff, so on the code changes, at the point of code commit, before it's merged with anyone else's, you can see: Daniel, you just created these vulnerabilities.
It wasn't somebody else. You did it, so you're best able to fix it. Do you see that as helping to solve that problem? I think, without getting into any specific implementation, using that example, context-specific results are hugely important. And equally important across SCA and SAST, where I can't run some sort of scan activity and get results for Bob's project and Dan's project and everybody else's. I need things that are very specific to what I'm working on. And I can't get results for the entire application, even if I own that application as a dev. You know, I've given an estimate for how long I think something's going to take. I need to know specifically what security problems I have created. And when we start talking about the future, I'm going to also need to know, since I might not be able to fix this whole list, which ones are the important ones I need to fix and address. So yeah, context-specific results. It sounds simple to say, but these tools have struggled with this for years, and we're only now getting to the point where we're actually seeing this very simple statement: I only want to see stuff under the project I'm working on. You know, I have code bases of millions and millions of lines of code. I can't get results that... funny story: when stress testing first started to become very common for web apps, we brought in kind of a performance tester. And I laughed, because the first result he gave me was one defect ticket with every request that was over a certain amount of time, across 75 brokerage websites. So it was thousands and thousands of lines of stuff in one defect ticket. I said, what am I supposed to do with this? This is ridiculous. I could spend two years working on this. And that was always my go-to example, not particularly security, but it's the same kind of thing.
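The diff-scoped results being described can be sketched concretely: parse which new lines a change touched, then keep only the findings that land on those lines. The unified diff parsing follows the standard format, but the finding dictionaries (`file`, `line` keys) are an illustrative assumption, not any scanner's real output schema.

```python
# Sketch of context-specific results: show the developer only the
# findings on lines their own change introduced.

def changed_lines(diff_text):
    """Map file path -> set of new-file line numbers added in a unified diff."""
    changed = {}
    path, new_line = None, 0
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            path = line[6:]
            changed.setdefault(path, set())
        elif line.startswith("@@"):
            # hunk header like: @@ -10,4 +12,6 @@  -> new side starts at 12
            new_start = line.split("+")[1].split(",")[0].split(" ")[0]
            new_line = int(new_start)
        elif path is not None and line.startswith("+"):
            changed[path].add(new_line)  # an added line
            new_line += 1
        elif path is not None and not line.startswith("-"):
            new_line += 1  # context line advances the new-file counter

    return changed

def scope_to_diff(findings, diff_text):
    """Drop findings outside the lines this change actually touched."""
    touched = changed_lines(diff_text)
    return [f for f in findings
            if f["line"] in touched.get(f["file"], set())]
```

With this filter, a scan of a million-line code base still surfaces only the handful of findings attributable to the commit in front of the developer, which is exactly the "you just created these vulnerabilities" experience being discussed.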
You can't give me a lot of results that are not relevant to the context I'm in right now, the project I'm working on right now. You know, there can be cleanup projects to catch up with all the other stuff. Well, I think we struck a chord with SCA. We've got a couple of comments in there and a question. One comment that SCA is probably more important than SAST, but the tooling is lacking. Another comment that there needs to be a culture shift with SCA; this person has worked in some developer environments that couldn't tell what versions of libraries were being used, so there's some catch-up there. And then a question about what's your opinion on validating TPIP (third-party IP) before downloading it, because it could compromise the machine. So I think we struck a chord there around SCA. Do you have any additional comments you'd want to make around SCA? Well, those are all valid problems. One, you know, I'm a dev, I get a project, and there's clearly an open source component out there that does an aspect of what I need to do in this project. And frankly, that's built into the timelines now; they're not going to let me give an estimate that allows me to build something that's already out there. So how do I know whether we have that code already in our application somewhere, so I can just call the library? That might be an easier question to answer. And then there's the version control of that, because what you end up with a lot of times is the same code repeated over and over and over again within these huge applications. So, yeah, that's one piece. It's like a sprawl that occurs, a tragedy-of-the-commons kind of thing, where you're just not keeping track of which libraries are out there, and certain code isn't updated and corresponds to calls on an open source library, so you can't update that library without updating the code. And it gets into this very complex mess.
And that's where SCA tools kind of help you make sense of it, but you almost need a person sort of watching that. The second piece: how do you avoid introducing threats? Yeah, we've seen how easy it is to have an underlying library pulled out and all the things that breaks. That's happened a couple of times where it's been reported in the media, so it's gotten that big. And then we've had reports of people being able to introduce changes to an open source project that you don't necessarily want in your code. And it's a complex problem to solve, because it's an expertise thing; you can't sit there and audit the entire open source library. There's some implicit trust there, because you're leveraging the time savings of someone else having created this. So that's where SCA tools do help, in the sense that you're leveraging third-party knowledge of what changes are problematic. The other thing in the open source world, and there are folks better suited to talk about this than I am, though I do use open source code in places, is that not all these projects are actively maintained. So a lot depends on the open source community and the teams. You're almost dealing with a reputational component there too: who is there, with what experience and depth, looking at the pull requests and allowing them to be integrated in? Well, you know, going back to not only doing the scanning but what you do with the results, and getting the results into the hands of the people who can fix them, you've got some interesting insight here into how the tool can best help in that circumstance. Do you want to talk about that a little? Well, I was going to hit on the one point that support for assessing open source risk is... you know, nobody likes it when analysts trot out quadrants, but this one is about features of AppSec tools, and you see at the top right: support for assessing open source risk. It is very important.
This analysis is kind of interesting. You know, we ask up front in surveys what things you think about when you purchase tools. This one is the sort of stuff you might be unconsciously thinking is important, because we're asking what features are most highly correlated with your recommending a tool to a colleague in the industry and what will lead you to repurchase an AST tool. So open source risk is top right. The most top-right thing is the product and service portfolio within AppSec for that vendor. And that's an interesting one. Security folks as a buyer group, one I understand a little bit better than others, have this idea that they will almost always go best of breed. And so it's interesting to see folks then turn around and say, well, actually, a provider that can bring together different flavors of testing, and I showed you all the different flavors that are out there on the prior slide, is something we're looking at. Now, why are they doing that? Well, it goes into that classic too-many-security-vendors argument. And I would contend that that's not the argument. The argument is there are too many tools trying to do too many things, and they're overlapping improperly; they're not working together. And that's usually the chief complaint when you really get down to it with these folks, security leaders, in these interviews. It's not so much that, yes, I have vendor management issues with 580 tools in the environment; maybe that's not as hard to manage as two tools trying to do the same thing, conflicting, giving wrong results, not working together, or not correctly sending the information where I want it. If I want one dashboard, I need everything to feed into it, as a very simple example.
So we're seeing this play out in application security, where they're saying, well, getting SCA and SAST and some version of DAST or IAST from the same provider, that's a benefit, because I'm going to start to see correlated results, theoretically. I don't know how far we are as an industry in getting meaningful results out of that, but that's certainly a next stage. It's saying, you know, if the DAST or IAST identifies the same thing the SAST identified, does that make the issue a higher priority? The answer is, likely it does; it's been validated by two different types of testing. So it's interesting. The question comes out as, well, can the same vendor offer all this? I think it's more, can we design products that will seamlessly integrate and work together to achieve this common task of securing the application? Very good point. You know, I think efficiency is one part that always comes to mind when you think about that approach. But not only efficiency; also control and visibility. If I'm using fewer tools, I have a much better chance of understanding who changed what, where, and when across the whole life cycle, and also of applying my policies in a consistent manner. Can you talk a little bit about what you see as the next major shift for AppSec? Where are we going? I think it feeds into what we were just saying with the quadrant slide. Now that we're getting closer to context-specific results, at the simplest level it's prioritization. It's this concept that we can't fix everything. We just can't. There isn't time; it doesn't make sense to do that. So how do I know what to fix? And so the question becomes, where is that prioritization going to come from? What are new sources for it? You know, in security we always talk about risk, but when we show these risk scores on applications, they're always missing an impact part. Does this actually deal with sensitive data? Is this actually an important app?
You know, the most common examples are: is this code you're telling me about ever actually invoked? We used to fix things and find libraries or functions that were never called. So it's kind of irrelevant: yes, there's a buffer overflow or a race condition there, but who cares? And good dev leaders always say, why do you have code in there that's not being called? That's a valid question, but with a big code base, there'll be a lot of that in there. The other piece I'm showing here: you see all the shift-left technologies in AST, and then we have shift right, all these protection solutions. We talked about RASP; the WAF is a classic use case, software WAFs, API security. What can those protection solutions tell us to inform our testing? What are attackers banging on from the outside? What are they looking for in the apps? How can I use that data to inform what I should be fixing on the development side? Here's one, the thing we talked about before: are there places these tools run where different tools are validating versions of the same issue? And how can I correlate that to elevate that issue in my queue? To say, yeah, the SAST says this code is vulnerable, and by the way, the DAST web app testing found that I can put in a request and get malformed output, or do something that takes advantage of that vulnerability. Well, now this just went from theoretical scan result to actual problem. How can I combine those pieces of information to elevate this to the top one or two things I have to fix? So, you know, we're not there yet, and it's a little theoretical, but, always explaining things the simplest way, it's a matter of prioritization, because we just acknowledged releases are happening so fast. We showed you that slide earlier. There are a lot more developers than security people, as you said very well at the beginning.
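The correlation idea Daniel describes, elevating a static finding once dynamic testing confirms the same weakness, can be sketched simply. The matching key (endpoint plus CWE id), the severity numbers, and the score bump are all illustrative assumptions, not any vendor's actual scoring scheme.

```python
# Sketch: boost SAST findings whose weakness class was also seen by DAST.

def prioritize(sast_findings, dast_findings):
    """Rank SAST findings, elevating those confirmed by a DAST result."""
    confirmed = {(f["endpoint"], f["cwe"]) for f in dast_findings}
    ranked = []
    for f in sast_findings:
        score = f["severity"]
        if (f["endpoint"], f["cwe"]) in confirmed:
            score += 5  # validated by two types of testing: push it up
        ranked.append({**f, "score": score})
    return sorted(ranked, key=lambda f: f["score"], reverse=True)

sast = [
    {"endpoint": "/search", "cwe": "CWE-89", "severity": 7},   # SQL injection
    {"endpoint": "/profile", "cwe": "CWE-79", "severity": 7},  # XSS
]
dast = [{"endpoint": "/search", "cwe": "CWE-89"}]

# /search rises to the top: the DAST hit turned a theoretical scan
# result into a demonstrated problem.
print([f["endpoint"] for f in prioritize(sast, dast)])
```

Even this toy version captures the payoff: two findings with identical static severity come out in different order once runtime evidence is factored in, which is the "top one or two things I have to fix" view.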
How are we going to actually fix what matters and meaningfully reduce our risk in the limited amount of time we have in these projects? Yeah, being able to prioritize that risk and get the risk view of things, as opposed to just the pure vulnerability output. Just like you said, if the code's never being used, then certainly you prioritize that lower. You know, there's a question in chat about improving application resiliency with technologies like RASP. So, full disclosure here: I was a product marketer for Fortify's RASP product, Application Defender, when it launched. And I had viewed it as being a key to resiliency, because you could almost use it like a virtual patch, to say, well, we found this vulnerability; let's just block it in the production environment. You talked earlier about RASP and IAST having a role. I almost think that machine learning is going to come into play, whether it's tying the insight from production back to development or tying the insight across tools. That just seems like a good use case for machine learning. Do you have any thoughts on that? Machine learning is good at identifying patterns in large streams of data that are not immediately identifiable by looking at them. So I think it's absolutely a legitimate use case. I hate to say I bring a jaded analyst perspective to this a little bit: every briefing I've done for two years now has some machine learning component, and we tend to get very specific and lean into the camera when someone says machine learning and say, explain exactly what you mean. But I absolutely think that, yeah, unsupervised learning is going to be a benefit when you talk about just a lot of production data on the way the application is being used.
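The "virtual patch" idea Cindy mentions, blocking a known exploit path in production while the code fix is still pending, might look something like this minimal sketch. Everything here is hypothetical: the exploit signature, the app, and the middleware name. Strictly speaking this behaves more like a tiny software WAF than a true in-runtime RASP agent, but it illustrates the block-until-fixed pattern.

```python
import re

# Hypothetical signature for a vulnerability that was found but not yet
# fixed in code. Real traffic would be URL-encoded; this is illustrative.
EXPLOIT = re.compile(r"(?i)(union\s+select|<script)")

class VirtualPatch:
    """WSGI middleware that rejects requests matching a known exploit."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if EXPLOIT.search(query):
            # Block in production instead of letting it reach the bug.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked"]
        return self.app(environ, start_response)

def app(environ, start_response):
    """Stand-in for the vulnerable application."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

patched = VirtualPatch(app)
```

The design choice worth noting is the one raised earlier in the discussion: this layer acts on inputs and outputs without touching the application code, which is exactly why devs historically resisted it and why per-component microservice architectures make it more palatable.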
And when we can identify attacker patterns, we can start to see these abuse cases in real time and figure out how we make our application resilient to them. And resiliency gets into failover components: how can the application gracefully degrade, how fast can you recover it, elements like that. I think that's absolutely a very important architectural discussion to have around every application. You know, tabletop it: if we have this problem, how are we going to recover from it? Realistically, how fast can we recover? Can we lose a component and go around it? Disaster recovery pieces, stuff like that. So there are definitely aspects of that. We see chaos engineering starting to talk about security a little bit, in terms of how can we make sure the application fails gracefully when things you don't expect to happen happen, and how can we build that uncertainty into our testing? So, yeah, all important elements of the future; not a ton of penetration yet there, but absolutely valid. So, you've covered the tool piece. What about the people piece, in terms of alignment? I think there's still room there, right, for alignment. Yeah, I like the slide that we talked about earlier. So, unlike the other data you saw, this one is a DevOps survey; we're asking devs and DevOps folks what they think. And we see, talking about that ideal use case, it's interesting to me that there's a little bit of disagreement between staff and senior management on this. Senior management is a little rosier on how everyone's getting along really well. But we see a lot of DevOps folks kind of saying, you know, we're working independently or working in silos. And we touched on this earlier: if one or the other team is missing in an enterprise, fine, run with it.
But if they're both there and you're not in alignment, that is not an ideal operating condition. When you talk about AST, it's: how can we reach a state where we move federated day-to-day activities to the people writing the code? You know, I want a lot of these open source vulnerability decisions made by the dev using the library. I want a lot of these vulnerabilities cleaned up when the code is written. I don't want these vulnerabilities escaping to production. I want things found by, you know, web app testing and DAST or IAST in the background and so forth. And I want security measuring the efficacy of this entire process, looking at how efficiently it's working and having a real-time view of the risk posture of these applications. You know, what vulnerabilities are we allowing into production on purpose, because we're viewing them as not important enough to hold up the code change? And how does security plug into that decision-making? So we see that top dotted-line bar; that's the way it's supposed to work. Folks are supposed to kind of get together on these issues. We talked about skill-set differences. Sometimes the security folks aren't speaking the same language as the DevOps folks, and, rightfully, some of the DevOps folks are going, okay, they're saying crazy things, or they're trying to dump thousand-page reports over the fence. Like, this relationship isn't working. How do we avoid the pain? So in that sense, security folks have to get with the program, so to speak, on how application security is going to work effectively. At the same time, DevOps folks can't ignore the security team either. They have to acknowledge that, from an organizational perspective, if there is a big problem someday, it's going to land on security first.
And the devs can always say that thing that you said initially: well, we weren't enabled with the tools and knowledge to do this, so we didn't know what we were doing. And it's a great excuse because, you know, it's legitimate, so you can go with it. So the bottom of this chart concerns me greatly: you're working independently, or saying, yeah, this is primarily one team's job or primarily the other's. I understand the argument that when you say security is everybody's responsibility, it really is nobody's responsibility then. The buck has to stop somewhere, and I would say, in general, it stops with the CISO, whether or not that CISO is prepared for it to stop there. But, you know, development increasingly understands security issues. Should it be their primary decision-making component? No; delivering functionality on time to business needs should be their primary measurement function. But security has to fit in somewhere. And so that's the challenge. Well, let me ask, we're just about out of time; we've got a couple minutes left. I wanted to have some Q&A, but people have been really good about putting things in the chat, so that's great. You know, one final thought here is that there's a concern about resources; it always runs into a resource issue. If you had one takeaway, or one first step that you think people should take in the next week or the next month, what would that be? If you're starting from scratch, I like launching these things with an educational initiative. That's the way I did it in two different companies: I brought in a third-party expert to deliver a training session. And, you know, I went around and shook some hands, going up to real people: oh, you know, hey, Bill, hey, Sandra, you guys are leaders in the organization, I really would love to have you in the front row for this. It's going to be an hour of your time, an hour and a half of your time.
It's going to help me out a lot in terms of instilling a security mindset, a security culture, in this place. Once you get that buy-in, having the tools discussion gets a little bit easier, because they know why we're asking, they know why I'm talking about AST and SAST. And a lot of folks at this point sort of understand the issues: they understand what static analysis is for, they understand what open source risk is. But you're asking for their time. You know, when I was a dev, and I can't speak for the folks in the audience, any new thing anyone asked me to do displaced something else. There was no free hour in the day to consider new things. And so getting buy-in on the priorities is the first thing. And what you don't want to do is wait for the first disaster to get buy-in. You know, I worked with so many companies, especially as a security consultant, where it was like going into a place after a bomb went off. Like, yeah, now we have buy-in, great, but there's been a huge, massive problem. That's not the thing to wait for to decide, now we're going to push an application security program. Yeah, so what I hear is that the people part of this is super important, and it needs to be like a personal outreach and personal relationship building. I think that's a good takeaway in terms of a place to start. As you said in the beginning, it's a team sport. The tools aren't always there, but the people relationships, in terms of moving this forward, are really important. Well, I think we're out of time, but this has been a great discussion. Daniel, really appreciate your time. Dmitri's got a question about whether threat modeling is important. I don't know if we can squeeze one more in, Agnes. Are we okay? Yes. Yes, it is important. Very important in architectural components, when you're developing requirements.
You have to tabletop potential threats. And what I was talking about with prioritization becomes important: if you have tools in place that are telling you where you're being attacked and how, it gives you a lot of input into thinking through how you want to architect new changes and new applications. It's like your abuse cases. Yeah. Exactly. It's the same kind of thing. Absolutely. All right. Well, be sure to reach out to Dan or me on LinkedIn; happy to continue any conversations there. I'm CBlake2000 on LinkedIn. Daniel, I don't know if you want to share yours. I'm under Daniel Kennedy. Yeah, confusing. All right. Well, thank you all. Appreciate your time and your interest and the engagement. Great discussion. Thanks everybody.