All right, thank you, Candace. That is amazing. All right, my name is Steve Shaker. Welcome to Navigating the Application Security Landscape. I was pretty excited to do this one today, for reasons that will become obvious shortly. A quick intro to me, and thanks for the intro already given. I'm going to change my title for this one and call myself the Cloud Security Navigator. I've been working at Prisma Cloud for going on two or three years. Boy, time flies. I've been writing code since the '80s, I gotta say. A long time, since I was very, very small, in C and Java and JavaScript and beyond. I work for the product team, I work on our open source tools, and I'll mention those a little later. My specializations are Kubernetes, containers, AppSec, and supply chain security. So kind of a lot to concentrate on. I am based in London, England, in spite of my accent. I've got two flags up there because I've got dual citizenship between Canada and the United Kingdom. And if you find yourself in London, I co-organize the DevSecOps London Gathering meetup there, a big community of 3,000 security enthusiasts. I also have a cloud native security show on Twitch and YouTube, so you can Google my name and find all sorts of wonderful things. Let's get started. Interesting beginning: a little bit of a quote. "What gets us into trouble is not what we don't know. It's what we know for sure that just ain't so." And when I say "ain't," I always want to affect a twang. Just ain't so. Mark Twain. Now, you're probably thinking, what does this have to do with AppSec? Everything. If it sounds familiar, you've probably seen it before: it's at the beginning of The Big Short, and it's also at the beginning of An Inconvenient Truth. And what I found fun and AppSec-y about it is what it's saying, and that it's being attributed to Mark Twain. Or was it Mark Twain?
Because it's also attributed to four other people. And if you do your homework on it, you find out that Mark Twain didn't really say it. He didn't even say anything like it. Other people said something closer to it, but Mark Twain's the most famous, so he's the one they stick below it. And it's ironic, because it is an example of itself. It's kind of meta. What we know for sure that just ain't so... what we know for sure that just ain't so is that this quote is Mark Twain's. So it's an example of itself. I thought it was pretty funny. And by the way, I'm just gonna throw this out there: all these images that I'm using are AI-generated. If you're wondering how I found an image of somebody putting on a Mark Twain mask: I asked AI, and it did it. Cool. So this is where we're gonna start, by introducing the concept of a false positive. And if you are keen on application security, you're probably already rolling your eyes going, ugh, yes. One of the biggest problems with application security is noise and false positives, and how do we solve those? We're gonna come to that. But I would be remiss if I didn't start with the application. Because when we talk about application security and the landscape around it: what is the landscape? What does securing it look like? What is the application? I think it's interesting to try to redefine the application. Now, I mentioned that I've been writing code since 1986 or so, if I'm being precise, starting off in C, and there were a million ways to write bad code in C, trust me. It is fraught with pitfalls, let's say. Then, moving into languages that have memory management and so on, I had to be more creative to write bad code. And that was really how I got into security: I went from writing bad code to code craftsmanship, and eventually it was a natural progression into security. But what is the application? What are we talking about?
Usually, as practitioners of application security, we think of the OWASP Top 10. And the OWASP Top 10 is a nice guide, certainly, to what we're doing wrong. If you're unfamiliar with OWASP, and I imagine some people might still be, it's the Open Worldwide Application Security Project; it used to be the Open Web Application Security Project. Every three to four years it reboots its ranking of what we think the worst problems are. It used to be SQL injection at the top, and now it's access control and misconfigurations and data integrity. The list is organic and always changing. But it does help us figure out, if we look at it, what we're considering the application to be. If it's citing outdated components, okay, that implies dependencies. That's not just the code I write. If it's talking about misconfigurations of the cloud itself, oh, okay, so is the cloud itself part of my application? It really expands your mind as to where the application begins and ends. And this is what I'm hoping to do today: start by defining the application and the landscape. Because if we don't know what we're talking about, how can we possibly secure it? Now let me muddy the waters even more. In addition to the OWASP Top 10, we have in application security the CWE, the Common Weakness Enumeration. These are also what we look for in terms of application security. If we scan our own code with a tool, it often comes back with CWE numbers: proof that there are recorded weaknesses, and that we have written code that falls into the trap of certain ones. They are becoming more prevalent, and this is directly from the website, more prevalent in vulnerability and exposure conversations. I think of the OWASP Top 10 as styles of vulnerabilities, whereas CWEs tend to be closer to the root causes. They are the programmatic missteps that can become the OWASP Top 10.
But on closer analysis, I find the CWE list pretty fascinating in how it is created. We had a 2021 OWASP Top 10, and we have a 2023 Top 25 list for the CWEs. So let's take a quick look at the top 11. Yes, only the top 11, and you can see, just for reference, I have the OWASP Top 10 down there on the side, so you can see how it maps, with broken access control and insecure design; they're all there. And we look at number one. Number one, top of the list. Is it broken access control? No, it's not. It's out-of-bounds write: a classic old C problem that can blow up the whole program. Okay, so we're still making those mistakes. Now, CWEs aren't by default connected to security; they just very often are, and the conversations around CWEs do tend to trend toward security. So this is a problem to do with our application, typically to do with our own code. Let's just quickly pop down. I'm not gonna go over every single one, just expose some of them. You can see improper neutralization of input: that can be injection, for example, or that can lead to remote code execution. Use-after-free, a classic. And the sister of the out-of-bounds write: the out-of-bounds read. And we work our way down to number 10. What I do find amazing is that CWE-352, cross-site request forgery, which used to be in the OWASP Top 10, has fallen completely off it, mainly because most programming frameworks now protect you against it automatically. But it's interesting that it's still present in the top 10 of the CWEs. So there's a bit of a map. But the one I found very curious was number 11: missing authorization, which maps directly to number one on the OWASP Top 10, broken access control. So number 11 maps to number one. We had to go all the way to 11 before we got to the number one issue on the OWASP Top 10. And what's even more amazing is that it's CWE-862.
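To make that number 11, CWE-862 missing authorization, concrete, here's a minimal Python sketch. Everything in it (the report store, the user names, the "admin" role) is invented for illustration; the point is just that the vulnerable handler knows who the caller is but never asks whether they are allowed to perform the action.

```python
# A toy sketch of CWE-862 (missing authorization). All names are
# illustrative, not from any real codebase.

def delete_report_vulnerable(user, report_id, reports):
    # The caller is authenticated (we know who `user` is), but we never
    # check whether they are *authorized* to delete -- that's CWE-862.
    reports.pop(report_id, None)

def delete_report_fixed(user, report_id, reports):
    # The fix: an explicit authorization check before the privileged action.
    owner = reports.get(report_id, {}).get("owner")
    if owner != user and user != "admin":
        raise PermissionError(f"{user} may not delete report {report_id}")
    reports.pop(report_id, None)
```

The weakness is invisible to the vulnerable function in isolation, which is part of why it took until number 11 on the CWE list: it's an absence, not a bug you can point at.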
So that means we went through documenting 861 programmatic missteps or weaknesses before we got to 862, which is now the number one problem with security in applications. Hmm, interesting. Now, if you're curious and you're thinking, wow, CWEs, how many are there? A lot: upwards of almost 1,500, I think, is where we're at right now. And if you think about that, and you wonder why application security has a tendency to be a little bit confusing and noisy, well, that's why. If there are 1,500 ways I can write bad code, and for each of those there are probably five to ten different ways to create, say, a CWE-862, then it's no surprise that when things like static analysis tools look at our application, they make a lot of noise. And that is probably the first big problem with classic application security that we're gonna start talking about. However, let's go back to what the application is. Now that we've analyzed some of the contributors to our mental definition, let's try to answer this a little more clearly. The application is the code I write. Okay, yes. This used to be all the application was in our minds. Back in the day, say in the '90s when I wrote a lot of C code, we barely had any dependencies. Maybe some drivers that came with integrations and with hardware, but the open source world just wasn't as huge. It wasn't what it is today. But now we have to acknowledge that it's much, much, much bigger than it used to be. The application is now the code I write and its dependencies. But does it stop there? I mean, we know from any one of many reports, from Synopsys or Sonatype, like the OSSRA report, that modern cloud native applications are upwards of 80% open source dependencies. So we have to consider the dependencies. And that can be troublesome. Why?
Because where we talked about noise with CWEs, now we have CVEs to think about as well: known vulnerabilities in our dependencies. And if we look at the way applications are built, in a very ad hoc way, developers are a little bit like humans. They are humans, but follow me. If I'm the application and I live in my house, which is my cloud, I have dependencies. And my dependencies might be, I don't know, that bread maker you bought. Remember buying the bread maker? Where is it now? You still have it, but it's on the top shelf in the corner of the kitchen and never really sees the light of day. Or maybe you got two blenders as wedding gifts, and did you get rid of one? Not really, because you need them; they're dependencies. And if we look at the way we kind of hoard our dependencies, developers sometimes create applications a little bit like that. I know for sure that when I'm writing something, and I've been writing a lot of Python recently, I add a lot of dependencies that I eventually realize I didn't need. Or sometimes I've got two dependencies that do roughly the same thing, I try both of them, I realize this one's better, and I forget to remove the other from my dependency list. So you end up with this potential dependency bloat that can create noise and vulnerability findings that simply aren't real, that don't matter. You get a version of a false positive even with something like software composition analysis, as opposed to SAST or static analysis. So we're already creating a lot of noise with just these two. How about the third one? We saw cloud misconfiguration called out in the Top 10. So is the cloud part of our application? I would argue, yeah. The cloud is probably the first thing you have to set up securely, ideally using infrastructure as code like Terraform or CloudFormation or Bicep, while doing a good job of making sure that you apply that code.
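That dependency-bloat problem can even be spotted mechanically. Here's a toy sketch, assuming a Python project: compare the declared dependency names against what the code actually imports. A real tool has to handle the fact that a package's import name can differ from its distribution name (e.g. the pyyaml package imports as yaml), which this sketch deliberately ignores.

```python
import ast

def unused_dependencies(declared, source_files):
    """Report declared dependencies never imported by any source file.

    `declared` is a list of dependency names (as they would appear in a
    requirements file); `source_files` is a list of Python source strings.
    A toy sketch, not a real SCA tool.
    """
    imported = set()
    for src in source_files:
        for node in ast.walk(ast.parse(src)):
            if isinstance(node, ast.Import):
                # `import a.b` counts as a use of top-level package `a`
                imported.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imported.add(node.module.split(".")[0])
    return sorted(set(declared) - imported)
```

For example, `unused_dependencies(["requests", "numpy"], ["import requests"])` reports that `numpy` is declared but never imported: exactly the blender you forgot to return.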
If it's code and it's in a repository, that sounds like an application to me. And the cloud has to be set up very specifically to run my application. Maybe it's running in Kubernetes. Maybe I have Kubernetes manifests and Dockerfiles as well. So is that my application? I would argue, yeah, it is. The definition of the application, and thus the landscape we are attempting to secure, is expanding. Certainly the move to containerization put the infrastructure in the same pocket as the application, and since that happened, we have to keep expanding our definition of the application. But this can be an advantage; we'll come to that. Now, the final thing, I think, is the least considered part of the application, and that's the pipeline that builds it. You might be thinking, what, Steve, come on? How is that my application? Well, it kind of is, because it's actually one of the most vulnerable parts of our application's journey. Think of it this way. Your life is you; you've got your dependencies, the bread maker, and you've got your house, which is your cloud. You're securing all of it. But you don't just stay home. You've got something that takes you on a journey, perhaps your car. That's still part of you, and that's sort of part of the application. As the software gets written, it's the same thing as securing data at rest and securing data in transit. The application is data. And the application will be built and constructed and orchestrated via a CI/CD pipeline, and it will be pushed into the cloud. That journey needs to be as secure as the code itself. So let's think about the problem with AppSec at the moment. One of the problems with the way we're executing application security is this. The code I write: we've got a tool for that. That would be static analysis, static application security testing. SAST, that's the acronym. And its dependencies: we've got a tool for that too, called software composition analysis, or SCA.
And this is almost the order in which these tools were created. I used to work at a company called Coverity. They created a static analysis tool, a pretty good one, particularly good with C. I think that was 2005? So almost 20 years of static analysis. SAST has been around a long time. Then SCA: there are lots of dedicated tools out there that do SCA, some free, some not, to tell you what the vulnerabilities in your dependencies are. And there can be a lot; it can be pretty noisy. So that's potentially a separate tool as well. Maybe some of you are watching this going, yeah, I have a SAST tool, I have my favorite SCA tool. Keep going. Okay, the cloud it runs in: what are my tools for that? I've got a cloud security posture management (CSPM) tool that looks at the configuration of my cloud to make sure I haven't done something silly, like leave an S3 bucket unencrypted and exposed. And I've got CWPP, cloud workload protection, which is monitoring my Kubernetes and my containers, looking at my serverless functions, monitoring all of those things that are running to make sure there's no anomalous behavior, and maybe giving me an admission controller that stops containers with vulnerabilities from running. So I've got two more tools. Awesome. And of course, the pipeline that builds it. There isn't even really an acronym for that one yet. So now you've got to figure out: how do I secure the pipeline? There are some open source tools you can use. I work on one of them: Checkov is an open source tool, and you can go check that out at checkov.io. The OpenSSF, the Open Source Security Foundation, has a tool as well that you can get for free, which will check your pipeline and make sure you've got things like multi-factor authentication turned on and branch protection and all that. So there are options.
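To make the "unencrypted and exposed S3 bucket" example concrete, here's a hypothetical Terraform fragment (the resource names are invented). CSPM would catch this on the live cloud; an IaC scanner such as Checkov would flag the same problems in this file before the cloud is ever provisioned.

```hcl
# Illustrative only: a bucket with no encryption configured and public
# access protections explicitly disabled.
resource "aws_s3_bucket" "customer_data" {
  bucket = "example-customer-data"
  # Missing: a server-side encryption configuration
  # (the "unencrypted" finding a scanner would raise).
}

resource "aws_s3_bucket_public_access_block" "customer_data" {
  bucket                  = aws_s3_bucket.customer_data.id
  block_public_acls       = false # misconfiguration: should be true
  block_public_policy     = false # misconfiguration: should be true
  ignore_public_acls      = false # misconfiguration: should be true
  restrict_public_buckets = false # misconfiguration: should be true
}
```

The same file is also a small argument for the cloud being part of the application: it lives in the repository, gets reviewed in PRs, and gets scanned, just like code.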
You can go open source, but there's not really an acronym for that per se, is there? Which is interesting, but it also implies you do need something in the tool realm for the pipeline that builds it. Now, why am I going on about the pipeline? Well, for a very good reason: the pipeline that builds it is the emerging threat. Let's take a look back at the past few years. CircleCI, 2023: a malware attack on an engineer's laptop exposed CircleCI's secrets. Interesting. Look at Codecov: a supply chain attack that went undetected for months. Look at npm: the ubiquitous coa npm library was hijacked. And then, of course, SolarWinds, one of the most sophisticated hacks of all time, back in 2020. These are supply chain and pipeline attacks. This is not the application itself; this is the journey being attacked. And it's really important that we start considering that part of our application and part of our security strategy. Now, as a reaction to this, if you're unfamiliar: I did talk about the OWASP Top 10, the normal one, but there are many top 10s on OWASP. Some would say too many. Here's another one: the Top 10 CI/CD Security Risks. Some of these terms might seem a bit foreign because they're not ones we're used to. Poisoned pipeline execution is one. Is there some way, and there are ways to do this in most CI pipelines, that I can submit code that can run simply by creating a pull request? An example of this within, say, GitHub: if you submit a workflow file as part of a check-in, then under very weak permission settings that workflow can actually run before it gets approved as a PR. Very weird, right? There's a whole other talk I've got about how you can abuse that. These are real threats, and they're really interesting. I'm not gonna go through all of these, but just to make you aware: if you've not heard of the Top 10 CI/CD Security Risks, go give it a Google, go to the OWASP website and check it out.
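To make poisoned pipeline execution a little more concrete, here's a hypothetical GitHub Actions workflow (the job and script names are invented) showing one well-known variant of the problem: a `pull_request_target` workflow runs with the base repository's secrets, so explicitly checking out and executing the pull request's head commit hands attacker-controlled code those secrets.

```yaml
# Illustrative only: DO NOT use this pattern.
name: ci
on: pull_request_target   # runs in the base repo's context, secrets included
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the attacker-controlled PR code...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...and executes it while secrets are available to the job.
      - run: ./build.sh
```

The safe versions of this pattern either use the plain `pull_request` trigger (no secrets for forks) or avoid executing untrusted code in the privileged context.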
There's some really important application security learning to be had there, and it is a direct reaction to some of the hacks that have happened in the past few years. All right, getting back on track. If we recall, we've got SAST, SCA, CSPM, CWPP, and, just for lack of a term, let's call it a CI/CD platform security tool. Maybe we have five different tools, all attempting to do application security for us. Now, that sounds like a solution, but actually I think it's part of the problem. And I partly blame myself. I've worked over the past eight years on different styles of these tools. I've worked with all of them, and I've been a contributor to almost all of them. And it wasn't until the past few years that I started to realize that the act of creating individual tools for individual pieces of our application security problem (a SAST tool to scan my code, an SCA tool to look at dependencies) and running them independently has meant that when I visited organizations and started to talk to them about their security strategy, I found a rather large problem. Most organizations, as a result of the tooling being independent, have siloed their security teams to map to that tooling expertise, or to the technological geographies in which the tools exist. What do I mean by that? Well, SAST tends to live with the developer community, because it's analyzing their code. CSPM lives with the cloud community, and the AppSec team and the cloud-sec team are almost like InfoSec and AppSec: they aren't the same people. I have been in a room where security was brought in to speak to me, and the AppSec and InfoSec teams shook hands and introduced themselves. That was the first time they'd met. And yet I would argue that this is all the application. Securing the application should not happen in silos. So we have a bit of a problem that we need to overcome, and I'm pointing that out.
It's almost like a reverse Conway's law: simply because we as vendors create individual solutions, we have actually shaped some organizations in such a way that if a tool comes out that can combine things or be more holistic, where does it go? Who owns it? So that is essentially our reality in AppSec. Now, to expand on the AppSec reality, and I've been alluding to it the whole time: I would say AppSec in the past few years has made developers cringe. Why has it made developers cringe? Because it's just noise. We're focusing on what we would call findings. If I've got a large-scale application and I run SAST on it, and I know there are nearly 1,500 CWEs, maybe I find 1,000 things wrong. Then my SCA reports that I've got another 2,000 vulnerabilities in my dependencies. And I haven't even looked at analyzing my Terraform code to see that I've got 200 misconfigurations there before I've even provisioned my cloud, nor have I looked at the fact that I probably have issues in my Kubernetes manifests and the containers that are about to ship. I'm just creating so many findings that I don't necessarily know what to do with them all. Now, the reality is that a lot of these findings simply don't matter, and that's what we need a solution for. The findings themselves are important, but we need a way to dig into them and figure out: how do I prioritize, and what really represents risk? And that's what I want to introduce: we're looking for risk, not findings. Take, for example, this light over here that's telling me my seatbelt isn't on. Okay, well, there are a few other indicators in there. Like, for example, I'm not actually traveling; there's no RPM, and I don't even have any fuel in the car. So I'm gonna go ahead and call that a false positive. Alerts without context. Okay, how do I bring context? Well, maybe everything really isn't fine. Maybe for the indicator saying I should have my seatbelt on, I need to look at the context of the situation.
For example, this context: there's no seatbelt on. Okay, great. What else is wrong? Can I see indicators that might make this real risk? Yeah, the windows. I'm obviously driving very fast here. That is a problem. I've got some loose objects sitting on the seat, and she just looks way too happy, you know? What's going on? She's really focusing. There's an interesting scenario playing out here that makes me feel this represents real risk. And our expectations of our application security solutions, maybe it's a marketing issue, I don't know, are that they show us exactly the needle in the needle stack: this is the piece I need to focus on. But they don't, and there's a reason why they don't. What they do give us, for sure, instead of great results, is alert fatigue. And if we have alert fatigue, then we don't work on fixing anything. What we do instead is think about false positives. All it takes is one false positive within your dozen alerts and you probably won't look at any of them. Take, for example, a friend who walks up to you in the middle of a conversation and says, you know the moon landing? Yeah, that was fake. You're like, wait, what? Okay, all right, conspiracy theorist, great. There's a false positive. A friend just gave you a false positive. Now, do you listen to anything they say after that with any level of seriousness? Yeah, maybe not. And that is the damage a false positive can do, in particular when you are already overrun with alerts and findings. All right, let's move on and double down on the application. So, a quick recap: the application is the cloud and all my workloads. The serverless functions, the Kubernetes, the orchestrator, the VMs, the compute, storage, IAM, and data, all of this. As it's running in real time, this is my application.
Not just that: it is the repository that is holding my proprietary code, my infrastructure as code that provisioned that cloud, and the packages upon which I depend. That is also my application. And we're not done there: it is also the journey, the continuous integration, the repository, the registry that holds my images. All of this is the application, and this is the landscape. This is what we need to secure. Why? Because everything can have a threat. We've got the things we've grown accustomed to over the years: misconfigurations, threats, potential lateral movement, escalations. Any and all of this can happen at runtime. But our pipeline can be under attack too, because our pipeline is building with the same container images that might be vulnerable. If you use GitHub Actions, for example, a lot of these actions run in containers. GitHub Actions can depend on GitHub Actions that can be completely vulnerable. There's an amazing talk that was given at Black Hat and DEF CON back in August, about a GitHub worm, that specifically analyzed the top 1,000 open source GitHub projects used as dependencies, showing how, simply through the dependencies they have, you can get access to corrupt a lot of them. Vulnerabilities, misconfigurations, and malware in our own code, in our packages, in our dependencies: the landscape and the attack surface are much bigger than we expect. And what we would hope, once we realize this, is that instead of siloing all the different components of AppSec, we would look for solutions that integrate all the different pieces. Now, why do we want that? Think about the context I brought to the seatbelt. How do we bring context to findings? Well, it's through holistic visibility. If I can combine, as one example, software composition analysis with SAST. Now, we think of SAST as something that is used to find weaknesses.
Well, it's also recording a lot of very important information. For example, it's usually creating an abstract syntax tree and walking all the different paths through our own code. Independent of whether it finds weaknesses or not, that's a very powerful tool. If it can identify which dependency calls are being used by my own code, perhaps it can tell me whether a vulnerability in a dependency found by an SCA tool is or isn't reachable. That's one huge advantage of combining SAST with SCA, but how many people actually do that? That is the level of context that the future of AppSec represents; that's where we're going. Now, what does that look like? Let's take a look at this. Real AppSec risk: we need to think about it not as findings, not as vulnerabilities, not necessarily as misconfigurations, but as attack paths. Is there an attack path that will lead an attacker to the thing I'm defending? What am I defending? I'm defending data, most of the time. There's my destination: my storage bucket with personally identifiable information. There's my entry point: the internet. If I can see, in one holistic view, the combined efforts of all those tools I just talked about, SAST, SCA, CSPM, let's add a web application firewall in there, let's add API security, then I can see that there is a path from the internet, publicly exposed, to my Kubernetes namespace, with a container that has a critical vulnerability, that is over-provisioned and has access to personal information. I can see this path exists. Why? Because instead of siloing all of my tools, I have one that looks at all of them and brings them together. Now, that's a threat. It's a threat until I'm actually looking at anomaly detection and I can see unusually high data volume and data exfiltration. Now I know it's an attack. To have this level of comprehensive visibility, we need to move forward in our definition of application security.
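That SAST-plus-SCA reachability idea can be sketched in a few lines of Python: given a source file and a dependency symbol that an SCA tool flagged as vulnerable, walk the abstract syntax tree and check whether the code ever actually calls it. The `yaml` / `load` names in the example are just an illustration of a commonly flagged symbol; real tools build full call graphs, while this only catches direct calls.

```python
import ast

def calls_vulnerable_symbol(source, package, symbol):
    """Toy reachability check: does `source` directly call
    `package.symbol(...)`, or call `symbol` after importing it
    via `from package import symbol`? A sketch, not a SAST engine.
    """
    tree = ast.parse(source)
    # Names under which the flagged symbol was imported directly.
    imported_as = {
        alias.asname or alias.name
        for node in ast.walk(tree)
        if isinstance(node, ast.ImportFrom) and node.module == package
        for alias in node.names
        if alias.name == symbol
    }
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            f = node.func
            if isinstance(f, ast.Name) and f.id in imported_as:
                return True  # e.g. `from pkg import sym; sym(...)`
            if (isinstance(f, ast.Attribute) and f.attr == symbol
                    and isinstance(f.value, ast.Name) and f.value.id == package):
                return True  # e.g. `import pkg; pkg.sym(...)`
    return False
```

The payoff is exactly the context argument from the talk: a critical CVE in a dependency whose vulnerable function is never reached is a very different risk from one that sits on your hot path.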
And that definition of application security needs to encompass an un-siloing not just of our tools, but of our security organizations as well. We need more conversation. We need to look at what we already have in terms of tools, be it open source or commercial, and look at how we can bring the information together, how we can do the data analysis to normalize and rationalize the findings into something that looks more like this: something that shows us we have real, tangible threats. In our search for the risky needle in the needle stack, we need to be able to distill findings into actually usable data: what code is used, the reachability of vulnerabilities, the reachability of data at risk. This is the solution. So if we look at what modern AppSec does, the goal is to take all of the findings, bring them into a central location, uniting DSPM, CWPP, CSPM, SCA, all of it, relate it through attack paths, and then give us priority. And then, if we can, bring in even just logs, like we would with something like Splunk, and reduce the, in this particular example, 17,000 alerts down to the critical ones that represent actual real risk: a number as low as 24. And from there, we actually have something actionable. Because how actionable is most legacy application security data? Very much not, in many cases. If I have only certain attack paths, and I can see where in the attack path I need to fix something, maybe I don't need to fix the critical vulnerability; maybe I need to change the admin access. Where's my best fix location? How do I fix it now? And how do I fix it permanently? That's important to know too.
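Under the hood, the attack-path idea is just graph reachability over the combined findings. A minimal sketch, with entirely invented node names: each edge is a relationship one of the formerly siloed tools knows about (CSPM knows what's exposed, SCA knows what's vulnerable, IAM analysis knows what's over-provisioned), and a finding only earns priority if it sits on a path from the entry point to the data.

```python
from collections import deque

def attack_paths(edges, source, targets):
    """Breadth-first search over a findings graph.

    `edges` maps each node to the nodes an attacker could move to from it.
    Returns the shortest discovered path from `source` to each reachable
    node in `targets`. A sketch of prioritization, not a product.
    """
    paths, seen, queue = [], {source}, deque([[source]])
    while queue:
        path = queue.popleft()
        if path[-1] in targets:
            paths.append(path)
            continue
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return paths
```

With the example path from the talk (invented labels), the 17,000 isolated findings collapse into the handful of chains that actually end at the data:

```python
edges = {
    "internet": ["k8s-namespace"],        # CSPM: publicly exposed
    "k8s-namespace": ["vuln-container"],  # SCA: critical CVE in the image
    "vuln-container": ["pii-bucket"],     # IAM: over-provisioned role
}
attack_paths(edges, "internet", {"pii-bucket"})
# -> [["internet", "k8s-namespace", "vuln-container", "pii-bucket"]]
```

Cutting any one edge (fix the exposure, patch the CVE, or trim the role) breaks the path, which is exactly the "best fix location" question.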
And again, when we talk about integrating something like SCA with CSPM, or let's say IaC analysis: if I find a misconfiguration in the cloud, I might want to make that change immediately, so I know the attack path has been cut off. But I still need to make the change in code. How do I do that? How do I have that conversation? How do I know where that is? This is what modern application security needs to look like. If I've reduced all of my findings to something tangible, something with an attack path, I need to fix it twice. Once now, to stop the flow, and then I need to be able to create a pull request that sends a message back to the developer in their language, not necessarily via a Jira ticket, so they can see via the PR: okay, I need to bump this version in my package.json, or I need to add this particular parameter to my Terraform. It says: we've fixed it in the cloud, here's the equivalent fix back in code, please approve this pull request, and we won't undo our good work in the cloud. Now, this is real application security at its finest. Does it exist yet? Kinda. Full disclosure, as I said at the beginning, I work for Prisma Cloud, and this is the direction we are going in terms of approaching application security. And we're not alone in this. This is the direction the industry is going: consolidating what are siloed tools and organizations into something more platform-focused. There's a modern term for it, cloud native application protection platform, because we are suddenly realizing that the power of a platform-level application security tool is immense. And then finally, being able to fix things as early as possible: IDE integration.
So, all of that information, all that holistic visibility we just had: if we can bring that in, learn from it, start generating new rules and new concepts of what we consider to be threats, and push those definitions directly to a developer as they're writing code, well, then we can start to prevent, which is the most important thing, and even predict that attack paths may occur based on code being written right now. This can apply both to the creation of infrastructure as code and to the code I write and its dependencies. You can see, and it's a bit of a shameless plug, that this is powered by Checkov. Checkov is the open source tool that I work on with a huge team of amazing engineers, which you can go get for free at checkov.io. It will analyze infrastructure as code like Terraform or Kubernetes manifests, that kind of thing. If you want to get started, if you want to get your developers doing security, that is a free takeaway right there. And that is proper shift left. That is the definition of proper application security and DevSecOps: make things happen as early as possible. And it would be great if everybody did that, because then we would find that our noise at runtime was incredibly reduced. We'd only be looking at real risk. We'd have a much quieter world, and we'd sleep a lot better at night, because application security was finally telling us that only this one needle was important. Okay, I got through that relatively quickly. Plenty of time if you have any questions, and yeah, thanks. Weird thank-you slide. Here's the real one. Back to you, Candace. Yeah, if anybody has any questions, feel free to drop them in the Q&A box. We have some time left to answer any of your questions. Let's give it like a minute and see if anyone has anything. Sure, I figure I either put people to sleep or I answered all their questions, one of the two. All right, looks like we got a few questions then.
I'll do them in, well, the reverse order, I think, just 'cause, why not. What part of the model shown is available now from Prisma Cloud? I'm not sure I understand what you mean by model, but the screenshots that I showed of Prisma are current screenshots. So anything I pointed out there as currently working is real within Prisma Cloud. Maybe that answers the question; if not, feel free to let me know. Recommendations for self-study regarding these tools? Yeah, there's a variety of tools out there in terms of SAST and SCA. Interesting, in terms of self-study: Palo Alto has a partnership with Udemy, and there are lots of great courses on Udemy. I know you don't always get it for free, but internally, by default, that's where we go. Are there any tools that report the various OWASP CI/CD issues? Yes. For certain CI/CD platforms like Bitbucket, GitHub, GitLab, the tool that I just posted into the chat for everyone, checkov.io, if you have an API key for your repository and it's visible to Checkov, it will actually go and look at your CI/CD configuration and tell you what you've done wrong. And also the OpenSSF has a thing called Scorecard. So if you look up OpenSSF, actually, I know what I'll do, I'll paste that into the chat too so you can see. That's a Linux Foundation project, actually. So go check out OpenSSF; they have some tools that also check the OWASP CI/CD issues. So definitely go check that out. And then there's one that says, how different is AppSec from ASPM? That's a great question. Not that they aren't all great questions, but ASPM is another one of those hot terms at the moment, where I'm uncertain whether it's marketing or real. ASPM stands for Application Security Posture Management. There are a lot of SPMs out there. There's DSPM, Data Security Posture Management. There are a lot of them out there.
I think I even heard KSPM, Kubernetes Security Posture Management, once, and I was a little unsure, like, is that real? But why not, right? Application Security Posture Management: at the moment there are some players out there doing a great job of it. It is a thing, but it is monitoring the application running in the cloud very much like you monitor the cloud itself. In some instances it takes information it knows from the repository and monitors the cloud almost in a grey-box kind of way. So it can look at the application and know whether it's doing the things it's supposed to do, things like open ports and API behaviors, and at the same time it takes a bit of a CSPM approach as well. So it is integrated; it does some of what I said. It takes some of the tools that we are siloing at the moment and integrates them. I think one of the solutions, which was recently acquired, I think it was called Bionic, actually did some reverse engineering of the binaries of running applications when it didn't have access to the source. So it is runtime application security combined with a bit of CSPM. That is what ASPM is. I think it's a pretty cool concept, but at the moment it doesn't quite complete the application security picture that I'm trying to convey. And for me, that is: if I do find something running in an application that is anomalous, and it's perhaps accessible via knowledge of cloud security, how do I fix it in source? How do I get a permanent fix out of that? This is obviously my opinion and not Prisma Cloud's. I think as ASPM develops, it will actually just become application security again. But it's a good question and a very interesting perspective on a way of tackling application security at runtime. I just think there needs to be an extension back to source almost every time, for me.
Coming from somebody who writes code, both infrastructure as code and application code, I want my life to be made easier by application security, and I'm not 100% convinced ASPM is the answer to that. Oh, long question. Okay: existing security standards, like SOC 2 and ISO, focus on individual vulnerability reports. Too true. Will this play into a shift in those standards as well? I would hope we would see a shift in those standards. That's a very good question, Joshua. The industry tends to lag; you probably already know that, though. What I guess we're aware of within some of the more platform-level tools is that, certainly at Prisma Cloud, we definitely understand that SOC 2 and ISO are our reality. But having the overarching full visibility of everything within the cloud application, everything, makes it incredibly easier to generate the reports that are required for those kinds of certifications. It would be nice if the certifications changed to be a bit more about risk as opposed to vulnerability, because a vulnerability count is not a real definition of risk. Maybe over time, because a lot of the people who create those requirements work in the industry, quite often for vendors. So hopefully we can push towards that. But in the meantime, all of the different tools, whether it be Prisma Cloud or any of the ASPM solutions, we are very much aware of what we need to create, and having larger visibility makes that easier, not just for us but for everyone. See, did I miss any? Good questions, five questions. Okay, well, any last-minute ones? I'm here to help. I can be reached, by the way; I don't know if my name was ever up on the screen, but Giguere is pretty unique. So if you search for Steve Giguere, G-I-G-U-E-R-E, you'll find me on LinkedIn. I can't hide, unfortunately. You can ask me questions there. Any reason for not covering SBOM?
Because I hear the word SBOM 400 times a day, and maybe I was rebelling and decided not to say SBOM. SBOM is almost there, I think. I know there's a bit of a mandate, particularly in North America, for SBOM generation. I think that's a very good step forward. Software bill of materials, for those of you who are like, SBOM, what's he talking about? Generally created by software composition analysis tools and container scanners, et cetera, it says: this is the list of all my dependencies. Here they are, full disclosure, these are the dependencies in the package that I am shipping to you. It's a great ingredients list. Same as when you buy food, you've got all your ingredients there. You know what's in it, you know the calories, you know everything, right? This is an amazing step forward in software, to have software bills of materials. However, I don't 100% think that SBOMs solve much, because again, these are just packages. Packages have vulnerabilities. Vulnerabilities aren't necessarily threatening unless there's some risk there. There's an addition to SBOMs, which is VEX, that is looking at bringing context to SBOMs. And I think in combination, once that part of the industry matures a little, that will actually bring more relevance to SBOMs. But right now, SBOMs almost feel like a compliance directive: a good thing to have, but not something we're necessarily leveraging yet. And in particular, if you actually created a real SBOM, in many cases it doesn't go deep enough. If you look at the potential SBOM for Kubernetes itself, and you went through all the transitive dependencies and put them in an SBOM, it is unbelievably, unreadably huge. So we need to create SBOMs, I think it's a step in the right direction, but right now it's a little bit of theater. So that's not why I didn't mention it, but that's my comment on it. That was a good one.
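To make the "ingredients list" idea concrete, here is a toy Python sketch; the component names and the structure are a minimal, hypothetical CycloneDX-style fragment, not a real SBOM, which as noted above would run to thousands of entries:

```python
import json

# A minimal CycloneDX-style SBOM fragment (hypothetical components,
# for illustration only -- real SBOMs are vastly larger).
SBOM_TEXT = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "left-pad", "version": "1.3.0"},
    {"type": "library", "name": "lodash", "version": "4.17.21"}
  ]
}
"""

def list_components(sbom_json: str) -> list[str]:
    """Return each declared dependency as 'name@version'."""
    sbom = json.loads(sbom_json)
    return [f'{c["name"]}@{c["version"]}' for c in sbom.get("components", [])]

print(list_components(SBOM_TEXT))
```

The list itself carries no risk context; that is the gap VEX-style exploitability data is meant to fill.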
Maybe say something controversial. Okay, maybe we're done. Oh, look at that. Well, thanks, Prakash. All right, ready to wrap up, Steve? I think we're good. All right, thank you so much, Steve, for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.