Okay, thanks very much. There we go, recording in progress. We've answered the very first question already: it will be recorded. Hello, hello, hello. My name is Steve Giguere. Big thanks to everybody for showing up to this webinar. Yeah, that's a long title, I know; we came up with that a while ago and we're stuck with it. It's long. I came up with some other ones later, called Firing Chekhov's Gun and Know What's In Your Bag, and you'll find out why I came up with those titles a little later on. A little bit of foreshadowing for you there. We're gonna be looking into software composition analysis, open source, putting that all together, context aware, all sorts of wonderful things as we go through this presentation. Quick introduction to me, if you've never heard of me. My name is Steve Giguere, I am Canadian, if you can tell by the accent, and I do live in Britain. I'm kind of 50-50, a dual citizen. I'm a cloud security advocate, a developer advocate. It says Bridgecrew there; Bridgecrew is Palo Alto Networks now, so kind of the same thing. I've been writing code since 19-something, and we're just gonna leave it at that. I'm focusing on cybersecurity. I've worked for a few different companies in the space, now focusing mainly on supply chain security, which encompasses an awful lot. You might even say everything. If you find yourself in London, England, I also run a meetup, or I co-organize, I should say, a meetup called DevSecOps London Gathering. It's running monthly, come check it out. And if you're around on Fridays, I go on Twitch and YouTube with an ex-colleague of mine, and we just talk about the week in cybersecurity. It's a lot of fun. If you like recklessly clicking QR codes, there it is. There's one for you and we can be friends on LinkedIn. No worries. What's the agenda? Well, it's gonna be pretty exciting, right? And this dog's gonna get a face full. That's exciting. I wonder what I'm gonna say there. We're gonna look at some pretty logos.
I like that one. We're gonna reflect on some news and how that changed the face of licensing and open source almost forever. And we're gonna meet this guy. I mean, he's a friendly looking chap, isn't he? And I'm gonna throw a graph up on the screen that makes me feel really smart. This is just some of the things that we're gonna encounter on our journey over the next 45 minutes. But let's begin. Let's begin with challenges, mainly challenges with open source. Why am I talking about open source? Well, cloud native applications in particular, but really all applications, are rife with open source. I could cite three or four different sources, the OSSRA report by Synopsys, the Sonatype state of open source, all of these things. They all kind of align and reinforce each other's postulations that we're about 80% open source at a minimum. And this is across not just our own applications; even cloud providers are heavily made up of open source, and it can be a problem. Why? Because open source libraries and dependencies have vulnerabilities. Okay, we all have vulnerabilities, but these ones are published. These are public, and that is as useful to us as it is to the baddies. Now they get nice little identifiers called CVEs; if you're not familiar with a CVE, that's Common Vulnerabilities and Exposures. They get a fancy little ID with the year in it, so you can see how old it is, that's nice. The Common Vulnerability Scoring System, CVSS, tells you how risky it is. It's actually a pretty complicated calculation out of 10. I'm not gonna get too into it; that's a whole other presentation we could get into. But the reality is they're out there. Now, taking a step back, and there's gonna be a few of these as we go through this presentation; I like to branch off my production main and tell a few extra stories along the way. Why open source software? Why are we so into it? The promise that when a solution exists, we don't need to reinvent the wheel. This is a big deal.
This is why most software is about 80% open source. And doing that reduces development costs, accelerates development, and is pro innovation. Plus, and we don't talk about this too often, open source is like crowdsourced quality. The good packages, the good dependencies, the good projects rise to the top. The world clicks the star, you can see an example there of 4,400 stars on that project, and says, hey, I like this, I wanna follow what happens to it. And that's good for us, because that helps us with some of the challenges. The first one being: if you're trying to solve a problem and you wanna do it with open source, there are often several options out there for you, and you don't know which one's the right one. So you're relying on this sort of thing to tell you, this is the one I'm gonna try. Now there are other things to consider besides stars. For example, how well is the code maintained? How many contributors are there? How many commits have been made in the past week? How well do they secure this? Hmm, we'll talk more about that, say, throughout the entire presentation. One extra one that maybe we don't think about enough is: does it meet my licensing obligations? Because open source packages all have licenses, and we often don't really pay attention to those licenses. In fact, open source packages use open source packages that use open source packages. They all have licenses. They all have vulnerabilities. And that's what we're gonna dive into. But first, a nice little quote from opensource.guide. I actually really like this, so I just put it in word for word. Open source is powerful. It lowers the barriers to adoption and collaboration and allows people to spread and improve projects quickly. It is really fantastic. And we're gonna talk about a great example of it shortly. But before that, let's get back on target: vulnerabilities. I'm a bit of a fanboy of really good open source bugs.
They get logos and they get cool names. If we go back to 2014, I worked for the company that found Heartbleed. Heartbleed was kind of the big one; it got a cool name and it got the first bug logo, I think. I know that they were almost more proud of the logo than they were of finding the bug, using a fuzzing tool. It was amazing. But then others followed: Dirty COW, Shellshock. You can see the logos over on the side there. And then we got a little bit lazy. We started calling bugs things like the runC vulnerability and the Apache Struts vulnerability. I mean, come on us, come on security. The Apache Struts vulnerability devastated Equifax, and we didn't even give it a logo. I'm a little disappointed in ourselves. I blame myself, actually. But then we got a little better. Last year we had Dirty Pipe, which was a nice little take on Dirty COW, and Log4Shell. All of these really catch the eye of the media. They catch the eye of security enthusiasts. They even catch the eye of developers, and they certainly catch the eye of baddies. Now, is there some way that I can learn more about these? By the way, this is another aside: while I was searching the internet for bug logos for this presentation, this came up. Now I don't know who BrandCrowd are, but they have some kind of logo generation system and they're ready. They're ready for the next bug. That made me laugh. The reality is these things are all published. Where are they published? Well, the National Vulnerability Database. And when I took this image, I learned something. There's also a database that's just been published, the Known Exploited Vulnerabilities Catalog. So if it wasn't bad enough that we make all the vulnerabilities public, we even have a catalog of all the ones that have been exploited. So if you're looking for some examples, well, I guess there you go. So what about my cloud providers?
What if they use open source, you ask? We've got you covered: the Open Cloud Vulnerability and Security Issue Database. You can go check out vulnerabilities in your cloud providers as well. How about that? Not too bad, right? What if there were already pre-produced, known exploits, because these things do get exploited. Are those public? Oh yeah, oh yeah, they are. The Exploit Database, fabulous. So this is all both exciting and alarming simultaneously, 50-50. Now, it looks like I'd have to put a lot of work in, right? It's not like there's a search engine where I can just go out there and search for vulnerable hosts. You know, it looks like there is. Huh, all right. So say I were to use this with Heartbleed: I search for the OpenSSL version just before the fix was applied, and I see, wow, people don't patch their stuff. And by the way, I took this capture last week. 4,000 hosts still haven't upgraded past that vulnerable version of OpenSSL. Okay, well, anyway, it's only 10 years old, but I digress. What if I wanted to find the exploit for this? I could go trawling through that exploit database. Oh, they have a search for that too, right? Yeah, all right. You get the idea. One of the biggest challenges with making cloud native software is that we're embroiled. It's become a standard that we're using open source, and we need to track it, and we don't track it well. Now, I'm not gonna stop right there. I'm gonna talk a lot about this, but I really wanted to highlight some of the reasons why we don't in a moment. First, I wanna highlight the fact that it's not just about vulnerabilities. It's about how combined misconfigurations and vulnerabilities lead to new attack vectors. What do I mean by that?
It means that when we're creating our applications, particularly in cloud native, we're often provisioning our cloud, we're provisioning our Kubernetes, we're building Dockerfiles, we're creating manifests. This is all happening as code. And then we're putting our application in there. So securing our application is just one small piece of the puzzle. If there's a problem, or a combination, for example, and we'll talk about it in a moment here, where multiple vulnerabilities can combine to create a unique attack vector, then we can be in a little bit of trouble. There's an example up there, where you can see there's a high severity CVE in a software stack used in public cloud. And actually there was another one recently, the SolarWinds cyber attack. It's been hard to avoid that, right? How does somebody get into a pipeline and start making modifications without anybody noticing? There have been loads of new innovations in that space, and I'll talk about them later on, but it's amazing how we're still learning in cloud native: how can we harden the inside as opposed to just the outside? And then finally, Log4Shell. How is it being exploited, and how do we mitigate damage? Well, one of the ways to mitigate damage is to make sure that every aspect of what you're provisioning, be that cloud security, Kubernetes security or application security, gets the same amount of attention, particularly when they're being used in combination. The attack surface for cloud native is complicated, extensive. These are understatements. An RCE, which stands for remote code execution, can be used to exploit, say, an overprivileged pod. An overprivileged pod can be used to exploit an overprivileged host. If I can get access to the host, then I can query the cloud metadata. If I can query the cloud metadata and get certain IAM details, I can get access to just about everything else. And it's not that hard to see how this can happen, because it does. Okay.
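That attack chain starts with a misconfiguration, which is exactly the kind of thing an IaC scanner can catch before deploy. Here's a minimal sketch, in plain Python, of the sort of policy check such a tool runs against a parsed pod manifest; the function and field choices here are illustrative, not Checkov's actual rule implementation:

```python
# Illustrative sketch: flag pod-spec settings that form the links in the
# RCE -> overprivileged pod -> host -> cloud metadata chain.
# This is a toy check, not a real scanner's rule set.

def find_pod_risks(pod_spec: dict) -> list[str]:
    """Return a list of risky settings found in a (parsed) pod spec."""
    risks = []
    spec = pod_spec.get("spec", {})
    if spec.get("hostPID") or spec.get("hostNetwork"):
        risks.append("pod shares host namespaces")
    for container in spec.get("containers", []):
        sc = container.get("securityContext", {})
        name = container.get("name")
        if sc.get("privileged"):
            risks.append(f"container '{name}' runs privileged")
        # Kubernetes defaults this to allowed, so absence is itself a risk.
        if sc.get("allowPrivilegeEscalation", True):
            risks.append(f"container '{name}' allows privilege escalation")
    return risks

overprivileged = {
    "spec": {
        "hostPID": True,
        "containers": [
            {"name": "app", "securityContext": {"privileged": True}}
        ],
    }
}

for risk in find_pod_risks(overprivileged):
    print(risk)
```

The point is that each finding on its own looks minor; it's the combination, in the context of where the pod runs, that builds the attack path.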
Here's a real clunker: budget. Budget could be a problem. This is another reason why we use open source. It lowers the barrier to entry. You can try things out. Maybe there's a paid version of it, maybe there's not, but being able to try things at your own pace is actually really critical to the success of any security program, I would argue. I think this was three or four years ago: a report came out, and I haven't seen anything quite the same since. It was called the CISO report, the four tribes of the CISO. It was a little unfair on CISOs, so every time I mention it I apologize on behalf of the authors, who I've actually met, because it's really more about the philosophy of security within an organization. Tribe one being security as an enabler. This is like our security Nirvana. They understand that money spent on security is money saved later on. Worst case scenario: security as a cost center. And it's great; if you just Google it, there's only one of them and you'll find it. Everybody falls into one of these camps, and usually we teeter between a few of them. So it's a really interesting article, and if you feel you're more in the security-as-compliance camp, give it a read, and that'll probably push you a little bit more over to the left, towards enabler, because you'll start to convince yourself that maybe you could reevaluate compliance. Compliance is a challenge. It's actually also an enabler. There are two sides to so many coins. We don't want these compliance standards to exist. They exist because we are, frankly, terrible at writing software by default. PCI DSS doesn't exist for payment systems because we wrote awesome payment systems. It exists because we wrote terrible ones, and somebody had to slap us on the wrist and say, hey, this is the bare minimum. Okay? You're talking about people's money. Or HIPAA: people's medical data.
These are all important things. These standards didn't exist originally; they exist because we did things badly. However, compliance as an end goal is a continuation of doing things badly. There's a great article, actually, recently, that asked: if I'm CIS compliant, how vulnerable am I still? Google that, DuckDuckGo that, whatever your search engine preference is, and you'll find out that even meeting the bare minimum of compliance doesn't mean you're actually secure. So compliance is not an end goal. It's just a beginning. But it can be used as a bit of, let's call it, security theater, and that's a no-no. Licensing, I alluded to this earlier: licensing is complicated. So a quick one on what an open source license is. They're based on the open source principles, and I've got a link down there, opensource.org, that's pretty easy to remember. And even if you don't go for the full link, you'll find it. It's a bit, I don't wanna call it idealistic, but let's see: free distribution, open source code, the ability to create derived works at will, integrity of the source code, attribution, no discrimination. This just sounds wonderful. This should apply to everything. This sounds fantastic. But there are, let's say, versions of these principles that have been released, and they're roughly described in three categories: permissive, weakly protective, and strongly protective. And you can use the wrong category very easily by mistake. Quickly going through these. These are your favorites: permissive. This means you can modify, redistribute, give credit, of course, but you can even re-license it within your own commercial product. This is fantastic. MIT, Apache: these are some examples of the licenses you wanna look for. And if you're wondering where a really good example of this is, maybe you've heard of Kubernetes. Apache License 2.0. Check it out, these are the permissions, these are what you can do with it. Pretty interesting, right?
Kubernetes is a fantastic example of where a company owned some internal IP and realized that open sourcing it, in all its complexity and all its glory, was going to be not just an advantage for them, an enabler, but an advantage for the entire world, as it's become the de facto orchestrator for containers. In fact, people refer to it as the operating system of the cloud. So there's a fantastic example. Weakly protective licenses: these have kind of got their legs on either side of the fence. They allow proprietary modules to use derived projects, but you've kind of got to tread lightly through these. Mozilla, LGPL and Eclipse are examples. And then last but certainly not least, strongly protective: derivative works must remain under the original license. So if you find something that's GPL, you have got to continue being GPL. This could be destructive for companies who don't want to expose their own source code, because if you're open source, you have to continue to be open source. And this happens. Here are a few examples of where it really did happen. The Free Software Foundation sued Cisco. Why? Because Cisco acquired Linksys, and Linksys was using open source code, and that open source code had a GPL license. Bad things happened. So you really have got to be careful and be tracking those things. Now you might think to yourself, I paid for a scan six months ago, so I know what my licenses are. Well, the second warning is: do you? That one I threw up at the beginning, Elastic's CEO reflecting on Amazon: Elastic changed their licensing because, well, they didn't agree with a certain use case. MongoDB changed its open source license. There are examples where really, really commonly used open source packages change their license, and you upgrade and suddenly you're potentially getting caught off guard. So having more consistent monitoring of this is very, very important. Okay, we'll come back to licensing a little bit later, perhaps. Culture. Yeah, it's important.
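Before we get into culture, the three license categories can be made concrete with a toy sketch. This is illustrative Python only: the mapping below covers just a handful of well-known SPDX identifiers, and real license compliance needs a proper SCA tool and legal review, not an eight-entry dictionary:

```python
# Toy classifier for the three rough license categories discussed above.
# The SPDX-ID list is deliberately tiny and illustrative, not exhaustive.

CATEGORIES = {
    "MIT": "permissive",
    "Apache-2.0": "permissive",
    "BSD-3-Clause": "permissive",
    "MPL-2.0": "weakly protective",
    "LGPL-3.0-only": "weakly protective",
    "EPL-2.0": "weakly protective",
    "GPL-3.0-only": "strongly protective",
    "AGPL-3.0-only": "strongly protective",
}

def review(dependencies: dict[str, str]) -> list[str]:
    """Flag dependencies whose license could force opening our own source."""
    warnings = []
    for package, license_id in dependencies.items():
        category = CATEGORIES.get(license_id, "unknown")
        if category in ("strongly protective", "unknown"):
            warnings.append(f"{package}: {license_id} ({category}) - review before shipping")
    return warnings

# Hypothetical dependency list for illustration.
deps = {"left-pad-ish": "MIT", "some-router": "GPL-3.0-only", "mystery-lib": "WTFPL"}
for w in review(deps):
    print(w)
```

Notice the "unknown" bucket gets flagged too; as the Elastic and MongoDB stories show, a license you classified six months ago may not be the license you're shipping today, so this check belongs in continuous monitoring, not a one-off scan.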
Things change a lot in the internal culture of a cloud-native, cloud-heavy, cloud-first, whatever-you-want-to-name-it environment. That's a big deal. You know, back when we used to go into the office, you'd see somebody walking in and a tell-tale sign would be a heavily stickered laptop. And you see this and you're like, okay, all right, here we go. That means we're probably a microservice environment. We probably have isolated teams creating unique technological wonders that will be united later on as an application. The technical stack will be completely in their hands. Potentially the security solutions will be in their hands. This means you're dealing with DevOps. Yeah. And I can't quite remember what DevOps stands for, but I think it means Don't Ever Offer Painful Security. I think that's what it stands for, but if it doesn't, it does now. But we can deal with that. How? Well, we want early detection of vulnerabilities and licensing issues. We want an inventory, or a bill of materials, a term that we'll get to later, of everything that we're using. And we want to get this as early as possible. We want to know what vulnerabilities are there. Which is to say, we want to be friendly with the DevOps teams and the developers. We want them to be part of our security solution. So we want everything we do in a security strategy to be developer friendly. We want the developers in the room when we're making decisions. We want our solutions to be context aware. The same vulnerabilities that matter for one team really don't matter to another when we're talking about microservices; I've got an analogy for this one later on. Context is really quite critical to make sure we can prioritize the things that we find. Otherwise we'll just be swimming in a sea of noise. Everything must be cost effective. That feels like a no-brainer, so I'm not going to sit on that one. And quiet also means actionable.
And finally, if you didn't catch on, developer friendly, also really good. Okay. So you're probably thinking, all right, this is where he pitches the shiny new product feature. No, I will be talking about open source. So let's move on. What I want to talk about is, yeah, this is exciting: software composition analysis. Whoo! Yeah. All right, you're on the edge of your seats now, I bet. It's pretty special when you say that. People sit up and listen. Developers go, wow. Let's make this even more exciting. Let's define it. SCA, which is the short form for software composition analysis: visibility into the open source components and libraries being used and incorporated into the software that dev teams create. Okay, it sounds like what we want. It can help to manage security and license related risks. You're darn right it can. And ensuring that open source... data breach... intellectual property... blah, blah, blah, blah. Right? Yeah. It is not the most exciting subject, I'm going to admit that. But in an open source heavy world, this is the low hanging fruit for your security, provided we do it right. So let's say we're going to turn on the SCA, right? This is where the dog comes in. This is what it can feel like. If you've got a fully mature application, or applications that are out there at scale, and you haven't done any of this and you just decide to analyze it, you might be thinking, or if you've used SCA in the past, because SCA solutions have existed for over 15 years, you're probably thinking: noise, inaccuracy, and inflated by dependency bloat. Yeah, you would be correct. SCA needs context. We need to take a new look at it. We need to put it in the context of where it's being deployed. What does the infrastructure look like? What are these applications doing? Are they outward facing or are they inward facing? All of this information is absolutely critical to making decisions such that we're not wasting our time. So we need context, right?
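At its core, the SCA mechanic just defined is simple: read the declared dependencies, match name and version against a vulnerability feed. Here's a minimal sketch of that loop; the advisory data is made up for illustration (real tools pull from feeds like the NVD or OSV), and the version handling is deliberately naive:

```python
# Toy SCA scan: match declared dependencies against an advisory feed.
# The advisory below is invented; real scanners consume NVD/OSV data.

ADVISORIES = {
    # package name -> (first fixed version, advisory id)
    "examplelib": ((1, 4, 2), "CVE-2021-0000"),
}

def parse_requirement(line: str) -> tuple[str, tuple[int, ...]]:
    """Split a 'name==x.y.z' requirement into name and version tuple."""
    name, _, version = line.partition("==")
    return name.strip(), tuple(int(p) for p in version.split("."))

def scan(requirements: list[str]) -> list[str]:
    findings = []
    for line in requirements:
        name, version = parse_requirement(line)
        if name in ADVISORIES:
            fixed_in, advisory = ADVISORIES[name]
            if version < fixed_in:  # tuple comparison: naive but works here
                v = ".".join(map(str, version))
                f = ".".join(map(str, fixed_in))
                findings.append(f"{name} {v}: {advisory} (fixed in {f})")
    return findings

print(scan(["examplelib==1.3.0", "safelib==2.0.0"]))
```

The hard parts, and the reason mature tools exist, are everything this sketch skips: transitive dependencies, version-range semantics, and the deployment context that decides whether a finding actually matters.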
And the first type of context we need is actually cultural context. Where is it all going? Can we make it so that the information is not overwhelming the people who are involved? And of course, technological context. Is it easy in the technological environment in which it's being placed? Is it being placed in this cloud versus this region, attached to these resources? This is all very, very important. The good news about culture for SCA, I think, is that from an analogy perspective, we already do DevSecOps. I don't know if you've heard this analogy before; maybe if you've seen me talk before. But when we're going to the airport, we're doing DevSecOps. It's that simple, right? We're in control of a container. See where I'm going with this, right? It's analogy heavy. And we put all our stuff in it. These are the things we need. Sometimes, if you're like me, I have a little smaller bag that I put inside my big bag. This is the no-brainer stuff. This isn't like, do I need a collared shirt? Do I need t-shirts? This is the stuff I always bring, my toiletries, et cetera. These are all going in there. And then I start loading stuff in afterwards. And then I'm happy. And you get to the airport and they ask you, do you know what's in your bag? Like, don't you trust me? Do you know who I am? Yes, I built my container myself, thank you very much. Has anyone given you anything to carry? I'm 80% open source. Any dependencies in my bag? I don't think so. My partner did put some spare underwear and socks in there when she took the bag away; I didn't see it, but I'm not going to tell you that. Any dependencies on my dependencies? I don't even know what you mean there. So yeah, yeah, I'm good. And then they ask you if they can scan your bag and they want you to put it on a conveyor. They just asked you the questions. Don't they believe me? Geez. So the bag goes on the conveyor and that's fine. You go through the host scanner. You've got your arms in the air.
I don't know if you can see me on the camera there. It scans you and you're an expert. You've got your belt off. You've got your little baggie of toiletries out. Laptop's out. You get it all in two trays. You go through and you're clean, right? You give a look to the person next to you who's getting frisked: amateur. And you walk on, you put your trays up on the conveyor and you're waiting, you know, and you're taking a look at your bag, because you know you're cool, right? This container is clean. Then why are you leaning on the conveyor? You just want to see the X-ray. I just like to see the X-ray of my bag, that's all I want to do. There's nothing weird going on here. Until the X-ray shows up and you're like, I don't remember that. This is actually a true story. Not at this actual scale, obviously; I would probably be in prison now if that were the case. I had a spark plug wrench in my bag one time, because I rode a motorcycle and I was doing local travel and I just rushed and grabbed the same bag and went to an airport. And I don't know if you've ever seen a spark plug wrench: it's got this kind of three-dimensional thing to it that looks bad on a scanner. I went through the scanner, I'd answered all those questions, and the exact same thing happened to me. And then I had to deal with, well, I don't know, the penetration tester, I guess that'd be the equivalent here. But yeah, you don't want that. You don't want any of that. You don't want to deal with security on the right if you're someone who lives on the left. You want to go with velocity through that pipeline and you want to be in the sky as soon as possible, deployed. So this is something that we do. It's a little like safety and security. And these kinds of analogies help developers go, you know what? Doing everything in advance does make that go faster. This is security as an enabler.
I hate it when things break in the build, in the pipeline, like what just happened to me here, and having experienced it firsthand, it's a thing. All right, second analogy. This is how we used to write software, particularly way back in the day, but even when we became internet facing. We used to write big Java apps. They were very capable. They would be outwardly facing, and then they would have this kind of security: we'd build big walls, we would have a moat, we would have a drawbridge. Maybe when we thought we were super futuristic, we'd have our WAF. There they are, standing in the doorway. Hey, what's this big wooden horse doing in here? Come on, go inside, no problem. That was relatively easy to characterize. Okay, well, it looks secure. It's got little windows. It's got some things up top where I can put some defenses. I think we're good. And then we went to microservices, and our applications started to look a little bit more like this. That's a complicated attack surface if I ever saw one. If I start looking for vulnerabilities, I'll find them, oh, guaranteed. Nobody's building a perfect house, particularly when each house has been built by a different team. What if I look over here? Well, these windows aren't very secure. They're not even double glazed, and they're very close to the edge. I don't really like that, once someone can get over the wall. There's that moat we've got over here, but it doesn't even go around the whole place, and it just appears that there are trees. What if I can get from there into the tree, over onto that? I don't know, this doesn't look good. This kind of weird elevator shaft seems like an easy way in, but it's pretty far back. And then way over here, well, this is a completely uncovered roof, and it's right in the center. It's highly vulnerable. It's wide open. And this is a very rough analogy. I didn't even mention the fact that there's a river running around the back of this place.
Who knows what kind of vulnerabilities we can find there? But this is more what the attack surface of a cloud-native application looks like. And the score of each of these vulnerabilities, in this case, is kind of irrelevant. It's more about their location. And this is what I mean by infrastructure- and context-aware. The outward-facing ones are the ones we probably want to look at and fix first, whatever their scores. All right, I told you I was gonna do some branches, and I'm gonna come back into main in a second. What is Chekhov's Gun? This is an analogy for a problem we have with open source within cloud-native applications, and why we need to be better as developers in particular, and as DevOps teams, when we're creating these applications. If you're into literature, you may have heard of what Chekhov's Gun is. If not, the analogy in software is the dependency bloat that I discussed earlier. Developers might add dependencies. I know I do this, because I am guilty. I work on, and we'll be seeing it in a second, the open source project Checkov. And when you're testing things out and you're making those decisions about open source, you might add things to your dependency file that you then don't actually end up using, because you replace it, or you refactor and you just don't need it. This happens. Now, sometimes they get caught by bots. Sometimes they get caught by checks. Sometimes they never get caught at all. And it often can lead to confusion when you're applying security tools and security automation to look for vulnerabilities, as you can imagine, because if you add something really vulnerable and you never use it, well, that's weird. Now, the Chekhov's Gun concept is named after Anton Chekhov. Anton Chekhov was a writer, a pretty famous one.
And when instructing on how to write good screenplays, or plays generally, he would say that if someone walks into a room in the first act and hangs a shotgun on the wall, the shotgun had better have gone off by act three. Otherwise, what's it doing there? You've distracted the audience. They're constantly thinking about it. It's made a false promise that something exciting is going to happen. And that's what a Chekhov's Gun actually is. And you can see I've got an example there. Bond movies used to do this all the time when they got overly keen on Q giving out all these gadgets, and then he actually wouldn't use them all. And you're like, what happened to the laser pen? He didn't even use it. Now people use it in movies as a device. Bruce Willis, at the beginning of some film, will put a weapon into a drawer; two hours later, you've completely forgotten about it, but it becomes the thing that saves his life. So they do the opposite of it. Look for it now; now that I've told you about it, you're going to start spotting it in movies all the time. So this is the idea: avoid dependency bloat. Because the reality of it is, this is what dependency bloat looks like: a couple of guns on the wall that we just don't need anymore. But how do I tell which ones are the ones we're using? Which ones are the vulnerable ones? This is not ideal. Some of these are Chekhov's guns that do get fired, and some are just extraneous, and if I'm scanning all of this, then I'm just looking at noise. This is a problem. Now this is a good lead-in; see what I've done with the wordplay there. Chekhov, Checkov, anyway. This is an open source tool. You can go download it free. checkov.io, go check it out. I love open source. I think it's a great way to get started with, not just get started, to be a critical part of your security strategy. Checkov does a lot.
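Coming back to the dependency-bloat idea for a second, here's a rough sketch of how you might spot the guns on the wall that never get fired: compare what a project declares against what it actually imports. This is illustrative Python only, not how Checkov or any real bloat detector works; in real projects, package names and import names often differ, so treat this as the idea, not a tool:

```python
# Toy "Chekhov's gun" detector: declared dependencies that are never
# imported anywhere in the source. Illustrative only; real package names
# and import names frequently differ (e.g. PyYAML vs import yaml).
import re

def declared_packages(requirements: str) -> set[str]:
    """Pull package names out of a requirements.txt-style string."""
    names = set()
    for line in requirements.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            names.add(re.split(r"[=<>~\[]", line)[0].lower())
    return names

def imported_modules(source: str) -> set[str]:
    """Collect top-level module names from import statements."""
    pattern = re.compile(r"^\s*(?:from|import)\s+([A-Za-z_]\w*)", re.MULTILINE)
    return {m.lower() for m in pattern.findall(source)}

# Hypothetical project: one dependency left over from prototyping.
requirements = """
requests==2.28.1
flask>=2.0
leftover-experiment==0.1
"""
source = "import requests\nfrom flask import Flask\n"

unused = declared_packages(requirements) - imported_modules(source)
print(sorted(unused))  # -> ['leftover-experiment']
```

An unused-but-declared package still shows up in your SBOM and still generates vulnerability findings, which is exactly the noise problem: you end up triaging CVEs for code you never even call.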
It's been around maybe two years now. It started off just scanning Terraform, and that's all it did. It had about 500 rules. Now it's way beyond 1,000, and that's an understatement. It scans for misconfigurations in any infrastructure as code, any declarative infrastructure as code, I should say. And there are some examples along the bottom there: Terraform, CloudFormation, anything in the Kubernetes family, like straight-up YAML manifests, or Helm, or Kustomize. Bicep, which is so much better than ARM for Azure, by the way, I think. And then for the major cloud native languages, it also scans the dependency files. Why? Amazingly, you don't think of those as infrastructure as code, but they kind of are; they're declarative. They're saying, this is what's going into my software, and there's no reason we can't look at those, take advantage of all of this public information about vulnerabilities, and start showing you those things. And what's even better, we can do it and bring that context that I was talking about, which is kind of nice, right? And we can do it early. So of all the things that I put on the screen earlier, this is a great step in the direction of doing something like that. And it's something you can get developers to do on their own. They can pull this information, as opposed to having a security method pushed upon them. So I'm gonna use an example here about firing Chekhov's gun: how big is the problem? Just a little intro. I'm not gonna do it live, because I didn't trust the platform to work live, so maybe that's my own lack of faith. I'm gonna use log4j as an example, and I'm using a platform called deps.dev. I really like it; it's a Google thing. So here's a screen capture from deps.dev. You can go to it and just type in log4j, it brings up the package, and you can choose the version. I chose 2.14.1. Why? Because I know that has the first instance of the Log4Shell vulnerability.
And there it is, second in the list: remote code execution in log4j. Fabulous. Now, we could do all of this manually, and this is a good example of why you don't want to do it manually. In the tool I can see there are dependencies and dependents. Well, let's take a look at dependencies. How complicated is log4j? Yeah, it's complicated. These are all of the open source packages that log4j depends on. So if you've got an SCA tool that says you had log4j, okay, did it also tell you you had all of these? How deep did it go to tell you what's bad about what you've got? Sometimes tools only report things that have vulnerabilities, which is a great way to filter out noise, but if you're looking at something that generates a software bill of materials, it's worth asking how comprehensively it does it. Now I'm gonna flip this a little bit and look at dependents. I really like this because it's the reverse. There's no graph here, but you can see how many open source packages directly use log4j. A lot. You can see why it made such a splash when it was found as a vulnerability. And indirect dependents, think of it like the Kevin Bacon factor, if you've heard of that. This is two levels, a bacon factor of two, two away: 1,744 indirect dependents. So what I did, and believe me, I only did this once: I scrolled down until I found the direct dependents, and I picked one called jena-core. I looked up what jena-core does. It's a framework for semantic web applications. Where I found it being used, it was being used for a tool-parts database. So it allows you to make certain kinds of queries, and I was like, okay, that makes sense. And then you log queries, and that all makes sense. And indirectly, I noticed: look, remote code execution. So two levels down, there was log4j, and I thought, okay, that's pretty good actually. That is a transitive dependency.
That's a dependency of a dependency, and it still found the vulnerability. Amazing. But there were still indirect dependents, so I went up two more levels. Now I'm at owl-query, part of an open source project called openCAESAR, which is something that tracks parts and things, right? owl-query uses jena-core, which in turn uses log4j for its logging. So now I'm really far away from the vulnerability, but because it's a query mechanism, log4j is still there. Now, notice two down (by the way, I'm keeping the versions intact, so this dependency chain is correct): log4j 1.2? That's different. That's weird. 9.8, critical. What happened to my 10? So even using this, I've lost track of a very critical vulnerability that's being used by the openCAESAR platform in a particular version at a particular time. Now, if I go back down to the graph, I can see, sure enough... I realize the writing's pretty small here, so I apologize for that; it's very difficult to make it really big. What that says is log4j 1.2.17. That's a pretty old version, but it's right there at the surface of openCAESAR. Now, if I saw the Log4Shell vulnerability out there, and I looked at my bill of materials and my vulnerabilities and saw log4j there, I'd be like, aha, I've got it. And there's an SQL injection too. Well, this is great, I'm gonna upgrade my log4j and the problem is solved. Is it? Well, I am using jena-core, and if I look at jena-core, I'm still using log4j. Here's an example where I'm actually using two different versions of the same package, both vulnerable in different ways, but the only one I saw was the top one. That's a really important lesson, and it was so easy to find. I didn't pore over all these dependencies to create this presentation; it was found almost immediately. Okay, so let's talk about what this really means at scale.
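The two-versions lesson is easy to sketch in code. Below is a small, hand-built Python model of a dependency graph mirroring the openCAESAR, owl-query, jena-core, log4j chain described above. The package names and versions match the talk, but the graph itself is hypothetical and hand-written for illustration, not fetched from deps.dev.

```python
# Hypothetical dependency graph: each (package, version) maps to the
# (package, version) pairs it depends on directly. Hand-built for illustration.
GRAPH = {
    ("opencaesar", "1.0"): [("owl-query", "1.0"), ("log4j", "1.2.17")],
    ("owl-query", "1.0"): [("jena-core", "3.0")],
    ("jena-core", "3.0"): [("log4j", "2.14.1")],
    ("log4j", "1.2.17"): [],
    ("log4j", "2.14.1"): [],
}

def transitive_closure(root):
    """Walk the graph depth-first, collecting every (package, version) reached."""
    seen = set()
    stack = [root]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(GRAPH.get(node, []))
    return seen

def versions_of(package, root):
    """All versions of one package anywhere in the dependency tree."""
    return sorted(v for (p, v) in transitive_closure(root) if p == package)

# A surface-level look sees only log4j 1.2.17; the full walk finds both versions.
print(versions_of("log4j", ("opencaesar", "1.0")))  # ['1.2.17', '2.14.1']
```

Upgrading only the surface-level 1.2.17 entry would leave 2.14.1 in place deeper down, which is exactly the trap described above.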
The sources of vulnerabilities are very often deeper than surface level, and the impact in the environment matters. If you look over on the right, I have all these dependencies with all these vulnerabilities, and you can see there are multiple layers and multiple packages involved, and the application of these things is critical. You're using Terraform, you're using Kubernetes, you're generating all this via different CI/CD workflows. You're even using containers within CI/CD workflows, which means the CI/CD workflows themselves have dependencies on open source that could be vulnerable. And if we're not combining all of this and using this knowledge, it becomes either noise that we ignore, or something that leaves us vulnerable, or just something we spend a lot of time on. It's all bad, really. But just being aware that we can do this, and can be better at it, is a really critical thing. All right, here's a quick look at the open source tool. I'm gonna try not to run it because I realize I'm pressed for time. Checkov, as an open source tool, runs on the command line. You just run checkov and we're good to go. You can see the example: I can just run checkov -h and it gives me help. But it also integrates into JetBrains and VS Code, and it's pretty easy to set up. I just go here, you can see I've got Checkov installed, I go back here, and it scans just about anything, as I said, in terms of infrastructure as code. So if I'm so much as looking at something like my pom.xml, you can see Checkov down here, and if I hover over it, I can see that I've actually added a dependency that pulls in log4j. This is really quite cool. What I didn't count on is that it's found something else here: a few other issues that are high. Not too bad. I'm cool with that.
But while I'm in the same application environment (this is the developer perspective), I might just go look at my Dockerfile and ask: did I do my Dockerfile right? Because I'm going to pack this all into my Dockerfile. And I can see, oh no, the Dockerfile is going to run as root. That's really quite bad. Okay. Well, if I look at the Kubernetes manifest, the manifest is actually making reference to a built version of that application, down here. So there's a real abstraction between what this means and what this means. I can hover over and see the annotations Checkov has given me here, and right away I can see that I've not got the most secure manifest. I haven't done some of the low-hanging fruit of Kubernetes security, let alone some of the earlier things that are really bad. I don't have a seccomp profile set, so if for some reason I do have log4j and someone can get remote code execution inside that container, there's already a way to break out to the host. If I scroll down, I see some of the other things: image tags should be fixed. Okay, I've got some bad practices. I can see I've got a high vulnerability here: container should not be privileged. What on earth am I doing there? And if I scroll a little further: hey, it's taking a look at my reference to the image, and it's included vulnerabilities from the image. So I can see that I have all these things in my infrastructure, and I'm referencing a third-party image that's showing me I've got log4j. What's also interesting, if I scroll down one more: there's a critical vulnerability in Apache Tomcat. How did it figure that out? Well, I'll tell you how. Tomcat is the base image of my Dockerfile, and that has a vulnerability. And I got all of that in one shot. Now, I can do it on the command line too; I did mention that I can see all my vulnerabilities.
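The manifest findings mentioned above (missing seccomp profile, privileged container, unpinned image tag, running as root) all have well-known fixes. Here's a hedged sketch of the relevant portion of a Kubernetes Deployment spec; the names, image, and tag are placeholders, not the manifest from the demo, and it only illustrates the kind of hardening those checks point at.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                      # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo-app}
  template:
    metadata:
      labels: {app: demo-app}
    spec:
      containers:
        - name: demo-app
          # Pin the tag (or better, a digest) so "image tags should be fixed" passes.
          image: registry.example.com/demo-app:1.4.2
          securityContext:
            privileged: false              # container should not be privileged
            allowPrivilegeEscalation: false
            runAsNonRoot: true             # don't run as root
            seccompProfile:
              type: RuntimeDefault         # apply a seccomp profile
```

With a seccomp profile and a non-privileged, non-root container, even a remote code execution inside the pod has a much harder time breaking out to the host.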
And actually I can see the vulnerability. I can see log4j, I can see my Tomcat issues, all of them. And in spite of all these vulnerabilities, I like that I'm being told: you can knock all of these on the head just by fixing with this one version. I have a question, and I'm gonna answer it right away because it's an easy one: the editor I'm using here is Visual Studio Code, but it does work with PyCharm. So thank you, Paul. So that was a very, very quick whip through how you can make use of Checkov. And yes, it will report as you go, from pom file to Dockerfile to YAML manifest, or even Terraform. If you're using Terraform to deploy CodeBuild and you're making reference to containers, the exact same thing will happen. It will tell you what you're doing wrong. So this is where we're trying to take a step forward in the way we do SCA: making combined scans of the things we know we're going to deploy, and where we're going to deploy them, in what context, by bringing you vulnerabilities in the context of your infrastructure. Not too bad. I really like it; I'm a big fan. And we only released it this week, so you're the first I've ever told about it. That's pretty exciting stuff. All right, a quick recap, some definitions, and some takeaways before I open the floor for any more questions. That was a great question, Paul, thank you. A dependency means a package that you use: a separate piece of software that's imported. It could be a POM, it could be a package.json, it could be requirements.txt, it could be a Pipfile. It could be anything that says these are the things I want to use. A transitive dependency: this is more of a techie term used in security, and I don't really hear it in the development world, but it means a dependency of a dependency. And as we saw with the log4j one, this can go four levels deep.
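On the "fix with this one version" point: in a Maven project, the usual move is to pin the patched log4j version explicitly, typically via dependencyManagement, so a vulnerable transitive copy can't win the resolution. A minimal, hypothetical pom.xml fragment; 2.17.1 is the release that closed out the Log4Shell series of CVEs, but check the latest advisory before pinning anything.

```xml
<!-- Hypothetical pom.xml fragment: force a patched log4j-core everywhere,
     including for transitive dependencies, via dependencyManagement. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
      <version>2.17.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Note this only helps the log4j 2.x line; an old log4j 1.2.x dependency, like the one found at the surface of openCAESAR earlier, is a different artifact and has to be tracked down and replaced separately.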
You saw how deep those graphs are. They get a bit crazy. A dependent means somebody that's using you for something, going in the opposite direction. A vulnerability is a publicly logged security-related defect, often indicated by a CVE and a CVSS score. A misconfiguration is just a bad practice from a security perspective: it doesn't conform to security best practices and can result in unwanted behavior, data exfiltration, or lateral movement. So, key initiatives I encourage you to look into. SLSA, Supply-chain Levels for Software Artifacts: they've been releasing different versions of it. It's very new, but it gives you best practices at different levels as you secure your entire supply chain. Sigstore: this is an open source project for signing and verifying artifacts. I really love it. SBOM, I mentioned it: software bill of materials. Now, this is becoming a big deal in the USA because of the recent mandate for all software, really, to have SBOMs associated with it. It's a lightweight bill of materials. You can use Checkov to generate a bill of materials. I didn't even say that, but you can: Checkov is the open source software that can do SCA scanning and SBOM generation. OpenSSF, the Open Source Security Foundation, is the brains behind some of these initiatives, like the SLSA I was talking about. And of course, I'd be remiss if I didn't mention OWASP, because it's not just for web anymore: tons of free open source security tools other than Checkov. So your key takeaways: detect vulnerabilities early; embed SCA in the DevOps and DevSecOps pipeline to improve remediation rates; make sure you're generating comprehensive, multi-layer SBOMs for risk tracking and compliance; and when possible, do context-sensitive, infrastructure-aware vulnerability detection. Of course, it wouldn't be the same without watching your licenses as well.
Actually, I didn't show you this, but back here I'm showing the licenses on the command line as well, so you can get license detail too. Okay, this is the end, just on time I think. There's a blog linked below, in white, that talks a little more about what I said today. And that is the end. Thanks very much. Thank you so much, Steve, for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation YouTube page later today. We hope you join us for future webinars. Have a wonderful day.