Right, hello, and welcome to the Road to Zero CVEs: People and Technology. I am Andy from Control Plane, very proud to be involved with the CNCF's TAG Security as a co-chair, where we help to assure software as it rises through the CNCF towards graduation. I'm also CISO at OpenUK, where we try to prevent government foot-guns with advisory into the UK government, plus various other pieces of authorship. And this is my esteemed co-presenter.

Hi, I'm Mike Lieberman, co-founder and CTO of Kusari, a software supply chain security company. I'm also co-author of Securing the Software Supply Chain from Manning, and there's a QR code to the book with a discount. I was co-lead of the CNCF Secure Software Factory reference architecture, which was sort of the precursor to an OpenSSF project called FRSCA. I'm also a member of the OpenSSF Technical Advisory Council, a member of the SLSA Steering Committee, and a lead on CNCF TAG Security, working quite closely with Andy, and co-creator and maintainer of GUAC, an OpenSSF incubating project. If folks are interested in a little bit more on that one, I'm giving the talk after this one, where I'll be giving a demo of it. And now handing back off to Andy for the next few slides.

Wonderful, I'll try and move these out of the main line. Okay, so let's start with a definition. What is a CVE? It's something we didn't expect that does something bad: an untested code path with a security side effect, which is then known as a vulnerability. So, the problem space that we're looking at: supply chain attacks are on the rise, and these are introducing vulnerabilities. We have recent attacks from nation states around the world. We see dependency confusion and typosquatting on all of the major package repositories. These are intentional attacks that are sometimes difficult to detect or notice without enrichment and tooling to identify them.
And as a result of the proliferation of open source software, we have an increasing reliance on dependencies and transitive dependencies, and therefore on trust relationships with the authors of those pieces of software, which proliferate through governments and, of course, financial services. The problem space is significant. It is large because supply chains are complex and multi-dimensional. We can see this physically in the way that ships are built, the way that medicines are shipped, the way food is produced; and in software terms, in the way that software is composed, the artifacts that we use, the materials they're made from. And it's not just in terms of technology. The weakest link in many supply chains is the people, the socio-technical issues that arise from having humans involved. And if there's a second thing that computer security teaches us, it's that people are often the biggest threat, with or without intent, and hopefully they also have compliance and governance requirements. So we have all these ominous storm clouds of security brewing, and in step the governments. The US and the European Union took the first steps towards helping to secure the supply chain, but we're also seeing efforts throughout Asia to robustify and secure these things. Regulations and legislation are being developed, like the US Executive Orders and the European Union Cyber Resilience Act, and these are not just regulations: governments are actively trying to provide guidance to people building and utilizing open source software. Biden's SBOM ordinances, infrastructure as code mandates, and general secure software development guidelines are excellent examples of sensible and security-focused legislation. And the UK isn't doing too badly either, which is where I'm from; I'm from London.
We have guidelines for secure AI development out of the National Cyber Security Centre, with more than 20 of the world's leading organisations collaborating in that space, including Japan's NISC. Sadly, we were unable to stop the CRA, which recently achieved political approval, as of December the 1st, between the European Parliament and the European Council, and now goes into the process of becoming law. However, according to early reports, the draft, which is not publicly available, states that non-profit organisations that sell open source software on the market and reinvest all of the revenues in not-for-profit activities are also excluded from the scope. This seems like a very good thing. This might be the greatest victory for open source in Europe in the last few years, so it remains to be seen, when that draft is published, whether we have actually, through lobbying and pressure from the open source community in general, made that massive change to the wording of the legislation. Fingers crossed; I remain hopeful. So one of the pieces of legislation that will cause some concern for those of us that have been patching our open source software for the last 30 years is zero CVEs. Is this a technical impossibility? Zero days will always perpetuate; they are by their very nature undiscovered, and there's no effective way to entirely secure any piece of software, in the same way that it's not possible to entirely secure anything physical. So the path to zero CVEs is not a destination but a journey. Despite our best efforts, vulnerabilities and attacks will always be present. Fort Knox is technically impregnable, but an archetypal James Bond bad guy will normally find some way to burrow in underground or skydive directly in. And as we've seen with supply chain attacks over the last few years, even a secure supply chain can be attacked via dependencies outside of its immediate purview. So we do our best.
We practice good hygiene, we implement the necessary guardrails, and we effectively attempt to address issues that arise along the way. It is about building resilience, staying one step ahead of the ever-evolving cybersecurity landscape, and software supply chain resilience is key, as we all know: composition analysis, and an understanding of which materials comprise the packages that we ultimately build, because there is no such thing as secure software. Software that stands still dies, and software that is new has a higher chance of untested code paths, which as we know may lead to insecurity. We want to take advantage of new features; new features bring those risks of vulnerability once more, and companies that don't ship features will be beaten by less risk-averse competitors. Our poor security teams. We struggle with new software: vulnerability scanners bring variable results, as do SBOM generators, which sit in broadly the same category. Bills of materials have trust issues; VEX, which gives us a way to run insecure things in a secure manner, also relies upon the producer-consumer quandary; and we don't always know what's going down in production. So while vulnerability management is important, solely focusing on it can be a losing battle. The process of prioritizing vulnerabilities can be time-consuming and may divert attention from fixing underlying issues. It's a game of whack-a-mole for many larger organizations. It's crucial to strike a balance and allocate sufficient resources to address vulnerabilities promptly and efficiently, but burning out security teams with a constant stream of patches isn't an effective strategy, and the reality of vulnerability management is that a secure configuration and sandboxing can allow vulnerable software to run in production. This is the dream of VEX.
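To make that VEX idea concrete, here's a minimal sketch of how a scanner pipeline might use a VEX document to suppress findings that the producer has declared not exploitable. The structure loosely follows the OpenVEX document shape (`statements`, `vulnerability`, `status`), but the field handling here is illustrative, not a complete implementation of the spec.

```python
import json

def suppressed_cves(vex_json):
    """Return the set of CVE IDs a VEX document marks as not_affected."""
    doc = json.loads(vex_json)
    return {
        s["vulnerability"]["name"]
        for s in doc.get("statements", [])
        if s.get("status") == "not_affected"
    }

def triage(findings, vex_json):
    """Split raw scanner findings into (actionable, suppressed) lists."""
    quiet = suppressed_cves(vex_json)
    actionable = [c for c in findings if c not in quiet]
    suppressed = [c for c in findings if c in quiet]
    return actionable, suppressed
```

The point is exactly the one made above: a vulnerable-looking package can ship with a secure configuration, and VEX is the machine-readable way to say so; of course, you're trusting whoever signed the VEX statement.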
So it's not all bad. Despite the proclamations that software should be shipped without vulnerabilities, they are a fact of life, and with rapid innovation, new feature delivery brings bugs and vulnerabilities. However, this guy is still patching orc after many, many years, and the rapid pace of open source innovation requires security maintainers to patch, add features, and quash bugs, or a rival project may beat their adoption curve. So this isn't just related to the CVEs that are raised for your packages. It's a very broad stroke, and software CVEs often don't tell you if the way that you're using a package makes it vulnerable. So modern vulnerability assessment is: scan a package, panic, guess at its exploitability, raise an exception, and deploy the thing anyway; running through a Log4Shell exercise at a large organization is a prime example. As we know, everything is potentially vulnerable. Any software that can be run has the potential to be exploited. So how do we fix this techno-social issue and get to zero CVEs? It may seem unobtainable given the ever-evolving nature of cybersecurity and the ominous threat landscape, and it's also important to recognize that zero CVEs does not guarantee complete security. Insecure runtime configuration, meaning a misconfiguration, for example the classic open S3 buckets, an open relay from a trusted domain, leaked private keys, or an upstream compromise of a trusted producer or provider with remote access to your systems, the Okta breaches being a prime example, can compromise a system that is apparently secure with no public current CVEs. So we should view this legislation as a starting point, focus on developing a holistic approach, a more all-encompassing, data-driven, continuous improvement practice, and include that configuration as part of the supply chain. Consider the humble container.
A container with no CVEs and no immediate remote code execution vulnerabilities from web or public-facing sockets is theoretically secure. But again, zero days are a fact of life. What is apparently secure based on the CVE scanning landscape today does not mean it is secure tomorrow, or indeed that it's secure on that almost-inevitable zero day. So when one turns up, a remote attacker is able to compromise that container and gain local access. Now, that might be from bash-ing into a container, which we know we shouldn't allow. It could also be via a ROP chain. Once an attacker has a remote injection capability into software, there's really very little that you can do. So it's about building out defense in depth. The policies on that container are its last line of defense: we're talking Linux security contexts, AppArmor, LSMs, SELinux; we're talking network policy. These things come one step before intrusion detection. Preventive, detective, and remediative practices ensure that even with no CVEs we're still focused on ensuring depth of defense for when the inevitable, without being too doomy or gloomy, relentlessly occurs. I will now step down from that particular soapbox and hand over to Mike.

Thanks, okay. All right, so now that we're embarking on the path to zero CVEs, let's look at what we actually need to do. And the first step is really to build some guardrails, right? After panicking, you need to institute some guardrails to make sure it doesn't happen again. And so when we look at this problem, before addressing whether or not you're dealing with any CVEs, you want to understand: am I pulling in the right code in the first place? When I look at the landscape and I'm trying to figure out, hey, what packages should I download, am I downloading from the right place? What are the sorts of things I need to do there?
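Those last-line-of-defense container policies can be checked mechanically. Below is an illustrative sketch of the kind of baseline an admission policy (OPA, Kyverno, and so on) might enforce, written here as a plain Python check over a simplified container spec dict. The keys mirror the Kubernetes `securityContext` fields, but this is a toy gate, not a real admission controller.

```python
# Baseline hardening settings we expect every container to carry.
# Field names mirror Kubernetes securityContext; values are the
# hardened defaults, assumed here for illustration.
REQUIRED = {
    "runAsNonRoot": True,
    "allowPrivilegeEscalation": False,
    "readOnlyRootFilesystem": True,
}

def hardening_gaps(container):
    """Return the securityContext settings that deviate from the baseline."""
    ctx = container.get("securityContext", {})
    return [k for k, want in REQUIRED.items() if ctx.get(k) != want]
```

A container with zero CVEs but gaps in this list is still missing its last line of defense, which is the whole point of the soapbox above.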
So you need to start building a deep understanding of your supply chain. You start leveraging tools like Sigstore, SLSA, GUAC, policy engines, that sort of thing, to ensure the integrity and security of the software you're producing and consuming. And there's a great diagram on this from Isaac Hepworth; he's one of the co-chairs of the Supply Chain Integrity Working Group in the OpenSSF. When you start to really look at this problem, you start off with a trust foundation: who is doing what in my supply chain? Whether it's a bad actor or a good actor, who is doing what? Then you want to be able to ask: what sorts of claims are being made about my software? Who is building that software, and what are they claiming about how it was built? Who is claiming what dependencies, what depended on what, and what other information, for example VEX, as Andy brought up before? And then you want to take all of that data, aggregate it, analyze it, and build policy and insight on top of it, so you can actually action on it and understand: am I blocking the bad stuff from getting into my environment? Am I only pulling in stuff that I expect? All right, so it's also important here to recognize that not all practices will apply universally to every vendor or open source dependency. We will explore alternatives and suggestions tailored to different contexts, ensuring that even if you are not using specific frameworks like SLSA, you can still adopt secure practices when consuming software. This is stuff like the OpenSSF's S2C2F, a framework that asks: if you're not using the latest and greatest, what sorts of things can you still do to protect yourself, especially when you're ingesting software? And the ultimate goal here is to enhance security across the board.
Now, as we embark on the journey to zero CVEs, we're going to make a couple of assumptions here: you have some sort of SDLC; you're using some sort of CI system, it might not be the most secure CI system, but you're using something; you have a source repository; and you have some sort of package cache that you can use to actually do this stuff. The assumption is that you're not just building off your laptop today, copying it onto a USB stick, and plugging it right into a production server. You have something. All right, so when you start to look at the supply chain, and at how you're building your own software, you want to make sure that you're incorporating from trusted sources, and that those parties are trusted and reliable, all that good stuff. Now, ideally these packages should be signed using frameworks like Sigstore, using things like TUF, The Update Framework, to ensure their authenticity and integrity. If that's not feasible, you might want to consider building from source or performing checksum validation, that sort of thing. Trusting software providers requires a thorough assessment of their security hygiene practices. Ideally, software should be built using frameworks like SLSA. You should be scanning the repos with something like OpenSSF Scorecard. You should also potentially be using in-toto to validate the end-to-end integrity of how that software was written, how it was built, how it was packaged, and so on. And alternatively, you might want to consider building from source and conducting source and package scans when you can't do that sort of thing. Now, next up.
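The checksum-validation fallback mentioned above can be as simple as pinning a SHA-256 digest next to each dependency and refusing anything that doesn't match. A minimal sketch, using only the standard library:

```python
import hashlib

def verify_checksum(path, expected_sha256):
    """Compare a downloaded artifact's SHA-256 digest against a pinned value.

    Raises ValueError on mismatch so a build script fails closed rather
    than silently using a tampered or wrong artifact.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    actual = h.hexdigest()
    if actual != expected_sha256:
        raise ValueError(f"checksum mismatch: got {actual}, expected {expected_sha256}")
    return True
```

This is weaker than a Sigstore signature (a digest pins content, not identity, and you still have to get the expected digest from somewhere trustworthy), but it's a big step up from pulling unverified artifacts.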
So when we start to look at understanding your software ecosystem, it's crucial: at the end of the day, a lot of this is about understanding what's in your environment and what's in your supply chain. You want to be able to maintain an up-to-date software inventory. This can be achieved through tools like GUAC, which is now an OpenSSF incubating project. And you want to be able to look at what's running on what servers, using something like osquery. If those options aren't available, you might want to consider regular vulnerability scans, or leverage a configuration management database, which I know is a little old school for some folks. You want some way, really, of managing and tracking your software assets, and each of these things builds on the last. And a key note here: keeping track of everything is important, and only figuring out whether you're impacted when a major CVE is discovered is bad. If we go back to Log4Shell: you should not be asking "where do I have Log4j in my ecosystem?" when Log4Shell happens; you should already know where Log4j is in your ecosystem. And when a Log4Shell happens, you should be able to say, great, I know it's running on these servers, this is what needs to be updated. So next up, one of the final pieces: now that you have an understanding of what's in your ecosystem and your environment, you want to start establishing consistent policies and rules to actually action on what's in your environment. And this is really crucial in minimizing vulnerabilities; an ideal approach involves using policy engines like OPA or Kyverno, which query sources of truth like GUAC, in conjunction with things like artifact proxies.
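The "know where Log4j is before Log4Shell happens" point is really just an index lookup over your SBOM inventory. As a minimal sketch, assuming CycloneDX-style JSON SBOMs (a top-level `components` array of `name`/`version` objects):

```python
import json

def find_component(sbom_json, name):
    """Return the versions of a named component present in a
    CycloneDX-style SBOM document (empty list if absent)."""
    doc = json.loads(sbom_json)
    return [
        c.get("version", "unknown")
        for c in doc.get("components", [])
        if c.get("name") == name
    ]
```

In practice you'd run this query across every deployed workload's SBOM (which is essentially what GUAC does at scale, with proper package identifiers), so that on disclosure day the answer is a lookup, not a scramble.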
Now, as an alternative here, you still need to have something like an artifact proxy. You really want to avoid the situation where, in production, you're just pulling software down from the internet with little to no verification, because who's to say you're downloading from the right place, or downloading the right version, that sort of thing. But what I will say is: we have manual enforcement up there, and that sort of thing doesn't scale. If you're not doing anything else, sure, do something like manual enforcement, where you're approving each individual package by hand and saying, yes, this is allowed to go through. But that really does not scale, so try to start building those automated rules quickly. And really, you just don't want to blindly come in and address issues as they come up; you should be building processes around this. All right, so I know this is a little bit of a dense diagram, and it's just an example here, but basically, when we look back at a reference supply chain pre-build, we'll focus on gathering supply chain metadata from all available and trusted sources. This may include things like software bills of materials, and additional documentation such as VEX docs, for understanding the real-world impact as well as where there might be exceptions. Another thing that's mentioned here is GUAC, and we'll take a look at that in a moment. But all this information coming out of here, the SBOMs, the VEX documents, the SLSA provenance, et cetera, really needs to be analyzed to understand whether there is actually a risk; just because something has a vulnerability doesn't mean it can be exploited, that sort of thing.
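The automated rules described above, the ones that replace approving each package by hand, can start as something very small: an allowlist of pinned package versions that your artifact proxy consults on every pull. A toy sketch (package names and versions here are hypothetical examples, not a recommendation):

```python
# Hypothetical allowlist an artifact proxy might enforce: only these
# reviewed, pinned versions are admitted into the internal cache.
ALLOWED = {
    "log4j-core": {"2.17.1", "2.20.0"},  # patched versions only
    "guava": {"32.1.2"},
}

def admit(package, version):
    """Allowlist gate: admit only pinned, reviewed package versions."""
    return version in ALLOWED.get(package, set())
```

Real policy engines like OPA or Kyverno express the same idea declaratively, and query richer sources of truth (GUAC, provenance, VEX) instead of a hard-coded dict, but the enforcement shape, deny by default, admit on evidence, is the same.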
In addition to this, when you start to look at things like scanning, you also want to ask, generally: are there any code smells? Because once again, it's not just about CVEs and the known vulnerabilities; it's about really understanding whether there is something suspicious inside your environment as well. Okay, so now, hey, let's start a project. What does that project look like? As we talked about before, it's about people and technology. Without people, you don't have any code, you don't have any software to begin with; and if you just have the people, obviously there's no technology to actually do anything with. So when it comes to the people, you really want to make sure you have something akin to a DevSecOps culture, where security is a key component of development, not just a gatekeeper. Security should not purely be the thing at the end saying yes or no; it needs to be baked into the design. You really need to have openness in communication: security issues, and things like CVEs sneaking their way in, are often the result of poor communication. How many folks have been told, hey, there's a CVE in that library, when you're in front of a change approval board, and you're like, why did you not tell me that three months ago when I ingested the library? And you also want to have clear responsibilities. We're all human, and developers are being asked to do more and more each and every day: to understand everything about configuration, whether it's OpenTofu, front-end development, back-end development, all that great stuff. And honestly, being asked to also understand SLSA, SBOMs, VEX, et cetera, is going to be very difficult, so make it easy.
And when we look at the technology here: you want a secure source repo, which really means building from a repo template that is secure. You want to use those source scanners, as we were talking about before, like SAST and DAST. You want to be following best practices in general code writing. When you look at securing the build: you want to protect the build pipelines in the first place, you want to protect the build control plane, and you want to be able to ask whether the build workloads themselves are protected. And then when we look at the packaging: you want to be able to ask, are the outputs signed? Am I getting that provenance in some way, SLSA or whatever? And then you also want to scan the package once again for issues. And when you go and ingest all this stuff, you want to verify, verify, verify. That's really the most important thing, because the way these CVEs often get in is that you just didn't check. Okay, so now that I've talked about it a little bit abstractly, let's reiterate: what are the actual things we're trying to hit here? What are the specifics? That means small, cross-functional teams with security capabilities baked into the team, as opposed to something tacked on at the end. Security should be part of sprint planning. It should be part of how you look at budgeting, how you look at how long a feature should take, that sort of thing. If you are starting something that's going to include a whole bunch of new dependencies, are you taking the time to understand whether you are pulling in stuff that is secure? You want to make sure you have enough people with enough expertise.
You don't want to have a team that is completely stressed out and doing too much. They're going to cut corners; they're going to include software that perhaps hasn't been fully vetted or verified, all that good stuff. And then probably one of the most important things: you really want to make sure you have leadership with the motivation to contribute to the open source community. Something like 95% of all code is open source, or depends on open source, I should say. So when it comes down to it, you want to make sure you're contributing back to the open source community, because by doing so you are lowering the overall CVEs in the first place. And now, looking at the technology here: we have in-toto and TUF for secure software supply chains and secure updates. To reiterate: you want to have a source repo with security enabled, so if you're using GitHub, make sure you have branch protection rules, all that good stuff; there are a lot of great best practices coming out of groups like the OpenSSF. Use things like Wolfi for base images. You want to look at security-focused build tools like Tekton and Tekton Chains, FRSCA, et cetera, for the build; use something like Syft to generate SBOMs; use things like Grype for that initial vulnerability management, but that's not the end. You really want to make sure that you're also pulling in and looking at data sources like OSV and deps.dev, also from the OpenSSF, and then keeping track of all that stuff, because once you lose track of it you have to go back and re-scan, you have to go back and pull all that information again. So you want to use databases like GUAC for ingesting and analyzing all that data.
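As a concrete example of pulling in one of those data sources: OSV.dev exposes a query endpoint that takes a package coordinate and version and returns known vulnerabilities. The sketch below only builds the request body (the network call is omitted so it stands alone); the endpoint shape follows the OSV `v1/query` API, but treat the details as something to verify against the current OSV docs.

```python
import json

def osv_query_payload(name, version, ecosystem="Maven"):
    """Build the JSON body for a POST to the OSV.dev /v1/query endpoint.

    The ecosystem string ("Maven", "PyPI", "npm", ...) selects the
    package namespace OSV should search. Network call intentionally
    omitted; this just shows the request shape.
    """
    return json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    })
```

Feeding the response of queries like this into a store such as GUAC is what saves you from re-scanning everything every time, which is exactly the "don't lose track of it" point above.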
And then you want to use policy tools like Kyverno, OPA, et cetera, to actually enforce policies: to make sure you're only pulling in stuff that you expect to pull in, stuff that meets your requirements, including those regulatory requirements, going back to that. All right, so finally, to tie it all together, what are some of the key takeaways here? Security hygiene activities tend to correlate with improved security health. There was actually a pretty good talk from Eddie Knight from Sonatype, and Sonatype has done some research on this, which said: if you're following best practices, if you're doing things like adopting Scorecard, using SLSA, and similar practices coming out of groups like the OpenSSF, the CNCF, et cetera, those things correlate overall with lower CVEs as well as lower mean time to remediation of vulnerabilities. So the open source projects adopting these practices, even when CVEs are discovered, are addressing them much, much quicker. Regulations and legislation are positive steps in the quest for better security, though there are potential shortcomings in their current implementation, the CRA being an example here. And really, really, really, it is about the data. It's hard to prove that you have zero CVEs when you don't have comprehensive knowledge of vulnerabilities, or don't even know what's running in your environment; because having zero CVEs, or at least very few, is important, but knowing that you have zero or very few CVEs is just as important. And on the journey towards zero CVEs, you should be focusing on the journey; the journey is just as important as the destination here. You should always be striving towards zero CVEs. You should not, as soon as you achieve zero CVEs, say, great, now I can sit down.
Nope, you are continually going back and addressing these things. But really it's about a mindset that values a high level of security, that values doing all the right things, that values investing in open source, in security, and in developers, to make sure they have the time to do what they need to do to build features securely. And back to Andy.

And this is the shameless plug for Hacking Kubernetes, which is available for free download. Thank you very much for listening to our presentation. And I think I'll give this a moment for questions. Yes. Yes. Oh.

I've been asking pretty mean questions since lunch, so let me give you both one that's a little more, I don't know, spicy. One thing that I've noticed when it comes to CVEs is that some projects effectively say, oh well, our threat model is we just don't care about this, and so there's no CVE, because we've changed our threat model to say that security's not our concern. And I've seen other projects that say, oh yeah, we're really concerned about this, and they've built very intricate mechanisms to protect against it, and then they get a lot of CVEs filed against very subtle attacks on those very intricate little mechanisms. So can you comment on the quantity of CVEs and what that says about a project, in the light of my question?

Okay. So I do want to first preface it: we did say CVEs are the best we have, not that they're good. Yeah, okay, so the specific issue is very much related to SBOMs and vulnerability management. If I depend upon something and, instead of releasing a CVE against a vulnerability, they just release a patch version, then everyone's going to keep on running that vulnerable piece of software without being aware of it and without updating it. So that sits at the heart of trying to keep the system secure.
There is a perverse incentive for maintainers not to accept CVEs, as you say, because they become lightning rods for vulnerability researchers and people looking to make a name, to some extent. I mean, bug bounties are kind of a happy-ish medium, where things are triaged in an objective way, but standing that up for an open source project is potentially quite expensive, or not something that the maintainers have time to do. So yes, misaligned incentives, I would agree.

Yeah, I mean, we see that even now, when you look at a lot of the CVEs that are being filed. There are bad actors on both ends: you have projects that are turning down real CVEs, and then you also have vulnerability researchers looking for every single bug in some test code, or every single edge case in a regular expression that can only be hit in highly specific circumstances and wouldn't really affect much anyway. Which is kind of why we keep going back to the point that the focus is more about understanding what's in your supply chain, really looking at it more holistically, really applying things that we have seen correlated with actual security, things like what we've seen in Scorecard and various OpenSSF best practices and so on, and things that we see in the CNCF and the threat modeling there. But right now it's still the best we have, and we understand that if you don't have anything, you have nothing to really point to.

And I guess the secondary point that we glossed over is VEX, the Vulnerability Exploitability eXchange format, which goes some way towards this. So let's say I raise a spurious CVE in Mike's project, and instead of having the time to debate it with me, he's just like, yep, fine, okay, we can patch it, it's easy. And then he updates the version.
He or another security researcher could then publish a VEX document saying you can actually still run this version, because of the exploitability of that bug: it's in a piece of test code, it's a prime example, we either tree-shake or we don't ship tests to production, or something of this type. There is a whole different set of concerns that comes with trusting somebody to bypass your security scanning, and if it's a third party, then who do we trust in that case? If the vendor produces a VEX to say something deep in my transitive dependency graph is actually not vulnerable, well, then you could probably trust the vendor, but then they've got some work to do. What about security researchers who are third parties in this question? Then one has to establish a trust relationship with those researchers as well. So yeah, I'd love to see VEX take off a bit, but that question of fundamental trust is unanswered. Any other questions? Yeah, if there's nothing else, thanks for coming, and if folks are interested in GUAC, I'll be giving a talk on GUAC right after this one.