joining me today. So thank you very much for joining. I hope you'll get a lot out of this presentation, because I'm going to be talking about developing and also evaluating secure open source software. Very quickly, here's my outline; I'm going to jump right into it. First, some quick background, because I've found that some people don't know these things. What is open source software? It's software licensed to users with certain freedoms: to run it for any purpose, to modify and study it, and to freely redistribute either the original or a modified version. There's a full definition in the Open Source Definition, and there are a lot of common licenses, like MIT, Apache 2.0, LGPL, and GPL. Software that's not open source is typically called closed source or proprietary software. Although it's not the focus of this talk, it's important to understand that open source software, at least under U.S. law, is a kind of commercial software: it's licensed to the general public, and that automatically makes it commercial software. That's actually pretty important if you ever have to interact with organizations like the U.S. government. Open source licenses enable worldwide collaborative development of software, and that can have many positive benefits, including positive benefits for security. Open source software is critically important today. Here are some figures: 98% of code bases contain open source software, so almost all of them. When Synopsys looked into programs, they found that even when the software as a whole is proprietary, averaged out, 70% of the code in those code bases was open source. There are more and more open source components within applications, and that growth continues. So I've talked about open source software; now let me talk about security. The reality today is that all software is under attack.
Open source, closed source, doesn't matter: it's all under attack, and here are just some of the many, many examples. I hope I don't have to strongly prove this in this context, but it's not just open source, it's not just closed source; all software is under attack, both directly, with attackers trying to exploit its vulnerabilities, and via supply chain attacks, where attackers try to subvert the process of getting the software to its eventual users. If you can only look at one slide today, here's your slide. Now, I'm hoping you'll see the other material I'm going to talk about, but if you are developing open source software, I hope you understand that making it secure needs to be one of your important goals. Well, how do I do that? Here's a top list of things you need to be doing. Number one, and in many ways the most important: learn how to do it. The sad state of affairs today is that most colleges and universities that teach how to develop software don't teach how to develop secure software. I think that's terrible, I think that's awful, but it is a reality. So, if you don't already know how to do it, take a course and learn. If you don't have one easily available, please consider taking Secure Software Development Fundamentals. It's a free course; there's a URL right there, and I'll show it again later. It's on edX, it costs you exactly zero dollars, and it will help you understand how to develop secure software. I'll go over some of its basics, but really, take a course; it won't take long. And of course, it's more than learning: you have to apply what you learn. So, make the software you develop secure by default. Make it easy to use securely, and harden it against attacks, so that when the user receives it, it's ready to go and run securely. Number two, if you're developing open source software, work to earn a CII Best Practices badge. There's another URL.
I'll talk more about those in a moment, but basically, it's a list of criteria that, if you follow them, will increase the likelihood of producing secure software. Number three, use a lot of tools to find vulnerabilities in your continuous integration pipeline. There are a lot of different kinds of tools available, and I've listed some common ones here: quality scanners (a.k.a. linters), security code scanners, secret scanning, software component analysis, web application scanners, and fuzzers. You want to use multiple different kinds of tools, and multiple tools of each kind. Tools by themselves aren't enough to make software secure; you still have to know how to make secure software. But they can be an important part of making secure software. Next, monitor for known vulnerabilities in what you depend on. Nowadays, most software brings in a lot of other software, and if those dependencies have vulnerabilities, that may become a vulnerability in the system you've created using them. You need to monitor them and update when they're vulnerable. Which means, number five, you need to enable rapid update of your dependencies. Once you find out there's a known vulnerability, you need to be able to quickly update and quickly ship. How do you do that? Package managers and automated tests are the keys, because package managers automate the process of managing your dependencies; there are different kinds, but you need to use them. Your automated tests should include negative tests, to make sure that things that shouldn't work stay not working, and the suite should be thorough enough that you can ship if it passes. Some people ask, well, how much do I need to test? The answer is: build enough of an automated test suite that if the software passes it, it's ready to ship. If it's not there yet, then your automated test suite isn't good enough yet. Number six, evaluate before selecting dependencies.
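As a quick illustration of the "negative tests" idea above, here's a minimal sketch in Python; `parse_age` is a hypothetical function standing in for your own input-handling code:

```python
def parse_age(text):
    """Parse a user-supplied age; reject anything outside 0-150."""
    value = int(text)  # raises ValueError for non-numeric input
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value

# Positive test: valid input works.
assert parse_age("42") == 42

# Negative tests: invalid input must KEEP being rejected.
for bad in ["-1", "9999", "42; DROP TABLE users"]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: the rejection still works
    else:
        raise AssertionError(f"{bad!r} was wrongly accepted")
```

The point of keeping the negative cases in the suite is that a later refactor that accidentally removes the range check or the type check will fail CI immediately, instead of silently widening what the software accepts.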
I guess in some sense that's out of order, but basically, before you bring in a dependency, make sure you evaluate it, and make sure you're bringing in the one you think you're bringing in. Number seven, make it easy for your users to update, with things like providing stable APIs, so that they can quickly update when there's a problem. And finally, number eight, continuously improve. Attacks get better, so defenders also need to get better. Now, I don't want you to get the idea that perfection is the one and only goal here. Vulnerabilities are risks. It's hard to eliminate all risks, but you can manage risks: you can mitigate them, you can reduce their likelihood, you can reduce their impacts. But you have to actually think about doing that. "David, there is one question in the Q&A, if you want to take it." Yeah, I'm happy to take questions as we go. It might be wise to not have too many, because after I'm done with my presentation I'm hoping to have an open set of questions, but please do; I want to answer questions. Oh, I need to click on things. Let's see, where do I click? Q&A. All right: "If I'm trying to select an application, how would the badges help me?" Let's see here. Basically, the badges will give you an idea of whether those projects are working to develop secure software and where they stand. Trying to achieve the best practices for your own project helps your project get secure. Looking for projects that have achieved those badges, or at least are well on the way to achieving them, gives you confidence that the projects you're bringing in as dependencies are, in fact, more likely to be secure. All right. So let me talk a little bit about some of the things from my summary slide. I talked about this course on secure software development fundamentals. It's a set of three free courses, and it's not a huge time commitment; you've got approximate hours here. If you took an hour a day, you'd be done in less than two weeks.
And it covers these topics: the design principles for security, things like least privilege, how to apply them, how to examine designs, and how to use allowlists (accept lists), not denylists, to constrain untrusted inputs. If you're taking inputs from someone you can't trust, you should be very, very strict about what you accept: define very strict patterns saying these are the only input values allowed, and just reject everything else. This is incredibly effective at limiting a lot of attacks. Know the most common kinds of vulnerabilities; there are various top 10 and top 25 lists, and then learn how to prevent each one. Use hardening methods so that, yes, you're probably going to have some bugs in your code, but not all bugs are equal; you can take a lot of steps to reduce the likelihood that a bug is a vulnerability. I already mentioned adding vulnerability detection tools to your CI pipeline; the course talks in more detail about the different kinds of common tools, their strengths, their weaknesses, and how to apply them. This course, unlike many, has materials specific to using and developing open source software. (Somebody needs to go on mute.) One nice thing about this course is that not only is it free, it has a large number of small modules, and almost all of them have a little quiz at the end. That's a very simple technique, but I've found it's really, really helpful for staying on track instead of just blindly reading: oh wait, I've got a little quiz, I've got to answer my quiz, and that really helps. You can pay to try to earn a certificate. You don't have to, but some people want to be able to prove that they learned the material, so we offer that as well. By the way, this is a project within the Open Source Security Foundation's Best Practices working group. Let me talk a little bit about some of the key points that are covered in much more detail in that course, and in other courses about developing secure software.
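The allowlist approach to untrusted input described above can be sketched in a few lines of Python; the username policy here is a hypothetical example of "define a strict pattern, reject everything else":

```python
import re

# Allowlist: define exactly what IS allowed and reject everything else.
# Hypothetical policy: usernames are 3-16 ASCII letters, digits, or underscores.
# \A and \Z anchor the whole string ($ alone would tolerate a trailing newline).
USERNAME_RE = re.compile(r"\A[A-Za-z0-9_]{3,16}\Z")

def is_valid_username(text: str) -> bool:
    return bool(USERNAME_RE.match(text))

assert is_valid_username("alice_01")
assert not is_valid_username("alice; rm -rf /")  # shell metacharacters rejected
assert not is_valid_username("ab")               # too short
assert not is_valid_username("alice\n01")        # embedded newline rejected
```

Notice that the pattern never tries to enumerate "bad" characters; anything not explicitly allowed is rejected, which is exactly why allowlists hold up better than denylists against inputs the author never anticipated.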
I've heard some people say, hey, my software doesn't have a design. Well, your software might not have a documented design, but if it runs, it has a design, because a design is simply how you divide your problem into components and how they interact. That is ultimately your design. Working software always has a design, but some designs are far better than others in terms of being secure. There are things called design principles, which are basically rules of thumb to help you avoid common, serious design flaws. There are a number of time-tested design principles, things like least privilege: give your software the least privilege it needs to work, because that greatly reduces the impact if there's a security problem of some kind. Another one is complete mediation, also known as non-bypassability. In other words, if there's some check that's important for security, make sure an attacker can't bypass it. It's remarkable how many client-side JavaScript programs and mobile applications fail this, because they insert security checks in code that's going to run on a computer you can't trust. This happens over and over again. By the way, the problem with failing to know about or apply least privilege or non-bypassability is that it's often a lot of work to change your code later to fix the problem, whereas if you knew about them ahead of time and applied them, it's no big deal. I've often heard that security can be expensive. Well, it's not usually very expensive if you think about it ahead of time; what's typically expensive is retrofitting security, having to rejigger all your code. Not security, but retrofitting security: oh yeah, that can be very expensive. I mentioned earlier that you need to know the most common kinds of vulnerabilities and how to avoid them. Depending on how you measure, over 90%, or maybe even 99%, of vulnerabilities fit into a relatively small set of categories.
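As a side note on the complete-mediation (non-bypassability) principle discussed above, here's a minimal sketch in Python. The names (`ROLE_OF`, `delete_account`) are hypothetical; the point is that the authoritative check lives on code the attacker cannot edit:

```python
# Complete mediation: derive authorization from server-side state, never from
# a claim made by client-side code, which the attacker fully controls.
ROLE_OF = {"alice": "admin", "bob": "user"}  # hypothetical server-side record

def delete_account(requesting_user, target_user, client_says_admin=False):
    # client_says_admin is deliberately IGNORED: trusting it would let any
    # client bypass the check just by sending "admin=true".
    if ROLE_OF.get(requesting_user) != "admin":
        raise PermissionError("admin role required")
    return f"deleted {target_user}"

assert delete_account("alice", "mallory") == "deleted mallory"
try:
    delete_account("bob", "alice", client_says_admin=True)  # bypass attempt
except PermissionError:
    pass  # expected: the server-side check cannot be bypassed
```

The client-side check in a web form or mobile app is a usability convenience at best; the same check must be repeated where the attacker can't reach it.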
So if you know those categories, know how to prevent each one, and actually do it, you can reduce your vulnerabilities by at least an order of magnitude. There are some widely used, carefully crafted lists of common vulnerabilities; you should know them and use them. If you're building web applications, a lot of folks use the OWASP Top 10 for web applications. If you're doing anything else, a commonly used list is the CWE Top 25. "Top 25" is actually a little misleading, because they do list 25, but they also list some extras "on the cusp", a few that aren't in the top 25 but that maybe you should think about too. The good thing is, once you know about these common kinds of vulnerabilities and how to avoid them, you squash an incredible number of vulnerabilities in your software. So here are some examples. Injection vulnerabilities: there's a common problem that can be a disaster, in particular for web applications, called SQL injection, and it's incredibly easy to counter. If, instead of concatenating strings, you use something called prepared statements, the code is easier to read, it's sometimes faster, it's easier to understand, and it counters SQL injection attacks. Another common problem is cross-site scripting. Another is buffer overflows, which are actually a subset of a larger category called memory safety failures. These are still endemic: about 70% of Chrome's vulnerabilities, and of Microsoft's vulnerabilities, are memory safety failures, and have been for many, many years. So once you know about these, you can start countering them and eliminating them as vulnerabilities in your software. I mentioned earlier, you need to do more than just know this in your head; you need to apply it. But you know what? It's hard to be perfect. So it's important to also add vulnerability detection tools to your CI pipeline. The key here is to detect problems early.
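To make the SQL injection discussion above concrete, here's a minimal sketch using Python's built-in sqlite3 module; the table and data are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(conn, name):
    # Parameterized (prepared) statement: the '?' placeholder guarantees the
    # input is treated as data, never interpreted as SQL code. Concatenating
    # the name into the query string would be the injectable version.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# Normal lookup works:
assert find_user(conn, "alice") == [("alice",)]

# A classic injection payload is harmlessly treated as a literal string:
assert find_user(conn, "' OR '1'='1") == []
```

With string concatenation, the payload `' OR '1'='1` would rewrite the query to return every row; with the placeholder, the database simply looks for a user literally named `' OR '1'='1` and finds none.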
Now, there are many, many different kinds of tools. You should include multiple kinds, and in many cases multiple tools, even of the same kind. You can think of tools as a kind of automated reviewer. Different human reviewers will notice different problems; different tools notice different problems, too. So you really want to bring a suite to bear. Practically all tools have both false positives and false negatives; they're almost always going to have at least one, and typically both. A false positive means the tool reports something that isn't really a vulnerability. A false negative means it fails to report something that is a vulnerability. What does this mean? You still need to think. False positives mean that just because a tool reports something, it isn't necessarily an actual problem. You need to figure out what to do about it: maybe you don't need to change your code, maybe you just need to tell the tool that no, that's not really a vulnerability, or maybe you need to change how you develop your software so that the false positive is no longer triggered. False negatives are a problem, of course, because the tools aren't going to find everything, which means you still need to know how to develop secure software, and you still need to apply those design principles. Again, try to have many tools. Now, there's a really different strategy you'll need to apply depending on whether you're doing a greenfield project or a brownfield project. By greenfield, I mean a new start: there's no code yet. Brownfield: oh man, here it is, congratulations, it's a million lines of code, and it may have been around for years. If it's greenfield, a new start, typically you want to add the tools right now, probably as many as you can, get them into the CI pipeline, and make them really sensitive. Why? Because the instant you start writing code at all, they'll immediately warn you about problems and constructs that are dangerous.
And you can say, oh, I see, I'll do this instead, and then all the rest of the code will apply those lessons. Brownfield is very, very different. If it's an existing project, you typically need to add tools very slowly, start by greatly reducing their sensitivity, really limit what they report, and then slowly increase their sensitivity over time. The problem is that if you just add a whole bunch of tools and make them sensitive, you'll be completely overwhelmed with reports: oh look, there are a billion reports, I can't possibly handle that. Right? So start with very few tools, start with the most important findings, and then add things over time. There was a question earlier about the CII Best Practices badge; let me talk a little more about that. The CII Best Practices badge is essentially a list of criteria, best practices for open source projects, and the goal of those criteria is to improve quality and security. Here's an example of some of those criteria: the project's sites must support HTTPS; it must use at least one automated test suite; at least one static code analysis tool must be applied (that's a tool that analyzes the code without running it); and the project must publish a process for reporting vulnerabilities, because even after you do everything, there may still be a vulnerability, so make it as easy as possible for people to report it back to you. These are based on the practices of well-run open source software projects. If an open source project meets those criteria, it earns a badge, and this enables projects and potential users to know its status. So, to that earlier question, if I'm a potential user, what do these do for me? Well, they help you see, oh, wait a minute, this project is working hard to apply a lot of good practices. There are actually three badge levels, passing, silver, and gold, but I'll note that even getting a passing badge is a significant achievement. Participation is widespread.
The slide deck says 3,700 participating projects; it's actually over 3,900, so we're getting really close to 4,000. There are over 500 passing projects, and you can see the current statistics at that URL. This is also a project within the OpenSSF's Best Practices working group. Now, what if you are a user of open source software? That includes, by the way, developers whose software reuses other open source software as dependencies, which means most software developers. So what do you do? Well, number one, is there evidence that its developers work to make it secure? All those things I just told you about, how developers should develop secure software: now your job is to look and see whether they're doing that. Are they doing things like working toward the CII Best Practices badge? Number two, is it easy to use securely? Three, is it maintained? You want to look for things like recent commits and multiple developers. Does it have significant use? You have to be careful here. There's a real problem in the software development world with what I call fad engineering: you know, big company X uses it, therefore it must be the right software for me. No. They probably have very, very different problems than you do. Just because somebody else uses it does not mean it's appropriate software for your circumstance. However, there is something to it: if there are no users, there are probably going to be no reviewers, and it's probably not going to be maintained well in the future. So it should have some use. What's the software license? We still have people today with the misguided idea that if there's no license on the software, it's open source, or it's usable by anybody. The law around the world has not changed because some people think that, okay? The law is the law, and if you want to be allowed to use the software, it has to be licensed.
This isn't some ideal; this is what laws around the world say. So, there are various tools that can help you identify the components within software, to figure out things like their licenses and whether there are known vulnerabilities in them. If it's important, what is your own evaluation? And the great thing about open source software is that it makes evaluation possible. I'd like to note that citizenship isn't trustworthiness; if you want to trust something, look at the code. And did you acquire it securely? The biggest problem there is: did you acquire the right thing? Now, I mentioned evaluation. If the software is important to you, not examining it is a risk. If you just take some bits, whether open source or proprietary, that's a risk. So I think it's often a good idea to review the software, because even a brief review of the code can give some insight. Again, is there evidence that the developers are trying to develop secure software? You can often find evidence of insecure or woefully incomplete software. You can run some tools against it. You can look for evidence of maliciousness. And of course, you can try to figure out the likelihood that the packages were generated from the source code they claim to come from. There are folks who can do that for you for a fee, if you want. Now, when you're downloading software, there are some things to think about, and some of them are complicated. But before you go complicated, there are simple things you can do that lower risks at almost no cost. First of all, double-check the name. Make sure you have exactly the correct name before you add a package as a dependency or download it for use. Today, the most common kind of malicious attack on open source software is typosquatting: creating projects that have similar names, but not exactly the correct names. Why?
Because attackers can take the easy road, and this is easy: instead of trying to subvert the software, just make another project. Check for dashes versus underscores, ones versus lowercase Ls, and Unicode look-alike characters, and also check the package's popularity: when was it released, what are its download counts, and so on. If you're about to use a package you know has been around for 10 years and is widely used, and oh look, here's this package, it was created last month and has three downloads: that's not it. You don't need to be a genius to realize that's not what you were looking for. Some simple checks can actually avoid a lot of problems. And of course, download and install in a trustworthy way, whatever the software's usual redistribution approach; try to use HTTPS instead of HTTP. One technique that's actually really helpful is download and delay: download, but don't install it quite yet, particularly for application-level packages. Sometimes when a download site gets attacked, the attack is noticed and fixed, and if you waited a little bit, you didn't end up installing the maliciously subverted version. Try to verify that it's digitally signed, though the challenge is that there are difficulties today with digital signature verification; I'll talk about some efforts to deal with that later. Now, of course, when you're operating the software: protect, detect, respond. Protection is great, but things happen, so you also need to detect and respond to attacks, and constantly monitor. If a vulnerability is found in a dependency, examine it quickly. If you know it can't be exploited in your environment, fine; otherwise, you want to rapidly update, test, and ship, as I mentioned earlier. The key is that you've got to be faster than your attackers. If you're waiting a month, that's probably pointless, because the attackers aren't waiting a month. As soon as the vulnerability becomes known, the clock is ticking.
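The simple name checks described above can even be partially automated. Here's a minimal sketch in Python; the known-good package list is hypothetical, and a real check would consult your actual dependency manifest:

```python
import difflib

# Hypothetical allowlist of packages you actually intend to depend on.
KNOWN_GOOD = ["requests", "numpy", "python-dateutil"]

def check_package_name(name):
    """Flag near-misses (dash/underscore swaps, typos) as possible typosquats."""
    if name in KNOWN_GOOD:
        return "ok"
    # get_close_matches uses a similarity ratio; 0.8 catches one-character
    # edits on typical package-name lengths without many spurious hits.
    close = difflib.get_close_matches(name, KNOWN_GOOD, n=1, cutoff=0.8)
    if close:
        return f"suspicious: did you mean {close[0]!r}?"
    return "unknown"

assert check_package_name("requests") == "ok"
assert check_package_name("requestes").startswith("suspicious")
assert check_package_name("python_dateutil").startswith("suspicious")
```

This obviously doesn't replace checking release dates, download counts, and maintainers; it's just the "dashes versus underscores, ones versus Ls" inspection done mechanically, so it happens every time rather than only when someone remembers.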
I don't have a lot of time to talk about it, but I should note that although attackers certainly attack vulnerabilities in the software itself, attackers also attack the process by which software emerges from developers' heads all the way to eventual use. There is a process used to develop software: typically, developers create it in a local environment; it gets merged into some source code repository; it's built, verified, approved, and released to some sort of distribution platform; and then it's selected directly by users, or selected and brought into larger systems, which build on it further. Of course, attackers can attack all of those steps, but the good news is that there are many ways to counter those attacks once you realize: oh, wait, attackers will try to attack me, but I don't have to just accept that; I can actively work to counter them. Things like using two-factor authentication for my repository, so that arbitrary people can't just guess a password, break in, and take control of my software. There is no silver bullet, so you really need to think about a number of different things. Now, what's coming in the future? My crystal ball is a little fuzzy, but here are my best guesses. I think we're going to see more help in evaluating open source software. I'll note that the OpenSSF is working on providing a metrics dashboard, metrics.openssf.org, and CHAOSS is working to define more metrics. A lot of folks are working on making it easier to evaluate open source software, so that when you ask, hey, here are three different open source packages, which one should I use?, you can have better information. Next up, wider adoption of, and requirements for, software bills of materials (SBOMs). A little earlier this year, the U.S. White House released the executive order on cybersecurity. It includes a number of things, including requirements and statements about software bills of materials.
Today, when users get some software, they generally have no idea what's inside it. It looks a whole lot like the world of medicine in the late 1800s and early 1900s, when people would hand you medicine and Lord only knows what was in there; it might contain illicit drugs or other things you might not otherwise want to put in your body. Yet for software, we have no idea what's inside. There are package managers that can already track the software within a system inside single ecosystems, but there are standards now in development, or released, that let you share software bills of materials, that is, the ingredients of a larger piece of software. I'll note that SPDX exists today; it's now at the PRF stage in ISO. There's some other work as well. Basically, there's already work going on to enable sharing software bills of materials, and I expect to see more of that in the days ahead. I think package managers and repositories are going to improve their countermeasures. There's something called verified reproducible builds, which lets you verify that the bits you're about to install really did come from the source code that was evaluated by the developers. The lack of this was, for example, a problem in the recent SolarWinds debacle with Orion, where the code that was installed and run was not generated strictly from the source code written by the developers. The developers developed the code and reviewed it, but that's not what was shipped to users, because someone subverted the development process, the build process actually. Cryptographic signature verification: from a mathematical point of view, cryptographic signatures are solved; we know how to do it, math-wise. But there's a lot more to the world than mathematics, and applying cryptographic signature verification in the real world has turned out to be a challenging problem.
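The verified-reproducible-builds idea mentioned above ultimately comes down to comparing cryptographic hashes of independently built artifacts. A toy sketch in Python, with stand-in byte strings taking the place of real build outputs:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for real binaries: if a build is reproducible, an independent
# rebuild from the same source yields bit-for-bit identical output, so the
# hashes match. A subverted build step (as in the SolarWinds Orion case)
# produces different bits, so the hashes differ and the tampering is exposed.
official_build    = b"\x7fELF...release binary..."
independent_build = b"\x7fELF...release binary..."
tampered_build    = b"\x7fELF...release binary plus a backdoor..."

assert sha256_of(official_build) == sha256_of(independent_build)
assert sha256_of(official_build) != sha256_of(tampered_build)
```

The hard part in practice isn't the hashing; it's making the build deterministic (fixed timestamps, fixed paths, pinned toolchains) so that honest rebuilds really do match.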
One Linux Foundation project is called sigstore, which is working hard to make cryptographic signature verification much easier; it's also working on improving git signing abilities. Integrity attestation: there are things like in-toto and Alvarium to help with attestation of integrity. Increased use of memory safe languages: if performance is not super important, there's a huge number of programming languages available that are memory safe. But when you really need very strong performance, a lot of programs today are written in C and C++, which are not memory safe, and this has led to 70% of the vulnerabilities in a lot of today's systems being memory safety problems. So increased use of languages that simply prevent this whole class of problems is, I think, very promising and likely to continue in the future. I think formal methods are going to continue to be used in rare, specialized tasks. I'd like to see a little more of that, but I don't think it's going to be common; I'm happy to be proved wrong on that, but I think we will see them in some specialized tasks. Now, please work with others to make things better, because the future is whatever we collectively work to make it. If you're interested in improving open source software security, then the answer is: get involved. Here are just some of the organizations; I've mentioned most of them by now. There's, of course, the Open Source Security Foundation, sigstore, and in-toto, and frankly, there are many others that aren't on this list. The point is, if you want to improve the security of open source software, get involved. And this is not to say that, oh my gosh, open source software security is a disaster. Not at all. There's a lot of really secure open source software. Open source software has some real potential advantages for security, because many people can review it and get it fixed. But those potentials are not always lived up to.
And frankly, we don't want just good; we want really good, because we're all dependent on this stuff. So we want to make things better and better over time. Which really leads to the bigger point here: developing and deploying secure software is really a journey. It's a journey of learning, it's a journey of improving; it's not really a singular event. Because of the way the English language works, it's often easier to speak in terms of perfection as a goal, but the reality is that there are always ways to improve security, even when the software isn't known to be vulnerable: hey, we can change the software so that it's even less likely to get a vulnerability later, so that it has stronger defenses. Right below are links to a few of the things I mentioned. I already mentioned the free course. I don't have time to cover a whole course in this very short time, but I hope I've given you a flavor of it, because in the end, I think that education and training are in many ways the most important thing. Once you know how to develop secure software, a lot of other things become easier, and it's one of those things that will pay you dividends through the rest of your software development career. bestpractices.coreinfrastructure.org is where the CII Best Practices badge project lives, or at least where you can get started earning a badge. Here's a little guide on how to use tools; there are many other guides, but the key is to get some tools into your CI process so that you can start detecting vulnerabilities, building on what you already learned from a course of some kind. All right. I wanted to leave a lot of time for questions, so I tried to get through, I know, a lot of material relatively quickly. I have lots of backup slides if you want to ask specific questions or go in a particular direction.
But I'm really interested in helping you today. I want people to leave here feeling that the questions important to them were answered. And I'll point out that this is, of course, part of the LF Live Mentorship series, and that's really the goal of all of these events: we're trying to give you some information, but we also want to discuss and answer your questions as best we can, so that you can go away, apply it, and be glad you were part of the experience. All right. With that, let's see here. I'm going to start with some of the questions that seem to have built up as we went. So let's see here. Yes, okay: the CWE Top 25. There's a link there. Sure, yeah; if you just search for "CWE Top 25", you should be able to find it quite quickly. And in fact, if you take the course I've been mentioning, which, by the way, I make no money from: if you go to edX and take the course, I make no money, you pay no money. I am very, very excited about this particular topic; I think it's very, very important. And one of the things I made sure of, because I actually wrote that course, is that it covers all those top items. So you will actually walk through those various top items and see various options for how to counter each one: what is it, and how do you counter it? All right. "How are we going to address the gap in mapping security from CVE and CPE to the SBOM and SWID connection?" Okay. So, regarding CVEs, CPEs, and so on: for those of you who aren't familiar with this, there is a well-known method for identifying software vulnerabilities. It doesn't identify them all, but it covers many; it's probably the most complete collection of any publicly accessible list of vulnerabilities. The idea is pretty straightforward: every vulnerability has a unique ID.
They look like CVE, then the year it was reported, then some other number to make it unique. The current problem with CVEs is that although they report vulnerabilities, for almost all of them they don't record any way to automatically link the vulnerability report to the software that is vulnerable. There's typically some textual description. And in a world where maybe you had 100 programs, that's fine; you can live with that, you don't need automation. But there are literally millions of open source software projects today. It is complete insanity to think you're going to do that by hand. There are a number of projects which are different but share the same name, and a number of projects which have multiple different names for the same thing. Trying to handle that with a human is ridiculous. Now, CVE does attempt to address this with something called CPEs, but almost no projects have CPEs today; they're not really supported. They keep saying they're going to use SWID tags, which don't actually work for this case. I don't know why they keep saying SWID tags will work when it's known they won't, because most of the software in question is open source, and SWID tags require unique hashes for the binaries. Oh, by the way, you can recompile. That's a thing. So it just doesn't make any sense. So what are we doing to address it? A number of people have repeatedly told the CVE folks that this is a critical thing to fix. I think the reality is that either the CVE process gets fixed to address it, or the community will abandon the CVE process and switch to something that actually resolves the problem. I mean, it took a lot of effort for the CVE folks to get where they are; I'm hoping the CVE folks will fix their process so that there can be an automatic connection.
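The identifier shape itself is simple enough to check mechanically; the hard part, as noted, is linking the ID to the affected software, not parsing the ID. Here's a minimal sketch in C (the function name is just for illustration) that checks the CVE-YYYY-NNNN... shape:

```c
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Check the "CVE-", 4-digit year, "-", 4-or-more-digit sequence shape. */
static bool looks_like_cve_id(const char *s) {
    if (strncmp(s, "CVE-", 4) != 0) return false;
    s += 4;
    for (int i = 0; i < 4; i++)              /* exactly four year digits */
        if (!isdigit((unsigned char)*s++)) return false;
    if (*s++ != '-') return false;
    size_t digits = 0;                       /* sequence number: 4+ digits */
    while (isdigit((unsigned char)*s)) { s++; digits++; }
    return digits >= 4 && *s == '\0';
}
```

So `looks_like_cve_id("CVE-2021-44228")` is true, while malformed IDs are rejected. None of this, of course, tells you *which software* the ID refers to, which is exactly the gap being discussed.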
But in my mind, there's really no point in reporting vulnerabilities without a way to connect them to the software that's vulnerable. It doesn't make any sense to me, and it doesn't make sense to a lot of other people, either. I realize they do it with human-readable text; that also doesn't make sense in a world with millions and millions of software projects. So I'm hoping they'll fix it. If not, I expect the community will come up with an alternative that actually resolves the problem. All right. What learning strategy do you suggest for a beginner? Well, hey, I have an edX course. Start there. It's not complicated: you click and learn, simple reading materials with quizzes as you go along. It's not fancy, but it's a good place to start. And there's always something to learn. I've been doing this for a long time and there are always new things to learn. So start with a simple course like that edX course, and then just be open to learning. Keep monitoring things: oh look, a new kind of vulnerability has been discovered, I'll read the article about that. All right. Now, let's see here. That was from the chat; let me go look at the Q&A. There we go. Q&A. All right, I'm going to try to answer questions from the Q&A section. What vetting occurs to prevent bad actors from getting involved in open source security and subverting these proposed improvements in some way? The answer is actually the same as for proprietary software: you should have other people review that code, either before it's deployed or afterwards, to look for those problems. The biggest risk is really open source projects with single developers. In that case there often isn't a second person who can review it, other than perhaps later potential users who check it. One of the advantages of open source is that you can...
Anybody, not just the developers, can look. As projects get larger, they typically start requiring more and more review by other people, and that makes it much riskier for an attacker to slip in and propose subverted improvements. I'll note there actually have been some efforts to insert malicious code that haven't worked. Somebody tried to insert malicious code into the Linux kernel almost 20 years ago. I think it was 2003; the year may not be quite right. Basically, they tried to insert code that looked right but used a single equals instead of a double equals: a very subtle, hard-to-see flaw if you weren't used to this sort of thing, one that would quietly enable someone to take over a Linux-based system. That code was never included in a real Linux kernel release; it was immediately detected by the kernel developers. More recently, some researchers from the University of Minnesota tried to create some vulnerable code and get it added to the Linux kernel. The way they did it, frankly, was not appropriate. I'm not a fan of how they did it, but since they did it, we may as well learn from it. And I'll note that, once again, their attempts to insert vulnerabilities were immediately rejected by the Linux kernel developers. Both of those are Linux kernel examples, so let me add a third, very different one. Years ago, Borland sold a proprietary database program, InterBase, and had some success in the medical community, but eventually it just wasn't profitable. So after many, many years of selling it as a proprietary program, they released it as open source software. Within months (less than a year; I don't remember exactly how long), someone found something that looked for all the world like a maliciously inserted backdoor.
Basically, if you entered the username "politically" and the password "correct", you were suddenly the database administrator. This was not documented and was, of course, a very, very bad idea. Now, maybe this wasn't an intentional malicious backdoor; it certainly looks like one, though. And whether or not it was, it shows that, hey, it was sold as proprietary software for many, many years and nobody noticed this backdoor; released as open source software, people noticed. So while that's no guarantee, it does at least provide decent evidence that people really do look at open source code, and they really do detect and counter problems. Next question: what language is the most vulnerable to attacks in developing open source software? Oh, that one's actually harder to answer than you might think. Let me reverse the question a little, because, maybe not in this particular question, but I have seen the line of thinking where, hey, if I just choose language X, I will have no more vulnerabilities. Let me cut that off right away: there is no programming language that ensures no vulnerabilities of any kind are possible. It is always possible to make mistakes in any programming language that result in what you didn't expect. Now, that said, some languages do allow a kind of foot-gunning that is harder to do in other languages. The poster child for this, and I'm sure no one is shocked by this, is the pair of programming languages C and C++. They're not necessarily awful languages; they were designed for a particular circumstance and a particular situation. The problem with C and C++ is that they assume the programmer never, ever makes a mistake, and that the programmer is willing to take great pains and care to check everything.
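Incidentally, the single-equals flaw in the kernel story above is exactly this kind of C mistake. Here's a minimal sketch of the pattern, with hypothetical names and values rather than the actual kernel code: an expression that looks like an error check but, because C treats assignment as an expression, silently sets the user ID to 0 (root).

```c
/* Looks like it rejects an invalid option combination. But `*uid = 0`
   is assignment, not comparison: it sets uid to 0 (root), evaluates to
   0 (false), so the "error" branch never fires, and the caller is now
   root. The honest version would read `*uid == 0`. */
static int buggy_option_check(int options, int *uid) {
    if (options == 0x7F && (*uid = 0))
        return -1;            /* never taken */
    return 0;                 /* reports "no error" */
}
```

With `options == 0x7F`, the function returns 0 ("fine") yet the caller's `uid` has quietly become 0. Note that the extra parentheses around the assignment also suppress GCC's and Clang's usual `-Wparentheses` warning, which is part of what made the real attempt so subtle.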
So, hey, before you access an array, you're going to make sure the access is within bounds. If you read from or write to a pointer, that pointer is going to point where you wanted it to. They also assume the software developer is very aware and has carefully studied the specification, because C and C++ have a huge number of undefined constructs that a lot of people, frankly, are surprised by. A lot of software developers think, even when they write C, hey, if I add one to the largest integer, that becomes the most negative integer, right? No. That is undefined behavior. It will probably result in a vulnerability; it will not necessarily result in wrapping around. But wait a minute, I thought computers did that! That's a machine thing; you're using C, and C has different rules. And so, as I mentioned earlier, Chrome, with 70% of its vulnerabilities over the last several years, and Microsoft, with 70% of vulnerabilities across its software, are both seeing memory safety issues dominate. And C and C++ are always memory-unsafe: all use of C and C++ allows memory safety problems, because there are no protections against memory safety issues built into the language. Now, it's true that some C++ classes, used in certain ways, will protect against some memory safety problems, but in general it's easy to escape out of them; in fact, it's easy to escape out of them without realizing you're doing it. So those two are probably the most dangerous in terms of vulnerabilities. But, as with any language, you can have vulnerabilities in any language. I mentioned earlier an increasing push toward memory-safe languages, languages like Rust and so on. Now, it's not that you can't have memory safety problems in languages like Rust, or Ada, or C#, or many others. The difference is that you have to specifically enable the unsafe behavior. They're safe normally, and then you can disable the safeties in special cases when you need them.
But as long as you limit the unsafe code to very small regions that you can check carefully, you're much, much less likely to have at least many classes of vulnerabilities. So, that said, can you have vulnerabilities in Java? Yes, absolutely. Can you have them in any other language? Absolutely. But there's a trade-off, and all too often it's just too hard today for software developers to be perfect in every way, and that makes it harder to write secure software in C and C++. Not impossible. Harder. Next, let's see here. What static code analysis tools do you recommend? What about dynamic code analysis tools? Oh my goodness. Okay, first I need to clarify that different tools are better for different circumstances, so it's really not a matter of "here's the one tool, off you go." That said, let me hit the dynamic analysis tools first. If you want a web application scanner, there's a huge number of them and it's hard to keep track, but I've used OWASP ZAP many times and I've been very happy with it. So there's one. Are there others? You bet; there are a lot of good ones. For fuzzing, there are tools like AFL++, which is basically a maintained fork of american fuzzy lop. That's what's called a coverage-guided fuzzer. Coverage-guided fuzzers are amazing; they have made it much easier to apply fuzzing. Basically, they lower the barriers to entry. As for static code analysis tools, that's way harder. Almost every language has at least one linter; use at least one of those. Beyond linters, static analysis is really challenging, and a lot depends on the languages you're using. I know a lot of folks use Coverity, which is a proprietary tool, or Fortify. There's also just a boatload of other tools.
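To connect that to the fuzzing tools just mentioned: a coverage-guided fuzzer like libFuzzer, or AFL++ in libFuzzer mode, only needs an entry point that feeds arbitrary bytes into your code. Here's a minimal harness sketch, where `parse_record` is a hypothetical function standing in for whatever you actually want fuzzed:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical parser standing in for the real code under test. */
static int parse_record(const uint8_t *data, size_t len) {
    if (len < 2 || data[0] != 'R')
        return -1;            /* not a record */
    return data[1];           /* record type byte */
}

/* The fuzzer calls this millions of times with mutated inputs;
   crashes and sanitizer reports become the findings you fix. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;                 /* 0 = input processed, keep fuzzing */
}
```

Build it with something like `clang -g -fsanitize=fuzzer,address harness.c` and run the resulting binary; the fuzzer handles input generation and coverage feedback for you, which is exactly the lowered barrier to entry described above.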
Now, one complication that makes it a little challenging for me to recommend tools is that some of the proprietary tools come with something called DeWitt clauses in their licenses. I am very much opposed to these things; DeWitt clauses, I think, should be straight-up illegal. Basically, they say that if you're going to use the software, you can't publish benchmark results publicly unless the vendor approves. And of course the tool maker is not going to approve anything that doesn't say the tool is wonderful. It's had an incredible chilling effect. I believe these should just be illegal; there are a lot of free-speech protections in the US that prohibit these kinds of things, but somehow this has flown under the radar. And once one supplier adds such a clause, other vendors are typically pressured to add clauses to their tools too, because otherwise people can say bad things about their tool but not their competitors'. So it's a bad situation, and it makes it harder to honestly give you recommendations, because what I really need is published benchmarks that people have done, and right now it's very, very hard to get that kind of data. So I've given you some personal answers, but in the longer term I'd like to see more tools and more public evaluations, so that we can make better evaluations community-wide. All right. Can you speak to how best to manage the open-source software dependency chain today? If I evaluate widget X, and its manifest says it uses gadget A, which uses gizmo B, and so on down to the bottom turtle? I know exactly what you're talking about. Yes, indeed, it's turtles all the way down. It is something of a challenge. Really, if you are developing or receiving software, the number one thing you can do is look at the ingredients, what's going in, because you can compare those.
There are a number of tools that compare known vulnerabilities against the software, and the version of the software, that's actually in there. And you know what? If it's vulnerable, complain: say, hey, wait a minute, you're depending on an old version, you need to update. I think right now that's one of the key things: going and complaining to your upstream. If we weren't talking about software, if we were talking about physical devices with no software at all, that's actually how we would do it, too. If I'm building a device made of other devices, which in turn are made of other devices, I would look at least at the major components, and probably a little further, to see if everything's okay. And if I saw a problem, I would go and complain to what's called the upstream; think of things in terms of a river flowing down to you. You go to your upstream and say, wait a minute, the thing you're using in there has a problem. Go get that fixed, get it updated or replace it, do something. And really, I think what we're going to need is more and more people running tools, detecting problems, reporting back (go fix, go fix, go fix) and getting people moving and updating. Because today the problem is often that a vulnerability is found, but somebody uses an old version, doesn't update, and then the people who use their software end up with this old, obsolete subcomponent. So it's not unsolvable; we just need to get people moving. All right: "really great presentation." Oh, thank you very much. What's the best way to do responsible disclosure for open source software with a known vulnerability? All right, a quick side note first, since you asked: I'm actually not a fan of the phrase "responsible disclosure." I prefer the phrase "coordinated disclosure."
The folks who originally coined the term "responsible disclosure" actually recommend using the term "coordinated disclosure" instead, because "responsible disclosure" implies that doing anything other than that process is irresponsible, and I don't think that's quite true. In any case, for coordinated disclosure you typically report to the supplier, and I'm a fan of coordinated disclosure with a time limit. In other words, not just reporting, but saying: hey, if you don't fix this within some reasonable time, I'm going to tell the world. Now, unfortunately, there are still a lot of, I'm not sure I should call them bad actors, but poorly behaved suppliers, who will instead threaten you with lawsuits if you expose to the world the fact that they have a vulnerability. But you know what? Too bad. You wrote the software, you included a vulnerability in it; it's your job to fix it. And if you won't fix it, then I think the public has a right to know that the software they were thinking about using is vulnerable, and that maybe they shouldn't be using it. If you don't like that answer, then please go fix it. Ideally, write your software to be secure in the first place. So, what's the best way to do disclosure for open source software? Well, the number one thing is actually not your problem but the open source project's problem, and that is that the project should tell everybody how to report vulnerabilities to it. That is one of the CII Best Practices criteria. I'll also note that it is one of the most commonly missed criteria: when a project doesn't get a passing badge immediately, one of the most common reasons is that they haven't told anybody how to report vulnerabilities to the project. The other reasons for missing a badge I understand, but this one you really need to do.
So the best way is to go to the project page, find where they've told you how to report, and then go report it. Okay, all right, that's a little bit of a cheat. What happens if I go to the project and, in fact, they haven't told me how to report a vulnerability? Unfortunately, that's a common case, for a lot of open source and closed source software alike. So what do you do then? Well, you go find a way to contact them. For example, if they're on GitHub, you can probably at least open an issue and say: hey, I think I may have found a vulnerability in your software, please contact me (give some contact info), and let's talk about this. In most cases, I recommend coordinated disclosure. In other words, don't immediately reveal publicly exactly what the vulnerability is. The reason is that attackers will look for that information and will start exploiting people who are using that software right away, and that's not fair, because the users of that software had no way to know there was a vulnerability, and it typically takes some time for the supplier to create a proper update and fix it. So it's often best to do this quietly. Now, when is that not true? Well, for example, if all the attackers already know about the vulnerability, it may be that the only organization that doesn't know about it is the project. Then the secrecy isn't so important; what's important is getting it fixed well and quickly. But really, my advice right now is: find a way, ideally using whatever process they have for reporting. If they don't have one, quietly ask them to create one, and coordinate so that you can report the vulnerability to them. Give them a time limit. You can negotiate the time limit; there are various discussions on what it should be, and a lot of suppliers think the limits should be really long.
A lot of people who are potentially vulnerable, who are possibly impacted by these vulnerabilities, want them very short. But a lot of these things can be fixed in a week or two; 45 days on the outside, though some will give up to 90 days. Basically, you want a deadline, because otherwise it's easy to just let these things go on and on. And the problem with no deadline, with things just going on and on, is that eventually attackers are going to start exploiting it. Attackers may find it independently; heck, they may have already found it independently and be exploiting it already. So once a vulnerability is found, if it matters to anybody, it needs to get priority attention and get fixed. Now, if it's of low importance, in other words something that almost never matters, but in some really quirky, weird edge case that most people wouldn't hit it could be exploited (say it's just a denial of service in an incredibly weird edge case, not revealing any data), maybe that's not so important, and maybe you can give more time for that. But for super important vulnerabilities, you want to get them worked off quickly. Okay, I'm hoping that answers the question. I'll note that the OpenSSF actually has another group, the Vulnerability Disclosure Working Group, where they're discussing things like this, and there are also other groups, like FIRST and so on, that have information about how to report vulnerabilities and about vulnerability processes in general. Okay, next question: I have a project where a lot of Dependabot updates have stacked up. Some of the updates are breaking the existing code. How do you recommend updating the code? Oh my goodness. Well, I suppose I could just say "carefully" and leave it at that, but that's not really fair. All right.
Well, first of all, you have my sympathy, because I've lived that joy many a time. So let me quickly get on a hobby horse; I mentioned this in my presentation earlier. If you are developing software for use by others, you should be ashamed if your API changes in ways that make it hard for your users to update. Oh my goodness, I misspelled the old interface? Great: add the new, correctly spelled one and keep the old one around. "But my API would have five interfaces instead of four!" So what? Why don't you think about the users for a change? From the user's point of view, anything that is a breaking change is a big deal, even if they "just" have to rename something or restructure some code. Most people have other things to do with their lives than deal with your API nonsense. So please, please, please try to make it so that software you develop for others doesn't break, or at least doesn't break often. Try to avoid it. Give people lots of time to update. Sure, there are cases where you really do have to make a breaking change; give warning, and make it as easy as possible to update. The Python 3 folks thought, oh hey, the Python 2-to-3 transition will be easy. It took them years and years and years, and finally, after a lot of ignoring the users, they ended up changing Python 3 to make it easier to update from 2. You don't need to do that. You don't need to abuse your users. Be good to them, not malicious. All right, but you're unfortunately in the other circumstance: some of the updates broke your code. This is a problem. How do I recommend updating? In general: slowly, incrementally, one at a time if you can. Sometimes that doesn't work, because you've got to upgrade things as a unit.
But in many, many cases, what you want to do is upgrade very slowly, and after every upgrade you run your very thorough automated test suite. Oh wait, you say, I don't have a good automated test suite? Well, there's your problem. Go get yourself an automated test suite, okay? It's not rocket science. There are lots of test frameworks out there; pick one and write a bunch of tests. And once you have an automated test suite, every time you update, you rerun your test suite. Is it good? Is it not good? You run an update. Ah, man, everything broke. Or maybe it doesn't even compile. All right, well, at least you know what to fix. You want to make each of those changes as small as you can, and then use your automated test suite to make sure you don't end up in the situation where nothing complains immediately when you update, but the software doesn't really work and do what it's supposed to do. An automated test suite lets you do little tiny increments: update over and over and over again. Package managers also help, because they make it easy to say: update to this version. Do it one package at a time. And typically you want to update incrementally not just in terms of a small number of packages at a time, but in small version increments. If you're using version 1 and version 20 is out now, you likely want to update one to two to three to four, or maybe one to five to eight to twelve, something like that, instead of trying one big jump from 1 to 20. Oftentimes projects will have things like backwards-compatibility layers and warnings that tell you: hey, wait a minute, you need to change this to the new way.
So if you have to change something, and you change more slowly, changing the version numbers in a more incremental way, you're more likely to get that kind of help in getting the software ready for the current version. And again, automated test suites. The great thing about an automated test suite is that once you have one, instead of saying "man, I've got to change it all at once, because I don't want to rerun my tests every time": rerun your tests every time. Done. Automated. And then you greatly, greatly reduce your risk of problems down the road. And frankly, this is probably obvious, but I'll say it anyway: you may need to prioritize. If you've got a lot of updates stacked up, some are probably more important than others, or easier than others. Figure out what to prioritize, and prioritize those. Okay, I think that's enough for that. All right: I'm trying to understand how open source software makes money. Can you help me understand? Frankly, we could have a whole presentation on just that; in fact, people have given whole long presentations on just that. So I'm not going to be able to do your question full justice, but let me attempt a quick answer that will hopefully at least give you a hand, and then by all means seek out longer presentations that really focus on it. So, there are actually a lot of ways people make money, and in a lot of cases people develop open source software for reasons other than making money, in the way you're thinking about it anyway. Let me start by answering the question directly asked: how do people make money with open source software? Well, first of all, for a lot of open source projects, companies sell support. Hey, you can use the software for free. And a lot of companies talk about this in terms of funnels.
You can't sell services if nobody uses your products, so your first step is: how the heck do you get customers in the door? I'll give them the product for free or at a low price, and once they're using the software and comfortable with it, wait a minute, they want to do more with it: sell support. Some companies have an open source core, and then various editions that are proprietary, closed source, which they charge extra for. Another thing, to be fair: although open source software is often no-cost, there's no requirement for that. You'll notice that wasn't part of the definition I gave. People have sold, and continue to sell, open source software. Now, all of that assumes you're trying to make money like a typical proprietary or closed source software vendor. In fact, for a lot of folks, that's not the primary reason. For a lot of organizations, participating in open source software is a money thing, but it's not about making money; it's cost avoidance. Hopefully you're familiar with how profit is calculated: income minus outgo, right? Revenue minus expenses. So basically, if you can reduce what you spend, you end up with more profit. Many, many organizations use and support open source software because it's much cheaper than trying to develop the software themselves. Historically, if you look back starting from the late 60s and early 70s, there were a number of suppliers selling various versions of Unix, and each of them would take some software, modify it, and try to sell it. The reality was that it turned out to be really, really costly to do that. But if you can take a vast amount of open source software and use it directly, or use it to sell something else, or use it simply to support your infrastructure, it's a lot cheaper than trying to build it all in-house.
Even if you have the capabilities, there's a lost opportunity if you use your resources to do that when, hey, here's some open source software that either does what we need, or almost does what we need and we can make some small improvements to make it better, and now we can use it for our purposes. Now, once an organization makes an improvement to open source software, it has a decision to make: keep that improvement in-house, or release it back? You might think keeping it in-house is the obvious answer, but actually, no. 90% of the cost of software is actually in sustainment and maintenance, not in the original development. So for most folks, it's a lot cheaper to get that improvement back into the main open source project, because then they don't have to keep figuring out how to maintain it, and they don't have to keep trying to make it work with the other improvements being made to that software over time. As a pure economic matter, it is much cheaper for me (reducing my expenses) to take open source software, improve it, and get those improvements back to the project. Now I have something far less expensive, and possibly more functional, than what I might have built myself. So for many, many folks, it's not about making money; it's cost avoidance. And I should honestly note a third reason, which is that there are a lot of people who just like to write software. It's fun. In a world where everybody seems to think the only important thing is making money, we sometimes forget that humans are humans, and humans are awesome. Humans like being creative and doing things like creating software. We should be grateful to those folks and celebrate them, because they make some of the most interesting and useful software in the world.
So those are three different reasons. We could delve in further, but hopefully that at least answers the question. All right: many companies prefer to use or reuse open source software but don't want to share anything back; in this case, how is the open source fundamental achieved? Okay, I don't exactly understand that question, so I'll answer what I think you meant and hope I got it. This is something called the free rider problem. But the great thing in the open source world is that if somebody takes some open source software, uses it, and contributes nothing back, it also costs the open source project nothing. And it turns out that a small percentage (it depends on the software, but typically maybe one percent, maybe two) of the people who use the software turn around and contribute. That's okay, as long as you have a massive number of users. If you only have 100 users and only 1% provide any support back, you only have one person giving any help; that's a problem. If you have millions of people using the software, it's not a problem. The Linux kernel is available for anybody to use, and not everybody contributes back, yet it literally has thousands of developers and an incredible release cadence; same for Kubernetes. So although it can be a problem, as long as you've got a vast number of users, the fact that many users don't contribute back is okay, as long as some are willing to work with the project and contribute back. Large numbers help a lot there. Should we take open source software and make changes for our needs without disclosing them to the public? Ah, I kind of covered that earlier. You can do that with any code, and believe it or not, you can even do that with GPL code; that is perfectly legal.
Whether or not it's wise to do that is a different question. If your needs are so weird and peculiar that no one else is ever likely to do anything like that, maybe that's fine. But while it's legal to do that, that doesn't make it wise. So let's talk about why you would release your changes. First of all, if you're an academic, you might say, well, I changed the code, but nobody would want to use my ratty code; it's not really intended for production use. Yes, but you should still release it. Why? Because we have a real problem in science, something called the reproducibility crisis. There's an incredible number of claims presented as science where, when people attempt to reproduce the results, we find out that in fact they don't hold up. And that's a big problem in science today, in some areas of science much more than others. So if you're an academic and you want to make a claim, and your claim is somehow based on software, you should be releasing that software. Not because you think the rest of the world is going to use it as production code, but so that you can show exactly how you got the results. What was your analysis? Modern academic papers just don't have the space to record all the details that are necessary. And there have been some serious problems. There was actually a case in 2001 where there was a major algorithmic breakthrough in something called the satisfiability problem. For quite some time, although there was a paper that described this algorithmic breakthrough, no one else could reproduce the results. Finally, the code was made public, and in this particular case it turned out the result was true, but the paper had failed to make some important details clear. And really, that's almost inevitable; you just can't make a paper say everything. By releasing the code, you resolve those ambiguities. Now, what happens if it's production code and you're using it in your environment?
Well, you can keep your changes private, but if you do that, every time that project makes an update, you're going to have to figure out how to merge your improvements with theirs. And they didn't make their changes to work with yours, because they don't know what yours are. Very, very quickly, you end up paying an incredible amount of time and effort and money trying to take that code and merge it with the other changes that were made. And the more active that other open source software project is, the harder and more expensive it is. In general, this is a real problem. Whereas if you contribute it back, you can avoid that 90%, well, that huge cost of sustainment. And of course, once companies and individuals start doing that, the project starts moving faster, and it becomes even more important for other people to contribute back to make sure their improvements get in. So while it's legal, that doesn't make it wise. And you should be trying to make decisions that are wise, not just legal. Okay. Alrighty, let's see here. I'm going to go back to the chat. Wow, there are a lot of questions here. So let me attempt; I have limited time, but let me try to get to them. David, there is one question, really. I think there are a couple of questions. Most of them are thanking you for the presentation. If I can orient you, there is one question from Ghosh. Okay, the CVE JSON 5.0 schema. Well, I don't know who the "we" is. I mean, okay. The CVE, call it project, process, community, whatever. There is, of course, JSON 5.0. But we need more than that. We need to do a better job of connecting CVE reports to the software that's vulnerable, so that that can be determined completely automatically in almost all cases. So, yes, there are various improvements as they update their JSON, but in my mind, that's the more important issue. Yeah. "C assumes the developers can test it."
Well, and to be fair, the problems that they had at the time were quite different. They were trying to make very limited equipment do a whole lot. You know, modern machines are thousands of times faster and have thousands of times more memory than the equipment they were working with initially. So, different circumstances, different problems. Let's see here. Yes, the slides will become available. Okay: no matter the language, we do see plenty more vulnerabilities in the application stack, whereas the lower stack, the system stack, might have a bigger blast radius. I think that's kind of oversimplifying, but there's some truth to it. I would characterize it slightly differently, although maybe it's a subtle distinction. The system tier, because it's used by so many completely different applications, gets a lot of focus from a lot of different folks. And so a number of vulnerabilities get squeezed out of the system tier simply because more people are using those components than any one specific application stack. And of course, a lot of the application stack is faster moving, changing, and when you make a lot of changes in a hurry, you're more likely to make mistakes. That's true for software, just as for anything else. But I think the reality is that we need to worry about security at all layers. Okay, and there are basic fundamental principles that apply no matter what, and you need to know them. For example, I mentioned memory safety. For many, many languages, your solution is: use a language that provides memory safety, and you're done. For some folks, though, you use a language that's usually memory safe but lets you disable those protections. Limit where you disable them, and when you do, you've got to apply the knowledge that's specific to memory safety issues, because in the unsafe parts, that responsibility is yours.
And of course, if you're programming in C and C++, there is no such safety net. It's always unsafe, and therefore you have to write with extra care to make sure you don't have any of those problems. But a vast amount of this is just true across the board, and you learn it. All right. Okay, humans are awesome. The people who write the code that all the rest of us depend on are like any other infrastructure workers: I'm grateful to those many unsung heroes who make our roads, build our buildings, and, oh yes, write the software that we all depend on. So I'm grateful to all of them, including many, many of you who develop the software the rest of us depend on. But please, please: I've tried to at least skim that surface. Take a course; I pointed to a free one. If you're managing or involved in open source projects, try to earn badges, and get some tools into your CI pipeline. While those things don't guarantee everything, they really make a tremendous difference. All right. Wow, am I out of questions? I can't believe that. Surely people have some more questions for me, because answering questions is kind of the point of this. Yeah, you probably have time for maybe one or two more questions. One or two more questions. All right. So somebody quickly type in a question. I can't believe I managed to get through them all; there were a number of interesting questions here. Okay. When I was creating this presentation, I had originally thought about diving in, for example, into the many different common kinds of vulnerabilities, like the ones mentioned right here. But I decided not to. There is always a danger, first of all, of thinking that if you know the most common kinds, you know all there is to know about security, and that's not true. But in 45 minutes, I just wouldn't have had time to get into it enough. It's not that I need a lot of time.
It's just that 45 minutes isn't quite enough. And so what I really want to do instead is point you to these top lists, and point you to courses like the one I just mentioned, that kind of walk through each of the top ones: how do you counter them? Counter them in terms of things like, write code this way instead of that way, or use this kind of approach instead of that kind of approach. None of them are terribly complex mentally, particularly if you're just focused on: what am I trying to avoid, and how do I fix it? You don't need to know the details of memory management to know how to avoid buffer overflows. Don't try to read and write outside the buffers, usually arrays, that you're supposed to be using. Check; make sure that you're within them. It's not complicated, at least mentally. It can be complicated to do, because you have to keep doing it over and over again. But if you're using C or C++ or anything like that, I'd say that's the price you pay. You are getting all the control, but now you have to take responsibility for the awesome level of control you receive. If you're not comfortable with that, if that's not what you wanted, maybe those were not the right design decisions for what you have planned. Next question: what tools or guidelines exist to assist, GDB and so on? Well, in fact, I've already gone through a number of those. There are a number of guidelines; I talked through the course, and there are lists of common vulnerabilities. The course itself mentions specific things. So, you know, in C: don't use strcpy if you don't have to. Using strncpy or snprintf or some other function like that, which provides automated protection, basically replaces the dangerous functions with less dangerous ones.
And the course goes through that: instead of this function, do this; or consider using this instead. GDB is not really going to be so much of your friend here. GDB can help if you have a particular input that you know is a problem, but the challenge is finding the inputs. Fuzzers can sometimes help you find inputs that will cause trouble, in which case, if you use a fuzzer and it finds a vulnerability, then yes, you could use GDB to help you track it down. But it's much better to know ahead of time where the common problems are, what kinds of function calls or approaches you should use instead, and then use tools to detect the remaining problems and fix those. That's really the way to go if you're using C. And I use C a lot myself, so it's not like you can't write secure software in C. You just have to be stinking careful. If you're writing in C, also go read the standard, in particular the annex on undefined behavior. If you have not read it, you've got to. There are a lot of things that are undefined that a lot of people who write in C have no idea about, and they're just foot guns waiting for you. So hopefully that answers that question. "David, GCC seems to be including some CWE detection lately with their -fanalyzer option, which I'm finding useful to run." Right. As far as I'm concerned, that's just GCC, and Clang also has some tools. Using GCC and Clang to detect some vulnerabilities: it's a tool, just like your other tools. Turn on as many checks as you can. By all means, turn on the mechanisms in GCC to detect vulnerabilities. On the other hand, don't expect that GCC or Clang or any other tool will find all the vulnerabilities. You want to learn how to avoid them in the first place, and then you want to use tools to help you find the ones that got through. Right. Absolutely. Thank you. Very good. Very good. Well done. Yes, you did it. We are right at time, everyone.
Big thank you to David and to Shua for their time today, and thank you all for participating and asking questions. As a reminder, this recording will be on the Linux Foundation YouTube page later today, and a copy of the presentation slides will be added to the Linux Foundation website. We hope you all join us for some future mentorship sessions. Have a wonderful day, everybody. Thank you so much, everyone.