Wonderful. We can go ahead and get started. I'm sure folks will trickle in, but thank you all so much for coming. My name is Amir Montazery, and I'm the Managing Director of OSTIF, the Open Source Technology Improvement Fund. Today I'm going to talk about the work we do managing audits for open source projects, how that ties into finding and fixing vulnerabilities, and share some lessons learned from doing this for the last seven years. I'm going to start with a question: who here has seen, or is familiar with, the research paper "Zero Days, Thousands of Nights"? Anyone, by show of hands? Have you seen this before? No? Okay, I wasn't expecting that. This is a fantastic research paper, and I highly recommend giving it a look. It did some really thorough analysis on, as the title suggests, the life and times of zero-day vulnerabilities and their exploits: what is the nature of these vulnerabilities, what are their common causes, how can they be prevented, and so forth. One of its main findings was the average lifetime of a zero-day vulnerability in the wild. Does anyone want to take a guess at what that is, at least according to this research? Do we have any guesses? Jonathan? A few years. A few years, okay. Hart? Years. Years. Any other guesses? 200 weeks, which actually came up in CRob's talk. Yes, that was a different research paper. What they found is that a zero-day vulnerability lasts about 6.9 years in the wild before it is discovered and mitigated. That's about 2,500 days, which is, I think, where the paper gets its name, thousands of nights, and which is probably older than these kids here, whose image I borrowed from a stock photo. That should shock you all.
And even though this was a relatively recent research paper, I believe other research is suggesting around the same thing: it can very much take years before these zero-day vulnerabilities are discovered and mitigated. Also from that research paper was a very fine point: finding vulnerabilities often requires more in-depth auditing, logic review, and source code analysis in order to go several layers deep. The way I look at it, using this image as a reference, the higher levels are the surface-level bugs or issues that can be found relatively easily through static analysis or some type of tooling. But what the research shows is that finding the bigger problems takes more: more digging, and getting deep into the source code is an excellent way to find the really deep-seated zero-days and other vulnerabilities you read about in the news. Another interesting fact comes from a different research paper, "Threats, Risks, and Mitigations in the Open Source Ecosystem": it's an approximation, but of the more than 2 million open-source components out there at this time, only a small portion have been looked at from a security perspective. And typically what we mean by that is audits, reviews, independent verification of these projects. Taking those two facts into consideration is why we started OSTIF, the Open Source Technology Improvement Fund. We found that the bugs and vulnerabilities are out there; they certainly exist. And to find them, you have to go deep and audit the code. Doing that with somebody, or with a team, that is really well-versed in the area they're auditing, not just a security generalist, but people and teams really well-suited to review this stuff, has resulted in really good outcomes.
So, as I briefly mentioned yesterday, we're at over 40 and counting severe or high-risk vulnerabilities found and fixed. For a little more context, those are bugs with a CVSS score of 7.0 or higher. For the medium- and low-risk vulnerabilities, which are still security bugs and security issues, we've found about 250 and counting, and we've facilitated or coordinated over 10,000 hours of security work. Folks love logo walls, so this is just a brief one, a couple of the main organizations I could think of that we at OSTIF have worked with, directly or indirectly, over the last seven years to do this kind of work at scale. One of the reasons this has been so successful is that, based on our experience, you need more than project engagement and involvement. You have the project itself, the project maintainers. You have the audit team or audit experts who will go in and review the code. But what you really need is that third layer right there: effectively a champion, an independent organization to champion the process from start to finish. That's because, and we'll go into this more in some of the later slides, when you think of just the traditional two players, the auditee, meaning the project maintainers in question, and the auditor, the people reviewing the code, as in a traditional IT audit of a system, you typically have a very focused engagement. But in open source, one of its benefits is essentially decentralization, and that also poses a lot of challenges, which is why we've been so successful as the independent organization that moves the engagement forward and gets the work done. So, a couple of lessons learned from doing this over the last seven years. I firmly believe that Zero Days was right.
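As an aside, the 7.0 cutoff mentioned above comes from the qualitative severity bands in the CVSS v3.x specification. A minimal Go sketch of that mapping (the function is mine for illustration, not from any OSTIF tooling):

```go
package main

import "fmt"

// severity maps a CVSS v3.x base score to its qualitative rating, using
// the bands from the CVSS v3.1 specification:
// 0.0 None, 0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical.
func severity(score float64) string {
	switch {
	case score <= 0:
		return "None"
	case score < 4.0:
		return "Low"
	case score < 7.0:
		return "Medium"
	case score < 9.0:
		return "High"
	default:
		return "Critical"
	}
}

func main() {
	// Anything at 7.0 or above falls into the "high-risk" bucket from the talk.
	fmt.Println(severity(7.5), severity(5.3))
}
```

So the "40 and counting" figure counts only findings that land in the High or Critical bands of this scale.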
In order to find those bugs, those zero-day vulnerabilities, those significant findings similar to your Heartbleeds and your Log4Shells, you need to do in-depth source code auditing and manual reviews. Another lesson learned over the last seven years is that part of the value we provide as that independent organization is a platform for transparency. All of our work, after fixes are made, is published for anyone and everyone to review. That provides a lot of benefits: when there is a public aspect to the work, everyone tends to put their best foot forward. When I say we do this work in the open, I don't mean that the auditing and issue resolution necessarily happen in public, but the process is public and publicized, so we can give credit where it's due, and auditors and teams who are really passionate about finding these types of vulnerabilities and making these fixes have something they can tie their names to, to demonstrate this kind of work being done. As I mentioned, a strong presence to champion the process results in better outcomes, because, as I briefly mentioned yesterday and as a lot of you are probably painfully familiar with, open source maintainers are busy. They have a lot to do, and most of what keeps them busy is keeping the lights on, so to speak. Adding any kind of extra burden or work onto them is asking a lot, so supporting the project maintainers throughout the process is another benefit we provide as an independent third party. And lastly, talking about an ounce of prevention: proactively going out and finding and fixing these bugs is, based on our experience, the most effective and efficient approach to securing open source software. So I'd love to share with you a case study of some work we published about two weeks ago.
It was for CRI-O, a CNCF project that we facilitated a security engagement and audit for. There's the link if you would like to review the publication and the report; as I mentioned, these are all published for everyone to view. To give a little idea of the timeline and how the process went: it was around November of 2021 when I was effectively introduced to the two maintainers of the CRI-O project. Engaging them from the very beginning is extremely important, so making those introductions and building that rapport early on is really the first and most important step. Then over December and January, going a little into February, since it's end of year and holiday time, we had a couple of meetings to discuss security needs with the project maintainers and the different stakeholders of CRI-O. That actually involved multiple discussions, because we really want to get a good understanding of the project's security needs, and we front-load that scoping by building the relationship with the project: understanding their security needs, talking about potential gaps in testing, and looking for opportunities to scope the audit to have the most positive effect on the project's security. Then from March to May of this year, we finalized the scope, sourced the audit team, and launched the review. I'll go into more detail about what that entails, but it's a very intensive process that requires a lot of coordination. It's also one of those processes that, when front-loaded and focused on from the get-go, typically results in really good outcomes, because we understand the project and its needs and can tailor the audit review to them. And then in June, about two weeks ago, the review was completed, we published the results, and the security improvements were made.
What are those improvements, you ask? Great question. The main result of this review was a high-severity denial-of-service issue related to effectively knocking clusters offline. Interestingly enough, this issue also affected containerd, which is a similar project to CRI-O and very widely used in the container space. That issue, which I'll talk about a little later, was proactively fixed in both CRI-O and containerd. We coordinated among the teams to make the fix, validate it, and issue security advisories. Another result of this engagement was the implementation of 14 new fuzzers targeting the CRI-O code, along with integrating the project into OSS-Fuzz. This is important because it gives the project tools, and improves its tooling, to maintain and sustain its security posture over time. We're going to talk a little about the perception of security audits, and one point we can address right now, because it relates to implementing fuzzers and improving tooling, is that audits, at least when done the way we've been doing them, are not just a one-and-done thing where we dump bug reports on projects and say, hey, here's all the stuff you need to fix. We not only work with the project maintainers to make all of those fixes; we improve their tooling to increase their security posture over time and make it more sustainable. Five other findings, ranging from low to medium severity, were also found, fixed, and validated as part of the review.
And lastly, something we've been doing a lot more recently with the increased focus on supply chain security: as part of the security engagement, we did a supply chain security assessment against SLSA, giving the project a roadmap and opportunities to increase its SLSA compliance and make its entire supply chain more secure. Some lessons learned from this engagement. Prior to that November meeting, we learned that the CRI-O team was, in fact, very interested in getting a security audit. The CNCF has a great policy where, in order for projects to reach graduated status, they need an independent third-party security audit. So that was the driving policy behind why they were keen on getting one. From those conversations, we also learned that they had a lot of trouble navigating the waters. Again, these folks are very busy; they only have so much extra time to go out and find auditors and do everything that's required, which I'll go into in a second, to procure good audit resources. That is really the main value of OSTIF: we handle that whole process start to finish. We met with the CRI-O team in November, and within about six months the security engagement was done end to end: all of the findings were fixed, and we put everything into a nice report and published it, all within six months. Another lesson learned from this project, which has fed into what has made OSTIF what it is, is that thanks to our strong due diligence, scoping, and staffing up front, we were able to get a really well-suited team, one that could not only find the problems and audit the main risk areas, but also fix those problems and implement tools that will help the project for a long time to come.
With those lessons learned, though, come some hurdles that we've experienced and seen a lot in this space. The three main ones that come up pretty consistently are the level of coordination required to do this kind of work, the funding responsibility, and the industry perception of security audits as a whole. The first common hurdle, the level of coordination required, goes back to that lesson learned from CRI-O. Everything that needs to happen for a successful audit is a lot to ask of an open-source project maintainer: effectively finding the right people, putting the work into a document that can be sent to different audit teams, ingesting proposals, reviewing all of those proposals, finding the best-suited team to do the work, plus all of the introductory meetings, the sync meetings, the pre-closing meeting, the closing meeting. It's a lot to ask of an individual project, or even an individual audit shop. And the reason for the graphic is that it's a lot like herding cats: we have maintainers from all around the world, different time zones, different schedules, and sometimes different goals. It's a significant amount of project management and coordination. The next common hurdle: CRob and Anbertuccio were talking in their talk about two hours ago about the tragedy-of-the-commons problem within open source, and the same question comes up when you ask who is actually responsible for funding this kind of work. Who here is familiar with credit card roulette? Has anyone ever heard of that or played it? Okay, we've got a couple of hands.
The reason for the graphic with the credit cards: it's the game where you go out to eat with a bunch of friends, you all put your cards in the middle, and whoever's card is chosen is responsible for footing the entire bill. It's a fun game if you have friends close enough to play it with, but the point is that it really begs the question: who is responsible for funding this work? Is it the individual projects? Corporate backers? Governments? Foundations? A little of column A, a little of column B? It's a great question, and there is no firm answer, but my opinion is that the stakeholders that benefit most directly from open source projects should take on the brunt of the responsibility for funding this type of work. Thankfully, that does seem to be the case: we have cross-industry collaboration efforts like the OpenSSF and the Linux Foundation bringing all these different companies and organizations together and establishing that, hey, this is our collective responsibility and we should fund some of this work. But it still remains a key question and a key hurdle to doing more security audits, because folks ask who is responsible for funding it, and it's a really hard question to answer. Again, though, these cross-industry collaborative efforts like the OpenSSF do seem to be a very good avenue for supporting this kind of work at a larger scale. And lastly, these are just some things I've heard, just some opinions, but another common hurdle seems to be the industry perception of what security audits really are, what they do, and the utility they provide. A lot of focus seems to be on reactive security.
So, asking questions like: how do we respond to vulnerabilities after they've wreaked havoc, or after our systems have been compromised? How do we react to threats faster? Which is completely understandable from a human nature standpoint, right? When something is real to us, when something happens to us and we experience it, we react to it, as opposed to thinking proactively about preventing certain outcomes in the future. Another common perception is that auditing projects is not scalable. I will agree that it is probably the most manual of the different security tools and methods for improving project security, and it likely can't be scaled the way a traditional scanning tool can. That being said, as I mentioned, we at OSTIF work with about 15 different security teams from around the world and an additional 100 security researchers, and I know from working with them over the years that they are very hungry for more work. They are busy, because it's a very in-demand skill and not a lot of people in the world can do it, but, going back to the funding question, they are eager to do more. Between those 15 teams, at the rate we are currently producing audits, about 25 to 50 per year, that number could easily increase two to four times with dedicated funding. Another industry perception is that auditing projects is not effective. My response to that is simply: nope. We personally have a track record showing that it very much is, and research such as Zero Days, Thousands of Nights suggests otherwise as well. Another common one is that auditing projects is expensive.
To that I say: would you rather spend $70,000, which is about the median cost of a normal-sized security audit or engagement, or $700 million, which is what Equifax paid as a fine for not maintaining their software? I think that speaks for itself: proactively spending a small amount of money will very likely prevent a large amount of money being spent to fix the problems later. What else do I have here? Oh yes, a couple of conclusions. Security audits are a must if the goal is to find and fix vulnerabilities and secure open source projects. They are an extremely valuable method and tool, especially when done to a high level of quality, for finding these bugs and fixing them. Security audits can be done more effectively and efficiently with a third party, an independent organization, championing the process. And lastly, more funding into the audit space will make the process more efficient and effective; it's one of those things where more funding makes the cost per project go down, and over time that effectively pays for itself. A little quote I have here from Derek Zimmer, the Executive Director of OSTIF and its founder along with myself: it's amazing that we started this project with a list of issues on a literal sheet of paper, brainstorming and identifying these problems, and we're now a worldwide coalition of people and organizations working together to improve the security of open source software. And that is actually my last slide. So with the remaining time, which I believe is eight minutes... oh, awesome, twelve... I'd love to have a discussion with you all if you have any questions. If there are any questions online, too, I'd love to talk about those. Thank you all so much for coming; I know it's late in the day and everyone's a little tired, but I really appreciate you coming out.
And I'd love to hear any questions that you have. Yes, please. I'm speaking from the CNCF TAG Security perspective. For graduation, you do an audit and then you kind of go along your way, right? It's a point-in-time kind of thing. And we get a lot of feedback, because we also do security assessments, which are not quite audits, but more like looking at security posture. One of the things we're trying to figure out is what recommendations we can make around audits that aren't only a one-time thing. A lot of projects, and we ourselves, are trying to figure out a good recommendation for after an audit: when should you follow up, and how? A lot of projects are not willing to do a full-blown audit again and rehash everything. So how do you recommend projects go about full audits? What is the frequency, and how should they approach it? That's a great question; a couple of thoughts come to mind. Previous to OSTIF, I was an IT auditor, and we actually draw a lot of best practices from that space. One thing we did very commonly is what's called a risk-based audit plan: the really high-risk areas we would audit yearly, the lower-risk areas every three years, and the lowest-risk areas every five years. I think something like that could be incorporated into foundations or stewards of open-source software, where the most high-risk projects additionally undergo what's called continuous monitoring, or continuous auditing.
Drawing from that experience, I would say that for really high-risk projects, putting time and effort into fuzzing and CI/CD pipeline improvements, things like OSS-Fuzz, for example, that can effectively review the software continuously, would be extremely helpful, and then the really high-risk areas can be audited on a risk-based scale, or schedule. And the beauty is that projects that get audited multiple times benefit even more. One reason audits are extremely effective is that an audit is almost like a unit of security improvement: a very focused effort, where for three weeks we are going to focus on this, review all of it, and make improvements. I feel like that really can't be replaced by anything. But as a project undergoes more audits, it becomes easier and easier, both for the project itself, because they're familiar with the process and how it works, and for the auditors, who can use previous audit reports. And if you work with the same audit team, they've already gotten over that initial learning curve of understanding the project and its risk areas, so the audit can be much more effective, and less costly, for that reason in particular. So to answer your question, I would say focusing on continuous monitoring plus a risk-based audit schedule would probably be the best way to do it in a more formulaic or methodical way. Yeah. Brandon asked if there are any examples of that in action, and a couple come to mind. Which project was that?
We're doing so many now, I can't remember, but there's one project in particular, and we're actually just about to release the results. In 2020, I believe, they had undergone an independent security audit, and that audit report did a really fine job going over what was covered and what wasn't, or what should be covered, because they didn't get around to it the first time. We used that audit report to help scope the current one, so that we could focus on the areas that didn't get as much love the first time and make sure they were reviewed and validated. Thank you for the question. Do we have any other questions? Elias, I believe Jillian's going to bring a microphone for you. Hello, could you please explain the steps of your audit process? How do you proceed with it? Sure, sure. I can go over it a little from this slide here. What inspired me to write this was thinking about the typical process we go through. Engaging with the project and identifying contacts is an extremely important step, and sometimes it's very hard. See my point there: yay, mailing lists. There have been times when projects just refer us to their generic mailing list and say someone from the list will get back to you. How many times has that happened? Zero. So even that first step, finding and identifying the right contacts, can be a lot of work. Then there's working with the project to understand their security needs. This is a very manual, very personal process, so we meet with them one on one. From this I've learned that no two projects are the same, no two projects have the same needs, and to be more effective, the audit really needs to be tailored to that project and its security needs. Which is why I say here, sorry, no cookie-cutter solutions. If you really want the audit to be effective, you have to tailor it to the project and its needs.
Again, project maintainers are busy. They're doing a lot of other things; they typically have a full-time job, their full-time job is the equivalent of three jobs, and they're doing ten other things on top of that. Then there's the step of collaborating on scoping and building an RFP: capturing those needs in a way that security experts can understand and use to build their own proposals for how they're going to secure the project. We call it an RFP, a request for proposal, which is very common. Then there's getting that RFP out to audit teams and experts: again, a lot of coordination, a lot of back-and-forth emailing, a lot of friendly nudges like, hey, did you get that email I sent you two weeks ago? Can you take a look? And then ingesting those proposals. To be effective, you don't want to just go to one generalist or one or two teams; you want a breadth of proposals. So: ingesting those proposals and analyzing them to determine which one will result in the best outcomes for the project. Facilitating introductions and sync meetings. I can't tell you how many times I've had to wake up at three in the morning to make a 9 a.m. London-time call, or a call with folks in China, or really anywhere in the world these maintainers can be. Then, as the audit is going on, there's news and findings communication and resolution. Again, it's not just that we dump a bug report on you at the very end; we report things as we find them and work together to figure out why these problems are coming up and how to fix them. And then coordinating report finalization and publication. For example, for some of the projects we're doing with Google or the OpenSSF, we want to coordinate publication.
So we have to work with at least three different teams, and then typically the security team likes to publish on their own blog, or the project publishes on theirs, so we really have to coordinate all of this. It's a lot of work, a lot of coordination, and, as I mentioned in the very first bullet there, audits require a significant amount of strategic management and coordination, which of course costs money. Does that answer your question? I was asking more about the tooling aspects. Are you building your own tools, for example, to tackle static analysis or dynamic analysis? Are you depending on some specific hardware to prove this is the bug, this is the fix, and how you fixed it, and so on? Okay, that's a good question. I don't have as many details on that, because I'm not really on the front lines of the fixes, but I can tell you that a big part of the scoping process is identifying what the project's current practices are and what they could be: seeing whether they do static analysis, whether they have fuzzing, whether they have a CI/CD pipeline, and documenting what they have, what they don't have, and the potential gaps, so that auditors who are well-versed in this can tailor what they propose to improve the project accordingly. So we have, thank you, one more minute, so I'll take one question in the back. Oh, was someone in front of you? I'm so sorry. Oh, okay, Asma, yes. I have a question, I think relating to the stuff we do as well: have you had any asks about codifying reviews, or codifying some kind of status for the reviews, to give people a way of ingesting that and determining some level of policy, like, okay, I trust this project because OSTIF decided it was blah, blah, blah? Yeah, that's a great question. It's something we have been thinking about, yes.
Generally, the way it has worked up until now is that we produce the report and let folks decide for themselves what they think of the project. Now, in my old audit days, yes, we would use terms like reasonable assurance, controls operating at a high level, and what have you, effectively producing a rating of some kind. Actually, Brian and I were just talking about this the other day, and I think it's a good idea, because as we do more of these, we want a way to codify it, as you said. But overall, because the process is so nuanced and no two projects are the same, the main focus so far has been on how we can improve a given project, and then putting that into a document for people to consume. I am out of time. Again, thank you all so much for coming today. If you have any questions, please find me; I'm very easy to contact, just my first name at ostif.org, or find me around for the rest of the day. So again, thank you so much.