Thank you so much for making it today. Really excited to talk with you all. The name of my talk is Managing Audits of Open Source Projects. And I'm Amir Montazery, Managing Director of the Open Source Technology Improvement Fund, or OSTIF. It's pronounced many different ways, but it's all the same organization. So I like to start off with this research paper, Zero Days, Thousands of Nights. Is anyone familiar with this research paper, by chance? OK. Couple nods, couple yeses, noes. It was excellent research done on the nature of vulnerabilities, one of the most thorough studies into the lifespan of vulnerabilities in different technology stacks. And one of the main conclusions from that paper was that the average zero day has a lifespan of about 6.9 years, which is over 2,500 days, hence the name Thousands of Nights. And what that research paper really solidified was that finding vulnerabilities often requires in-depth auditing, logic review, and source code analysis in order to go several layers deep. One of the reasons for this is that products that are more popular, or have been around longer, typically have more eyeballs looking at them, but mostly at the surface level. You could have some very basic reviews, but really finding those problems requires going deep into the code. And what I'm going to talk about more today is that an independent security audit, executed well or correctly, based on the experience we've had doing this for over seven years now, is a very effective tool for doing exactly this: finding those deep-seated vulnerabilities. I also like to refer to this research that came out, I believe, in 2020, called Threats, Risks, and Mitigations in the Open Source Ecosystem.
But the main takeaway from this research is that essentially almost no open source projects have been looked at from a security perspective, from that independent review perspective. And as you all know, as the ecosystem continues to grow and new projects come in, this number substantially grows, and that small portion of reviewed projects remains the same. So, to tell you a little bit about the Open Source Technology Improvement Fund, OSTIF: we were founded with the mission to improve the security posture of critical open source projects. And the main value add of the organization is facilitating those security audits end to end and implementing a lot of strong controls that lead to a high quality bar and strong cost controls. So for example, we have found and patched a little over 50 severe vulnerabilities, critical or high risk; these would be bugs with a CVSS score of seven or higher. We've found and patched over 300 notable security bugs across different ecosystems and have coordinated a little over 10,000 hours of security work. And one really cool metric: back in June, when I gave this talk in Austin, that number 50-plus was actually 40-plus. So just in the last couple of months, we've been able to continue finding and fixing these critical and high-risk vulnerabilities very consistently and in a repeatable fashion. This is a new slide that I added since the June talk, to talk a little bit more about the systematic approach that we take and the refined process that we use to execute audits to a high level of success. Number one, connecting with the project. This is absolutely critical: having that personal touch, engaging with project maintainers or contributors directly to really communicate our intentions, that we would like to collaborate and improve the project's security posture. Number two, understanding the needs of the project.
So really listening to what the projects tell us about where their needs are, which helps identify gaps in testing, identify pain points, and helps us with the next step, scoping and bidding, in which we collaborate with the project to build an RFP, a document that we send out to all the different teams that we work with. We work with over 30 security teams all around the world, all with varying levels of specialties and skills, so having that diverse skill set to draw from helps a lot too. Once we have connected a project to an audit team, there's typically what we call a pre-audit orientation. We'll connect the teams, get everybody in the same room, or the same virtual room, and prepare: talk about what the goals are, reiterate those intentions to collaborate on improving the project, and allow the audit team to hit the ground running when the audit starts. And then we have our audit kickoff, which is just a way to formally kick off the audit engagement. These audits really have three main objectives. The first would be threat modeling, which is an excellent exercise to do. I highly recommend projects look into threat modeling, or even do a threat model themselves internally, just as a great exercise to think about risks, to document those risks, to help guide the project when findings happen, as well as to define the attack surface of a project. Then there is the meat and potatoes, so to speak: the code review, in which code is reviewed for logic errors and vulnerabilities, as well as a tooling review. We very much intend for our audits to not just be a point-in-time improvement of the project, but to really help projects and give them tools so that they can continue to be secure and find new vulnerabilities.
So examples would be helping projects integrate into OSS-Fuzz, working with them on their CI/CD pipelines for continuous testing, and really any kind of tooling that can improve a project's security posture and, again, give them those tools to help them moving forward. And then the next step in the process is typically to collaborate on fixes and improvements, as opposed to what some folks think of as an audit, where you find all these issues and then just dump them onto a project. We work with teams that collaborate with the project contributors and maintainers on fixes, to report things as they are found and to work together on improving the software and fixing vulnerabilities. And then lastly, the audit report and publishing. Transparency plays a very valuable role in our process, and we typically finish an engagement with an audit report that is then published for the public to review, which really documents all of the improvements and serves as an excellent artifact for the work done to improve the security posture of a project. And I can't stress enough the value that this transparency brings. I think that was very much reinforced just now by Julia's talk about reducing the cost of open source via transparency; I would highly recommend checking it out if you didn't get a chance to earlier. And so, as I mentioned, we've been around for quite some time now and have put together a very strong coalition of experts, teams, and organizations. You'll notice a lot of these organizations are, oh, it's a little blurry, sorry about that, are Europe-based. We work with folks from all around the world to facilitate this community of security experts and people who care about security and open source. And so what it boils down to, in terms of managing an audit for open source projects, is that there are really three main pillars that lead to successful engagements. One is project engagement and involvement.
Again, we don't want to do this in a vacuum and then dump it onto projects. We understand very much that projects are typically very busy; they are stretched thin resource-wise, and they need all the help they can get. So engaging with them directly and getting them involved in the process is very important. Two, you need strong audit experts. As I mentioned, we work with a lot of different security teams, and a big benefit of that is that we are able to really fine-tune the skills these independent auditors bring to the table and essentially marry that with the security needs of the project. And three, a champion, an independent organization to manage and facilitate the process, is also very critical. Because, as you all likely know, open source contributors and maintainers tend to be distributed. They're around the world, in different time zones. They have different goals and different schedules. And so having that single entity to really help move the needle and get this work done effectively has helped quite a lot. So, a couple of lessons learned from managing audits for the last seven-plus years. One: Zero Days was right. Based on our experience, a security audit in which you find strong experts to really go those layers deep, to dig into the code looking for problems, typically will surface those bigger or more severe vulnerabilities, which we are then able to fix. Two, again, the value of transparency. As I mentioned earlier, and I'll reiterate: I think that being transparent about the process, about the intentions of what we are setting out to do, has been very important and has helped us be successful. And then a strong presence to champion the process, as I had just mentioned, typically does result in better outcomes. Because you can have audit teams, and you can have projects, but you need that independent presence to marry the two and move everything along.
And then lastly, an ounce of prevention. Everyone has surely heard the adage, but proactively finding and fixing vulnerabilities is an extremely effective and efficient approach to securing open source software. This is a quick case study, just to illustrate an example of our work. We did an audit of CRI-O, and this is a little bit of the timeline of how that went. As I mentioned earlier, it started with introductions, establishing rapport, meeting with the project representatives, the maintainers, the contributors, and then discussing the security needs with key stakeholders. Again, participating in these discussions, really understanding what their needs are and how we can help. And then we finalized the scope, sourced an audit team, and launched the security review. About a month or two later, the security review was complete, we had security improvements made, and we published a report. So as you can see, this process does take time, because it is typically a pretty manually intensive process, but I can say from experience that the results are almost always worth it. So, some results from that engagement. We found and fixed a high-severity denial-of-service issue, and interestingly enough, this vulnerability also affected a similar project, containerd. We were able to coordinate with all the teams to get everything patched in a very timely manner. Also, we implemented 14 fuzzers targeting the CRI-O code and integrated the project into OSS-Fuzz. So, going back to that point: we're not just helping a project at a point in time, we're also giving them tools, and building with them tools, that will help them in the long run. Also, there were five more findings and fixes ranging from low to medium severity; these are security-related issues. And as supply chain security has risen in prominence, especially in the last year or two, we have adjusted and also have teams that can do supply chain security assessments.
So with this project in particular, its SLSA compliance was evaluated, and they were given a roadmap with different things they could do to achieve higher SLSA compliance, increasing that supply chain security. Oh, I think I'm going to be early. OK. Let's see. Some more lessons learned from the CRI-O engagement. We had learned from that process I mentioned, talking to them and understanding their needs, that they had actually intended on getting audit work done before connecting with us and found it very difficult to navigate the waters. Which is actually another reason that OSTIF was founded in the first place: there was that recognition that projects are resource-strapped. They don't have the time to go out and find auditors, to talk to different security teams, to basically do all of these things. And that is why OSTIF was created: to help and advocate for these projects and do this work for them. And due to a strong due diligence and scoping process, we got an excellent team to not only audit the main risk areas, as I had mentioned, but also implement fuzzers to continuously review and improve the project. And I can actually give another case study, because this just happened. In an engagement we did about a similar thing, implementing fuzzers and working with the project team to build these tools, we had just found out, what month is it now, September, about nine months later, that those fuzzers actually found another very high severity vulnerability that the team was able to fix. And so that, I think, is just another example of not only helping the project at that point in time, which is extremely helpful and useful, but helping them long term. Now I'm gonna talk about a couple of hurdles that we have experienced in terms of auditing open source projects, and really the space in general. It comes down to about three main hurdles.
The first one, as I kind of alluded to a little earlier, is the level of coordination required. I put the picture of the cats here because sometimes it feels like just that: herding cats. There is a lot of coordination that goes into these processes. I've listed a number of things that typically happen as part of that coordination, so I'll go through them very quickly: again, engaging with the project and identifying who to talk to, and working with them to understand their needs. And this is important, because we've really seen that no two projects are the same. Depending on where a project is, catering the security review to their needs typically results in much better outcomes. Instead of having a boilerplate template, for example, or just telling them to implement one tool, we really work with them to understand their needs. And that takes time; that takes coordination; that takes meetings and resources to really put together. And then, as we mentioned a little bit, collaborating on scoping the project, getting that proposal to different security teams to review and bid on, facilitating all of the introductions, sync meetings, issue findings and remediation, and lastly coordinating the report and publishing. So as you can see, there's a lot of coordination involved, and asking open source projects to do this themselves, I think, is too large of an ask, which is why OSTIF was founded: to advocate for and help those projects. Another common hurdle is funding responsibility. Who is responsible for funding this kind of work? I have this picture of the credit cards here because the perception seems to be that it can be a case of credit card roulette, where one person essentially becomes responsible for paying for everything, which is certainly not the case. But really, it's a good question to ask: who is responsible for funding this kind of work? Is it the individual projects? Is it corporate backers, governments, foundations?
A little bit of both; a little bit of all of the above. So this is a common hurdle to facilitating audits of projects. And lastly, industry perception. I actually had a great meeting this morning in which the person I was meeting with shared that they had some horror stories about getting audit work done, and unsurprisingly, that was not the first time we have heard something like that. Sadly, a lot of folks seem to have had less than positive experiences doing this themselves, hence why we do what we do. Also, a lot of focus is on reactive security, responding to vulnerabilities after they have wreaked havoc; Log4Shell comes to mind. Another perception is that auditing projects is not scalable. A lot of focus is on automated tools and things like that, but I cannot stress enough how important this manual review, this manual process, is. So while it may not be as scalable as, let's say, an automated tool, it is an extremely important and necessary part of securing open source projects. Because, as we alluded to earlier, very, very few projects have been reviewed independently by third-party security experts. However, that perception is also partially untrue: as I had mentioned, we work with over 30 security teams and over 100 researchers, and they are typically hungry for more work. They want to do this work and improve open source projects. Another industry perception is that auditing projects is not effective. Again, some people may have had bad experiences. We've heard a lot of feedback from folks who did bug bounty programs that essentially just turned into a lot of low-level reports and ended up eating more of the projects' engineering time than they were worth. But I can say from experience, and from looking at our track record, all of which, as I mentioned, is published and can be viewed going back to our very first security engagement, that we have a track record that suggests otherwise: this work really is effective.
And lastly, auditing projects is expensive. But what I say to that is: would you rather spend $70,000, which in my case study example was about the cost to proactively review and secure that project, or $700 million, which Brian Behlendorf mentioned on Tuesday was the cost to Equifax of the Apache Struts breach? So again, with that ounce of prevention, it really goes to show, especially in a case like this. Okay, and so, in conclusion: security audits, and when I say security audits, I realize there isn't really a de facto definition for that, but security audits really have to have that independent, third-party aspect, so I would maybe call them third-party security audits or independent security audits, are a must if the goal is to find and fix vulnerabilities, secure projects, and improve tooling. Also, security audits can be done more effectively and efficiently when there is a champion, someone there to champion the process. And more funding in the audit space will make the process more efficient and effective: as we do more of these, as we get better at doing them, as we build better relationships with the different teams who can do them, the process becomes more efficient and we get better at doing what we do. So I'd like to end with a quote from Derek Zimmer, who is here in the audience and who will hopefully join me to answer some questions: OSTIF has had an incredible journey, from a list of issues on a sheet of paper to a worldwide coalition of people and organizations working together to improve the security of open source software. And that is actually quite true. I do remember back, I think it was in 2014, when we literally had a sheet of paper and were writing down these common problems that open source projects face and what we could do to help them.
And I'm really happy to say that over seven years of working to solve this problem, doing it as best we can, we've learned tons of lessons along the way, some of them the hard way, but you know, you've gotta skin your knees sometimes to grow. I'm really proud of what we've been able to accomplish and continue to accomplish, and just being here in front of all of you fine people today, and everyone online and everyone at home, I can't tell you how grateful I am for that: that our work is actually doing what we set out to do, helping open source projects. I just couldn't be more grateful for the work that we're doing. So with that, I think I'm early, let's see, five, 10, 15, 20. Oh, we have about 10 to 15 minutes for questions. So we can maybe turn this into a little bit of Q&A, and I'd love to field any questions. I believe folks might have asked some questions online as well. Anything that you would like us to go into more detail on, we're all ears. I guess I do talk kind of fast and concluded a little early, but it looks like we have our first question, from Dwayne. Hi, Amir. Hi. Reminder for both of us, we need to catch up, but: the number of years you've been doing this, and acknowledging this question goes outside the scope of the work that you're doing, once you've gotten the fix, there are all these opportunities on the other side of, please roll out this fix, right? And the Equifax one in particular that we keep citing was a known and patched vulnerability that just hadn't been updated. So I'd be interested to hear, after your experience working in this space, the opportunities you see on the other side of this equation to help people with their rollouts, their adoption of these fixes once they're in place. Excellent question. Thank you, Dwayne. What immediately comes to mind goes back to that first step, that personal touch, because we understand that at the end of the day, this isn't software, these are people.
People are what really drives all of this. So building a good relationship with the projects, with the project maintainers, being very open with our intentions of, let's work together to improve this and make fixes, has resulted in, I would say, much faster turnaround in terms of getting the project engaged to make the fix. And again, by working with folks who understand open source and understand the constraints that some projects might have, by working with them to actually make the fixes instead of just dumping a bug report, actually walking them through the finding, recreating it, and working together on the fix, has been very successful. Thankfully, we haven't really had any cases of, let's say, a high or critical severity finding where the project just said, we don't think this is a problem. I think that is also a callback to the value of transparency. Because we know, and the projects know, and the audit teams know, that this is essentially gonna be published for folks to consume, people typically put their best foot forward, because the social aspect of it comes into play. Derek, is there anything that you could think of for Dwayne's question? Okay, we have a follow-up. I'll just take a second, but to be clear, what I'm talking about here is how we help the Equifaxes of the world get better at adopting the fix in Struts once and for all. Or the CRI-O project is a good example: you found this great vulnerability, the CRI-O team fixed it, and now there's this plea that goes out to the industry, please, please update to the latest version, and there's this long tail of people who are slow to update. I'm interested to hear about opportunities the rest of us can take on to work the other side of the problem. I see, okay, thank you for clarifying. I do think, as we grow, because our website, for example, gets the most traffic when we publish reports, so I'd really like to think that these reports are being consumed on a large scale.
And doing things like uploading all of our security reviews to the OpenSSF initiative to house a collection of security reviews for folks to consume, I think, is just another channel to get it out there. But it's an excellent question. And it's not as high on our list of focus areas, I would say, because we're so hyper-focused on improving the project and making those fixes, but we help by being vocal with the results. And hopefully, again, as we grow, I'd like to engage with more media folks, folks who actually write about this stuff, who can get the word out some more and have folks look out for our audit reports. But that is a very common problem, yes: what to do after, and begging the projects to fix things upstream. But I think the best thing we do, within the scope of what we're capable of, is just sharing that report for free. Again, this is all free to consume, which increases the chances that it's gonna get in front of more people, and they can take that to the people they would need to and be like, hey, are we considering this? You know, I saw this report. And so I think the reporting, and really the promotion of the work, is what will help us there. Thank you, thank you, Dwayne. Billy, you have a question? Yeah, sort of in a similar vein. So beyond just being responsive and available and having the bandwidth to address these issues, what can open source projects do to help the security audit process along? Like, what have you seen in your reports, and what can we do as contributors to help? That's a great question. Let's see, do you have an answer? I don't, I've got a question. Yeah, yeah, it's a fantastic question.
A couple of immediate things that come to mind are that engagement piece, you know, as we build that relationship with projects, being responsive to what our needs are, because we do understand, and we have really honed this process to be as minimally disruptive to the project teams as possible, because, again, we understand that they're extremely busy. And clear documentation. Oh, thank you, Carson. Yeah, I think for the folks online too, yeah. Okay, how do people ask questions now? We can pass the mic around. Yeah, but going back to Billy's question about what projects can do to help the process. Okay, so I'm gonna repeat what I said so everybody online can hear. It's very important that we understand your documentation that currently exists, and it's very important that we can see what testing you're doing. So if you have a CI pipeline that is not transparent, that can lead to us doing redundant work trying to figure out what testing you currently do, so that we can identify gaps and work from there. Does that make sense to everybody? Okay. And I would just add, really, the engagement piece, because we've noticed that the more engaged a project is, the more they are involved in the process, that typically leads to better outcomes. But I would definitely agree that documentation is a big one, and so are some of the things like the pre-audit orientation, providing the teams with what they need; sometimes they'll need, for example, a test environment set up, or things like that. So just that engagement between the two. But again, we understand projects are very busy, and so we try to do it in a way that is minimally disruptive but also has maximum value. Thank you, great question, Billy. Do we have any other questions, or something maybe you'd like to hear more about, more details about what we talked about earlier? Okay, we have one more. Oh, we do?
Yeah, actually we have time for one more, perfect. Yeah, thanks for the presentation. I was going to ask, in terms of sustainability or the long term, how do you make sure that the results of the audit last over time? Do you recommend doing another audit in a number of years, or what's the process there? Yeah, very good question. Generally, yes, it is good practice: if, for example, a project implements a bunch of new code or new features, or things that potentially change a lot of the basic nature of the project, that is an excellent time to revisit the project and do another audit. From my past life as an IT auditor, the guiding philosophy was risk-based audit schedules. So if a project, for example, is rated very, very high risk, you would want to audit that maybe every year or so, whereas if a project is, let's say, a very simple tool with very simple functionality, not adding much new code or many new features, maybe that can be revisited every three years or every five years. So it is something that ideally is an ongoing process. We have had projects actually come back to us and say, we've done a lot of new features, can you come back and review them? Is there anything else that you could think of? No, but I would add that the way we approach auditing is for longevity. So when we scope out a project, we typically don't want to do the fire-and-forget: review your code one time, get a snapshot, and declare it secure from that point forward. That's why we focus so hard on tooling, on what testing is being done, on improving the CI pipeline, on writing tests for them if they don't have the capacity to do it, because that gives you the long-term impact of increasing their security on that long tail. But yes, if it's a very critical project, we would recommend revisiting whenever there's significant new code, a significant new feature, or something is rewritten.
Yeah, and I would also add continuous monitoring. That's where fuzzing helps a lot, the tooling that Derek mentioned, because that'll help you indefinitely, essentially, as those tools become a part of the project. So I think we have time for one more. Do you want to? Yeah, OK. We have a question from Steven. So real quick, can you talk about how foundations, organizations, and open source projects can get involved with audits? Absolutely, yeah. I can give you two examples of things we're currently doing that seem to work pretty well. As part of our strategic partnership with the Linux Foundation, we work a lot with the CNCF, which has very strong policy controls with their graduation criteria. For a project to graduate, or to get graduation status, it needs to undergo an independent security audit. So, essentially, the way that works is, anytime a new CNCF project wants to be audited, they just contact us and we do it, on an ongoing, project-by-project basis. Another thing we do is with some of the larger companies, where they'll give us essentially a yearly grant with the goal to go out and, let's say, do five to seven projects, and using that grant we'll basically go through them on a best-effort basis. So I'd say those two are the best ways. And what we're exploring more is, I think, the yearly stipend model, which seems to work, where we'll get a list of projects and go through them to the best of our ability, put those funds to the best use possible, and of course publish the results, which serves as a very strong, almost marketing, tool to show that, hey, CNCF is putting a lot of money into security audits, because we have all these audit reports to show for it. And the way that can be done is to simply contact us. It's just Derek and myself as the program managers, so please, we could just sit down, talk, and work something out. We definitely have capacity for more projects and would love to bring on more partners.
So, wonderful. And I think with that, I give you a whole minute back. I really want to thank everybody for coming today. Any questions, any follow-up, please let us know. I can be reached at my first name, Amir, at ostif.org, and Derek is Derek at ostif.org. Or come find us; unfortunately, we are leaving shortly. But yeah, please, if you'd like to continue the conversation, we'd love to. So again, thank you so much.