Oh, yeah. Oh, there we go. Hot mic. Hot mic. Is that our time? Hot. How was lunch? Good? Good. Mmm, that's a really good lobster roll place, right down the street. Been eating there three times this week. Is Texas known for their lobster rolls, CRob? They are not, but the owner's from Boston, so it's okay. But where are the lobsters from? Didn't ask. That was not in my, uh, threat model. Mmm, big mistake. Well, hello everybody. I guess we are kicking off the afternoon track. Welcome back. Today we're going to talk about securing open source software. End to end. Together. That's right. It's going to be pretty awesome. Let's see if I hit the right button. So, very briefly, we're going to define the scope, we're going to talk about threat vectors in open source, and then at the end we have a modest proposal. And hopefully we'll have some time for questions and answers. Yes. Hello. Good afternoon. My name is Anne Bertucio. I'm one of the leads in Google's open source programs office. One of the things we do, and how our OSPO is organized, is essentially that all Googlers are able to contribute to open source, and my job is to help them learn those skills and do that successfully, effectively, and in the spirit of open source. And that includes security and things like vulnerability disclosures, one of my favorite topics. I'm CRob. I do stuff. My day job is at Intel, but I spend a lot of time working with the OpenSSF and FIRST, the Forum of Incident Response and Security Teams. I also love me some vulnerability management. So this is going to be a pretty exciting conversation, I hope. Hopefully we'll all learn a little, laugh a little. Have a little lobster. Have a little lobster, not fall asleep. So, Anne and I both have the privilege of working in an organization called the Open Source Security Foundation, the OpenSSF. Anybody here heard of that? Everybody here a participant in a working group? Oh, fewer hands.
I smell new recruits. Close the door. And this is an industry coalition of industry vendors, individual participants, maintainers, security researchers; it's a wide spectrum of people who come together and participate, all with the goal of trying to help improve the overall security posture of open source. We'll talk a little bit more about some of the actions the foundation takes, and how we might be able to recruit some of you, towards the end. And we should mention this is open source, so you do not have to be employed by any of these people. That's right. You can be independent. You can be between jobs. We're here for you, the people. Exactly. So we're going to start off, then, and help define the scope of our problem. Who here has seen this logo? I was mandated by the foundation to put this on a slide. There were no problems before this. This is the dawn of vulnerability management. But for those of you that may not have raised your hand, this is our friend Heartbleed. It was a problem with an open source project called OpenSSL, which was very popular back in the day and still is. At the time of disclosure, when this vulnerability was found, it was estimated that about 17% of the internet was vulnerable to the flaw. When it was announced, there were just under 400,000 public web servers that had the vulnerability. Three years later, that dropped down to 144,000. And in 2019, just a few years ago, there were still over 90,000 devices that were susceptible to it. And then the website that tracked that went offline. So I don't know what it's like today. Maybe they got hacked through OpenSSL. I don't know. We're not going to be talking specifically about the coordination of this, but Heartbleed highlighted a problem that is very common within many open source projects, critical or not.
At the time the vulnerability was published, OpenSSL had two full-time developers. And they were managing about 500,000 lines of code: development, maintenance, testing, patching. That's a lot of work for two people, for a library that was essentially the foundation of the internet. We're going to throw some statistics at you to remind you why this is such an omnipresent problem. Depending on what day of the week and which website or which company's article you read, open source software is a major component of 80 to 90% of all software that exists. So regardless of the specific number, it's an incredibly high volume. One report found that about 84% of code bases had at least one vulnerability in them, with those averaging about 158 vulnerabilities per code base. Depending on the report you're looking at, the average application can have 118 libraries, with roughly a third of them being in active development. So fully two-thirds of the dependencies of a project might not even be actively developed. And you're looking at an average library age of a little over two and a half years. Looking at a 10-year period of vulnerabilities within open source, during that timeframe, as measured by CVEs, there was a four-fold increase. And that isn't necessarily because the code is worse today than it was back in days of yore. It's just that now we have a lot more eyes, a lot more people looking. They're interested; there are a lot more participants. So we're discovering a lot more vulnerabilities. But within that 10-year span, a 4x increase of reported flaws that consumers ultimately had to deal with through patches. Another study cites that open source vulnerabilities are often discovered through indirect dependencies. And some sobering facts here. A typical vulnerability can go undetected for about 218 weeks.
It typically takes four weeks to get resolved once a project is alerted to it. So flaws are around for a very long time, potentially, and can take a considerable amount of effort and time once they're reported. And I would say on the indirect dependencies, we're not just talking one hop, one package away. People are familiar with Log4Shell, the incident with Log4j? I've heard of that. Yeah, yeah. There's a team called Open Source Insights that's tracking a lot of this dependency data. You can access their website at deps.dev. And what they found when Log4Shell came out was that on average it was eight layers deep in somebody's dependency tree. That's where it was. So when we think about vulnerability management, it's not just one hop away. It has branched so far out, I hope you can find it. Well, and think about that in regards to SBOMs. I know there's a debate about how far down SBOMs do their review. And if Log4j was an average of eight layers deep? That's a big SBOM. Good luck. So my context here is to share with you: vulnerabilities in open source can be found up and down the stack. If you look at, for example, the SolarWinds attack, that was not necessarily a problem with open source, but an open source library was compromised, and that was way in the back end of their systems. Thinking further towards the middleware: our friend Log4j, which I just heard about just now. That's considered middleware, so in the middle of your app stack. And then you also may have heard of a little organization called Equifax. Really nice folks; had some opportunities keeping up with timely patches. Well, that was on the front end of their systems. So open source is everywhere, up and down the stack. There is not any one specific area you can focus on to try to whack that mole and keep it gone. So let's talk about how truly massive this massiveness is. Oh, should we do like some crowd work? Let's do crowd work. I like that. All right.
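To make that "eight layers deep" point concrete, here is a minimal sketch of how you might measure how far down a vulnerable package sits in a dependency tree: a breadth-first search over the graph. The graph and package names below are toy data for illustration, not real metadata from deps.dev:

```python
from collections import deque

def dependency_depth(graph, root, target):
    """Breadth-first search over a dependency graph: return how many
    hops below `root` the `target` package first appears, or None."""
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        pkg, depth = queue.popleft()
        if pkg == target:
            return depth
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append((dep, depth + 1))
    return None

# Toy graph: the vulnerable library sits several hops down, the way
# log4j often sat many layers deep in real dependency trees.
toy_graph = {
    "my-app":         ["web-framework", "json-lib"],
    "web-framework":  ["http-client"],
    "http-client":    ["logging-facade"],
    "logging-facade": ["log4j-core"],
}

depth = dependency_depth(toy_graph, "my-app", "log4j-core")  # → 4
```

Real trees are far wider and deeper than this toy one, which is exactly why a one-hop view of your dependencies misses most of your exposure.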
So when people think of proprietary software, can we think of some factors that make it a target for adversaries? Yes, sir, in the back there. Critical infrastructure. Certainly. Yes, the system is critical to my company; certainly a reason to attack it. We got a gentleman here in the second row. Yeah. Yeah. And sorry, just repeating the questions, or the answers, for our folks online. You know, maybe somebody's reverse engineering it because it was closed source. So they got curious, pulled it apart, found something interesting. Let's flip to open source. You know, I think what makes this a challenging, massive problem is some of the things that we love about open source. It's public-facing source code. So unlike your fine example, where somebody had to really go and reverse engineer it, the source is available to everyone. Super easy. Super easy. Yeah. Community-driven development, including the ability to use pseudonyms, to be anonymous. We like these things about open source. We like working together. We like that the ways people choose to handle their identities are respected in open source. A tragedy of the commons. You know, I was talking this morning about how, when it comes to security, open source is critical public infrastructure. When one piece, like the OpenSSL CRob was talking about, that was really, really critical to folks, goes unmaintained, it can have massive impact. Absolutely. Yeah. Yeah. A lack of consistency in security standards, reviews, and tooling: developers are out there on their own, using what they have available to them and what they're familiar with. That can mean projects that range widely in what their security hardening looks like. That's also the beauty of working in a community: you get together and you agree upon your methodologies and your tools, and that isn't necessarily exactly the same way your neighbors do it, which is good for you, and a little challenging for consumers. Certainly.
Certainly. And still, if we jump down to the end: a high-value target. Mm-hmm. There are many consumers; take a walk across our show floor here today. Many companies are now relying on open source. It is critical. So the other thing that makes this so difficult when we talk about vulnerability management is we are not just talking about one particular threat vector. This bottom diagram is from slsa.dev, the SLSA project. It is a beautified interpretation of a more academic report on supply chain security, but essentially they're saying the same thing. You can look at these red caution triangles, the danger-danger here. There are eight up there. And that's eight different attack points that an attacker can take to your system: from social engineering on your developer, identity takeover; moving into our source code, injecting malicious source; moving into our build system, they can compromise the whole darn build system; they can inject something in a dependency; they can compromise the package through something like typosquatting. And then it finally gets to the consumer. And each of those different triangles has to be mitigated a different way. These are different attacks, after all. So that's what makes this all a very challenging, massive problem. Our comedy show will be going on the road next summer. Join us for the tour. Sure. Tip your waitresses. Yeah. Try the veal. So yes, just to keep driving it home: open source projects on average have 180 package dependencies. The top 50 open source projects with the most downstream dependents had an average of 3.6 million projects dependent on them. So if you're a visual person, again, you can go to, I love this, I'll just keep plugging it, the deps.dev visualization. If you put in a package name, there's a dependents number and a dependencies number.
And you can see like, oh, maybe I have my dependencies, but then my dependencies just go out and out and out. We love package reuse, and it makes our graphs large. And the Spider-Man meme helps illustrate one of the awesome things about open source: you develop your project and you can pull in all these other great ideas. As a consumer or a defender, that gets very challenging. And you also have the situation where, once you pull down one version of a package, you pull in a whole bunch of extra stuff to support it. Yay. Yay, users. The creation of potentially exploitable vulnerabilities increasingly outpaces the rate at which we can search for and remediate them. And this is only getting worse with time. So as CRob was talking about with that chart earlier on known CVEs: that doesn't mean, like he was saying, that our code is getting worse or that things are more insecure. It's that these are the known vulnerabilities. The unknowns continue to pile on and on and on and just increase over time. Just to give you some more data, to drive it home: the number of vulnerabilities in the wild. Because they weren't sad enough. We're going to make them happy at the end. You're going to cheer. Oh, excellent. Everyone's getting, well, not everybody eats lobster. Anyways, carrying on. It's afternoon. We're feeling punchy. Yes. So, you know, every year more lines of open source are written than ever before, because we're all becoming participants in this ecosystem. We're pulling things down from Docker Hub and we're pushing them back up to do a favor for our friends. Well, that's lovely, but we are adding to this collection of how much open source is out there in the world. Vulnerabilities seem to scale with lines of code, and other metrics aside from lines of code show similar patterns. And we're talking about that continuous increase.
And the number of reported vulnerabilities in open source code bases is growing every year. So, we're finding them. That means we have to remediate them and respond to them, which is a whole other tax on maintainers. And to Anne's comment about, you know, your friend puts some patches into your repository: that's awesome. But not everybody has the same capabilities, the same tooling to help them, the same training, the same support ecosystem. They might not have a second developer to review the code with them. They might not have specific expertise in doing security auditing when reviewing code. So not all patches are equal. But I think we're here to offer you some solutions and some suggestions on how we might be able to stem some of this while we're working together. First and foremost, and I think many of us in the room feel very strongly about this first thing: if you consume something, it's really important that you understand what's going on with that project. You need to monitor it. And as new updates are patched and pushed, you need to upgrade along with it to keep pace, to remove any known or unknown flaws. This is different from vendor software. Generally with a vendor, when you have a commercial entity supplying you software, they have mechanisms to reach out to their customer base. With the very free-flow nature of open source, the fact that we all kind of depend on and consume each other, it's sometimes going to be very challenging. Different communities communicate differently on, hey, I patched the thing. Some of them may say I patched a thing. Some of them might never say they patched anything. Some might have a very nice security advisory. Others might not tell you there are any security implications at all. But as responsible consumers, you need to make sure you're monitoring that community.
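That "monitor and react to advisories" loop can be sketched in a few lines: compare what you actually have installed against the advisories a community publishes, and flag only the ones that affect your versions. Everything here, the package names, the advisory IDs, and the simple dotted-version comparison, is illustrative toy data, not a real advisory feed:

```python
def parse_version(v):
    """Naive dotted-version parser; real tools handle far richer schemes."""
    return tuple(int(part) for part in v.split("."))

def affected_advisories(installed, advisories):
    """Return IDs of advisories that apply to versions actually installed.
    `installed` maps package name -> version string; each advisory names
    the affected package and the first fixed version."""
    hits = []
    for adv in advisories:
        ver = installed.get(adv["package"])
        if ver is not None and parse_version(ver) < parse_version(adv["fixed_in"]):
            hits.append(adv["id"])
    return hits

installed = {"examplelib": "1.4.2", "otherlib": "2.0.0"}
advisories = [
    {"id": "ADV-1", "package": "examplelib", "fixed_in": "1.4.3"},
    {"id": "ADV-2", "package": "otherlib", "fixed_in": "1.9.0"},   # already past the fix
    {"id": "ADV-3", "package": "unusedlib", "fixed_in": "3.0.0"},  # not in our tree
]

alerts = affected_advisories(installed, advisories)  # → ["ADV-1"]
```

The point is the filtering: an advisory only matters to you if the affected package, at an affected version, is actually in your tree, which is exactly the analysis consumers owe themselves for every upstream patch announcement.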
And however they're communicating updates and patches, you need to be able to get that alert and analyze it in regards to your utilization of that software. You know, potentially there's a patch upstream that fixes a thing, and your implementation of that software might not include that area of the code base, so you might not necessarily need to react quickly. And if you get nothing else: if you are not an open source maintainer, you need to understand that open source maintainers have very little understanding of who uses their projects. Unlike a commercial vendor, which has a customer list and a sales team and a very well-managed understanding of who uses them, upstream maintainers don't have that insight. Generally, it's just not as interesting to them. They may have partnerships, where I understand that my project is baked into the Kubernetes ecosystem. But in general, if you're looking at the millions of open source projects, those maintainers don't know who uses them. And they definitely don't know how you've taken their software, put it in to solve your business problem, how you've configured it. And so the whole Log4j debacle, from my standpoint, is that you had downstream consumers yelling upstream at the maintainers, asking them to fill out their risk assessments, and that just doesn't jibe with how open source software works. That maintainer would have no idea how any of those layers and layers of downstream consumers leverage that, or that it even got put in there at all. Anything more to add there? No, you hit it out of the park. All right. Yes. So, you know, this idea of: what can we do, though, beyond responsible consumption? So you're a user, you're keeping track of your dependencies, you're monitoring vulnerabilities. What else is a way that you can kind of help this dynamic? I have an interesting idea that's never been floated before. Really?
I think maybe, if you use software from a project, you could contribute back to that project, to kind of help them know and understand that you use and love their software and you want to help make it better. And our approach, in addition to contributing back to software you use, is to find areas where you can coordinate and work with others to try to help make software better. This is a big laundry list of stuff that we put together as part of the OpenSSF, just kind of brainstorming on things we can do to help improve the overall security posture of any project. But, you know, if you're a traditional infosec person doing things like threat modeling: most upstream maintainers might not understand those concepts. And that's something where, by working with a friend who understands how to do that, that's something you could donate to the project. Let me help walk you through a threat model to show you all the areas where your application might be broken. We like to try to be data-driven. We have a project within the foundation, the Securing Critical Projects group, and they are working with organizations like Harvard and others to try to analyze use patterns of software and identify the most used, most critical pieces of software. And then we can focus our efforts on those first, knowing that we can't help all 60, 80, 100 million open source projects. But we know we can start with projects that have broad exposure, broad usage, and we're able to affect those. Ideally, we'll be able to work our way down and help everybody else. One of the things we're doing as part of Alpha-Omega is working with upstream projects. I believe Amir mentioned it the other day: working with one of the projects, they were able to eliminate not just individual vulnerabilities, but hopefully whole classes of issues.
You know, teaching those developers how to avoid things like SQL injection, or using better memory management, or avoiding buffer overflows: you can eliminate whole classes of vulnerabilities at once. So instead of eliminating them one by one, you can eliminate things whole cloth. We're working on continued tool development. So there's work on things like OSS-Fuzz and other tools, where we're trying to find ways to automate security practices for developers, to make it easier for them, so they can either integrate that into their development environments and their IDEs or plug it into their CI/CD pipelines, however it might be, whether it's a fuzzer or static analysis or whatever. Including a GitHub app called Allstar. So if you take anything away from today: if you have a GitHub repo, lock that down. If you remember our chart with all the little red attack vectors there, Allstar is an app in the OpenSSF. It's available on GitHub, and hopefully one day there will be a non-GitHub version of it as well. Essentially it is kind of auto-correcting your GitHub policies to enforce best security practices. So with that one click you can have something working through, taking a look, keeping you in line with where you want to go. So those are the kinds of tools that OpenSSF has been working on. Absolutely. And, you know, GitHub is one of the biggest repositories of software on the internet, but it's not the only one. We also try to work with GitLab and other repositories. And one of our chief goals, as we build a tool, is to make sure that it's usable in as many places as possible. It's not always available day one, but that generally is always in the backlog. So if you say, hey, I'm a GitLab user, can I get Allstar? Well, let's get an issue open for that and make sure we get some people working on that conversion so it works in that environment for you.
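SQL injection, the first vulnerability class mentioned above, is a good example of eliminating a whole class at once: switch every query from string concatenation to parameterized queries and the entire class disappears. A minimal sketch using Python's built-in sqlite3 module, with a toy in-memory table:

```python
import sqlite3

# Toy database for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # String concatenation: attacker-controlled input becomes SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
leaked = lookup_unsafe(payload)   # injection dumps every row → [('s3cret',)]
blocked = lookup_safe(payload)    # treated as a literal name → []
```

Teaching a project to use the second pattern everywhere is a one-time effort that removes the whole class, which is exactly the kind of leverage being described.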
We're also working on better vulnerability disclosure practices, which is the whole theme of this little track. And I think a fine red-haired young lady had a little talk about it this morning. But that's one thing we're committed to: trying to find ways to teach maintainers, teach projects and communities, to be able to intake and evaluate a vulnerability and have a process by which they can share that with their ecosystem and their consumers. So we have a lot of different things we'll talk about in a minute. And then the foundation itself is trying to focus and coordinate funding so that we can have high impact, so that we can, again, eliminate whole classes of issues or whole patterns of problems at once. Let's talk a little bit about the seven working groups. It will soon be eight, but for now it's seven. You want to start? Yes, absolutely. I think most of the room knew of the OpenSSF, but maybe three or four hands went up for being active, so see if any of these working groups resonate with you, with things you'd like to work on or think would be most beneficial for your work. So, Best Practices for Open Source Developers. Yay! These are things where that tooling really lives, tools like Allstar, Scorecard, thinking, from the developer experience, about what can be most useful and how we get that education out there. Securing Critical Projects, like CRob was talking about: there are millions upon millions of packages, but is there a set of 150 that we should really be thinking about first? Because when we talk about those dependencies and that blast radius of open source, are those the ones where we can make the most change if we focus there? Supply Chain Integrity. CRob, do you want to take that one? Yeah, that's a group of people focused on how to promote things like the SLSA framework.
I believe that is a sub-project that lives there, but they're trying not only to help harden infrastructure that we're aware of, but also, again, to teach developers how to configure these tools well, how to get the most bang for their buck, and ultimately to think through the supply chain and the complex web of dependencies that exists in the software we use. We have a new working group that's focused on securing software repositories. So think about things like Maven. npm? The npm environment. So we're looking at creating very specific practices, tools, and processes to help secure these very critical infrastructures that are one of the big points where folks get software. We have Identifying Security Threats in Open Source Projects. So this would be thinking about things like threat modeling. I believe that's where Scorecard is used a lot: there's a list of good practices, and every week they're scanning about a million different projects, and they're able to showcase, hey, these projects deviate in some ways from what we think is good practice; consumers might want to investigate a little more. Security Tooling. I must have had some of your Texas lobster for lunch, because I mixed that up: best practices is mostly content and development. Security Tooling, that's where things like Scorecard and Allstar live. Vulnerability Disclosures is my favorite working group. I think CRob has told me it's his favorite working group too, twice. Is our next slide the fun one? Our next slide is our kind of call to action. So we have, and we will be adding, based off of OpenSSF Day. We had a whole day of the conference focused on OpenSSF things, and based off of that day, I believe we're going to be adding an eighth working group focused on AI and machine learning.
So that's another area where we've identified a potential problem set, and we're going to try to get some industry experts together to start thinking about it and hopefully avoid some problems down the road. But more specifically, the foundation got together. Question? Well, fun fact: if you wait just a couple more slides, I have links at the end. But openssf.org is the primary landing page, and that's where you can find links to the calendar, links to the Slack, links to all the working group repos. But we'll get to that very soon, sir. We staged that. So, has anyone ever heard of the White House Executive Order on cybersecurity that recently came out? Oh, fun. Look at this well-educated group. Well, the foundation recognized that this was a very important call to arms. So many of us in the foundation got together and developed a 10-point plan, where we have listed a prescriptive set of ideas and tasks that we want to accomplish to help open source generically, but specifically for the benefit of the open source ecosystem and open source consumers, by helping improve the security posture in areas like security education. There's a special interest group that will start meeting the first week of August, focusing on how we take those developer best practices and provide that education to people going for a college degree, people going through trade school, people going through boot camps, high schools, professional certifications, or people changing careers. So we're looking at chopping up our content, adding more content, and finding ways to get that out into the hands of learners, so we can spread it and hopefully get more people typing on keyboards a little better in the future. Our next group would be the risk assessment group.
This is leveraging the work of things like Scorecard and Allstar; they'll be providing a dashboard to allow consumers of open source to understand the risk properties of these different projects, and help describe the practices that they use, or possibly don't use. You want to talk about digital signatures? Sure. Yeah, signing. This goes back to, again, that graph with the red danger-danger: being able to verify that a package is what it says it is, and digital signatures are a really big part of that. So there's a project, Sigstore, that's looking into one way to do this, but really it's a big focus area, thinking about how we can apply cryptographic signing to these packages, for provenance and integrity. Do you want to do memory safety? Memory safety, yes. So, have people seen the different initiatives, like such-and-such got rewritten in Rust? People seen a little bit of that here and there? Yeah. I see somebody in the back who's pumped for Rust. Woo! Yeah. So that's one initiative you'll see in the industry, and something that the OpenSSF cares a lot about as well. Memory safety is about addressing the class of vulnerabilities that come when memory overflows. Certain languages are more prone to this than others, and a lot of the things that we've been dependent on for a long time are written in languages like C. So people are going back and rewriting things in languages like Rust, which have better memory safety and memory handling. Incident response. Woo! That's us. Go for it, CRob. Yeah. So, the first week of July we'll start a SIG. The idea put forward is the creation of an open source security and incident response team. So we're going to be working with the community to help refine that idea and find something that's actionable. Maybe there are people that volunteer for office hours. Maybe a certain set of people are hired.
More likely, I think, we're going to curate a lot of the good practice and very aggressively go out and help evangelize: here's how CVD works, here are some tools you need. There are some amazing tools we're thinking about adopting, potentially. The VINCE vulnerability disclosure tool is something that CERT/CC just recently open sourced. And that is a mechanism that we might be able to encourage maintainers to use, to help broadcast and coordinate information around flaws that are reported to them. Lots of good activity there, and we're very excited about that. What else do we have? We have better scanning. So, who here has ever read the results of a fuzzer? Who here liked reading the results of a fuzzer? They're God-awful. It's thousands and thousands of lines of gibberish, and most of it tends to be false positives. So this is a group of folks dedicated to focusing first on open source scanning tools, and trying to identify ways we can make those open source fuzzers and scanners more reliable, so the results are more actionable once that scan's been run. And then hopefully we can encourage developers to start using them, because scanner use today is challenging for a developer. It's very archaic and hard to read, and we want to fix that. Yeah, and just because we only had about four hands go up for people who have read scanning and fuzzing results: for the rest of you, if you want to get into fuzzing, there are ways to apply it to your project. Fuzzing is great. Are people familiar with it? I see a couple head nods. Okay, essentially, the TL;DR is: it's like throwing garbage at your project to see what happens. And sometimes what happens is you find there are issues. Yeah, thank you. There are issues; it surfaces things like memory safety problems, all this stuff, because bananas things will happen. So anyways, there's a handful of projects within the OpenSSF to help get fuzzing to folks. OSS-Fuzz is one of them.
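The "throwing garbage at your project" TL;DR can be sketched in a few lines. Below, `parse_record` is a deliberately fragile stand-in for real parsing code (a hypothetical example, not any real project), and `fuzz` feeds it random byte strings and records the inputs that crash it, which is the essence of what tools like OSS-Fuzz do at much larger scale and with much smarter input generation:

```python
import random

def parse_record(data: bytes):
    """Tiny parser under test: expects ASCII `key=value` records.
    Deliberately fragile, as a stand-in for real parsing code."""
    text = data.decode("ascii")   # raises on non-ASCII input
    key, value = text.split("=")  # raises without exactly one '='
    return key, value

def fuzz(target, runs=2000, seed=1234):
    """Throw random byte strings at `target`; collect crashing inputs."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    crashes = []
    for _ in range(runs):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 12)))
        try:
            target(blob)
        except Exception:
            crashes.append(blob)
    return crashes

crashes = fuzz(parse_record)
```

Almost every random blob crashes this parser, and the raw pile of crashing inputs is exactly the "thousands of lines of gibberish" problem: the hard part, and the part the working group wants to improve, is deduplicating and triaging those crashes into something actionable.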
There's some infrastructure called ClusterFuzz, if that interests you. And if you're not ready for the full ClusterFuzz, there's ClusterFuzzLite. So, just things to poke around with and apply to your projects. Right? Yeah. Let's move on to the next one. Question? For the virtual audience, the comment was that one of the ways Heartbleed was found was through a fuzzer. So they have their uses. They can be a little challenging; we want to make that experience better for the developer. Code audits. If you weren't aware, one of the most effective information security practices around application security is an expert code audit: reading the code line by line, working with the developer, going through it. It's very tedious, very time-consuming, and expensive, but you'll find problems that automated scanners will overlook. So one of the things we're looking at doing, through the Alpha-Omega project, is taking a handful of these critical projects and sitting down and giving them an expert code review, to hopefully correct some egregious problems before they become CVEs. We're looking at creating a data-sharing effort. This is something that's still being developed. If you're interested in data around open source, the foundation is looking at ways of collecting anonymized information about open source usage and providing it to researchers, so the broader community can learn more about how open source is used. Then we have, oh, we're going to skip that one. And we're going to look at improving software security. But the SBOMs! Oh, yes, we're looking at SBOMs everywhere. This has been a long effort. I've been talking with certain people about SBOMs for five years now. So we're looking at trying to find ways to bring SBOMs to open source maintainers so that it's easy.
We don't want to put more burdensome tasks on the maintainers and projects, but the downstream community gets a lot of benefit from having these SBOMs. So we want to try to find ways, and simple tools, that developers can use to quickly and hopefully automatically generate these SBOMs, so that downstream can take that and make decisions based off of what's inside the software. And then, again, working on improving supply chain security. So if you're curious about the plan, there is an awesome link where you can go read the 10-point plan. If you're interested in participating in any of those efforts, whether it's the plan itself or any of the working groups, you can talk to Anne or myself from the OpenSSF. We'll be glad to help get you routed to an exciting community of vibrant, active people. Oh, I don't have it on this one. I have it on the other one. Also, pardon me one second. So, fun fact, Anne and I are giving a panel with the greatest hair in cybersecurity in just about 20 minutes. Upstairs on the fourth floor, we have a conversation about preparing for zero days with Art Manion, Anne, and me. But here is a list of some pretty awesome links. And I think, if I go back to the slide, where is it? There it is. So here are all the OpenSSF links. If you want to join a working group, you can go to the OpenSSF on GitHub and explore there. You can go to openssf.org, an HTML page, if you prefer that. There's a metric ton of mailing lists, including the announce list; this is where announcements are made. We have public calendars. And we have a Slack at slack.openssf.org, where all the working groups work transparently in the open. We have a YouTube channel where you can watch every working group meeting ever recorded. So if you don't get enough meetings during your day, you can go home and watch open source meetings on YouTube at night. Very exciting. But if you have a question specifically about anything in the foundation: operations.openssf.org.
I just want to make a quick plug that we've had meetings in the past and we will have them in the future. So, recognizing that March 21st has passed, head to the calendar. Sorry. Does the room have questions we can answer before we get kicked out? The question was: there appears to be some overlap, or some synergy, between items in the mobilization plan and official working groups. So the answer is yes. Next question. But in all seriousness, yes, and that is something we're working on. Ideally, each of the work streams from the plan will either be homed within one of the existing working groups, or potentially a new working group will be spun up that focuses only on that effort. So, for example, stream one, which was education, will be owned by the development best practices working group, and stream five, which was the open source CERT, will be owned by the vulnerability disclosures group. But ideally, each of these work streams will get funneled into one of the working groups. If you want to participate in both, you may. I do. But generally the working groups tend more towards writing white papers or procedures or software, whereas these work streams are going to be focused on: we have this plan, we need to refine the budget, and we're going to start taking action as soon as we can. So the question was: for each of the work streams, is there a definitive, documented plan, or is it more open-ended, like "we're going to fix the supply chains"? The answer is yes. Each of those work streams has, right now, a proposed plan. We have proposed resources that are needed to accomplish that plan, and for people like myself who are adopting these streams, our job is going to be to review the plan, check whether it's still accurate and whether the community agrees with it, then strive to tighten up the estimates and actually formally put a budget request together to say: this is what it's going to take to solve X or Y problem. Any other questions? Any questions online? Dylan? Yeah, one more question.
I think some of it's a little bit of a known problem. I can give you an example: until very recently, I believe Kubernetes had a PDF reader in it, which, you know, super useful. So here's, oh, and I apologize to the folks online. You apologize to Dylan online, who didn't have a question? Who didn't repeat the question? The question was: a project does not necessarily use all the code in what it's ingesting through a package, because of its use case, so what can we do to look at that, identify it, and, you know, chop it off, because we're not using it? I think that's a great question. To your point, otherwise we're dragging along all those vulnerabilities. I'd love to see somebody look more into making that actionable, rather than just saying we know there's a PDF reader in Kubernetes. I'm not aware of any specific efforts focused on that right now, but that definitely is something we can recommend to, like, Alpha-Omega, which is going to be engaging the 150 most critical projects and then like 10,000 others. So we definitely can recommend that as a practice: as you're going through any kind of review, a good practice for projects is to periodically review dependencies and make sure that things are still relevant. But if you are interested and passionate about that, we would welcome it. Any other questions or comments? Are we all excited? All right. Well, thank you, Anne. Awesome job. Well done. Thank you all. If you have any questions, we're around, and we're going to have an exciting panel discussion on vulnerabilities in just a few minutes. Thanks all.
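That last question, finding declared-but-unused dependencies, can be approximated statically. Here is a rough sketch under big assumptions: it only sees literal import statements (not dynamic imports), and it assumes a dependency's import name matches its declared name, which is often false in practice. The file contents and dependency names in the demo are made up:

```python
import ast
import tempfile
from pathlib import Path

def imported_modules(source_dir: str) -> set:
    """Collect top-level module names imported anywhere under source_dir.
    Static only: dynamic imports and extras are invisible to this scan."""
    mods = set()
    for path in Path(source_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                mods.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                mods.add(node.module.split(".")[0])
    return mods

def unused_dependencies(declared: set, source_dir: str) -> set:
    """Declared-but-never-imported dependencies: candidates to drop."""
    return declared - imported_modules(source_dir)

# Tiny demo on a throwaway project directory (hypothetical dependency names):
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "app.py").write_text("import json\nfrom os import path\n")
    leftovers = unused_dependencies({"json", "os", "leftpad"}, tmp)
    print(leftovers)  # prints {'leftpad'}
```

A real tool would have to map distribution names to import names and trace transitive dependencies, but even this crude pass can flag candidates for the kind of periodic dependency review mentioned above.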