Well, let's get started then. You can all hear me, right? Okay, well, thank you for coming to the session today called Summing Up the Summit. This is really a chance to let you all know about this event that we had a month ago in D.C., but really the sequence of events that led us there and led to the development of this thing called the OpenSSF Mobilization Plan. I am joined on this panel by a number of esteemed guests. There's that Max Headroom glitching again. Why don't I start with Amelie Cron. Amelie, you're a special fellow with the Atlantic Council, but could you go deeper into your background and tell us more about you, what you're doing with the AC, and what you've done before?

Yeah, so let me see. Twenty-seven years in the technology space, but I actually got into it through open source back before college, so I've been tied to this community since the early 90s at this point, and I've been an advocate in and out of the private and public sector. I spent a decade as a federal employee, with a stint in between at Disney, and a couple of companies since then. I've been everywhere from starting out as an engineering grunt to architecture, primarily on the security side of things, and right now I'm, I guess you'd call it, the technology relationship manager for a large gaming company. But I'm doing a lot of writing on the side, because this is an area where not only do I have a vested personal interest, but professionally I think it's a good area for folks with the skills and interest, from various backgrounds, to participate in.

Thanks, thanks. And you and I both testified before a House committee panel right before this event. It was that crazy week of May 11th, right? The House Science Committee. They were asking about security in open source code. That was a lot of fun.

Yeah, it was.
Yeah, it's surprising to get testimony in within a five-day window, so that was always fun too.

And I also have next to me Trey Herr. Trey works at the Atlantic Council as well; he's a coworker of Amelie's. Trey, could you introduce yourself and let folks know both what you're doing at the AC and maybe a little of the context for the work that you're doing?

Yeah, sure. So, hi everybody, I'm Trey. I run the Cyber Statecraft Initiative at the Atlantic Council. The AC is a foreign policy and national security think tank based in Washington, D.C., and our team works on technology policy, geopolitics, and infosec, sort of the crossover of the three. Amelie is one of our fellows who's doing this work with us on open source. We have been working on software supply chain issues since 2019, trying to profile this as an important supply chain policy issue, trying to put some data together on attacks that we've seen and on vulnerability disclosures that could have led to attacks that were interesting for our community. And in conjunction with Brian and some of the other folks at the OpenSSF, this year and next we'll be doing some work trying to bring more policy attention, and hopefully help, to the work the mobilization streams are doing, as well as to some ongoing security projects across open source. And for anybody here who's not as directly involved with the OpenSSF: we're really hoping to broaden the community of maintainers and developers and repo owners who are engaged in the policy process. So for folks who maybe haven't been in policy before, are interested in it, and want to see how to get involved, please feel free to shoot us a note or find me after the talk, because there'll be a lot to do in the next, I don't know, 10 years, but at least the next year and a half.

Now, Trey played an integral role over the last six months in the development of both the plan and the event, and in some of our thinking in this space.
But somebody who's been with the OpenSSF project longer than I have, and who's been with the Linux Foundation for quite a while as well, is David Wheeler. David, could you introduce yourself too, and give a brief background of the OpenSSF, kind of where it came from? Your background is also very much in the Washington area, and you have some insight into engagement with government as well, don't you?

Yeah, so, David Wheeler. I actually joined the Linux Foundation in April 2020, and we've all had to make some adjustments since then. I've been working in open source software and developing secure software for literally decades. As far as my role: my title says I'm the Director of Open Source Supply Chain Security, and what exactly does that mean? It means basically I'm a developer and subject matter expert who goes around to various foundations within the Linux Foundation trying to help develop more secure software that we all, around the world, depend on. A significant part of my work is spent with the OpenSSF, doing whatever can be done to help, including the mobilization plan Brian mentioned. I think you and I had something like a 10-hour Zoom call trying to turn all the wonderful ideas from individuals into something that looked like a cohesive whole, and in fact, I'm happy with the result.

Where did the OpenSSF come from? Fundamentally, a lot of people have been concerned about security in software in general, including open source software, because open source software is still software; it's not immune to the problems of software. And at that time there were a number of different groups. There was the CII, the Core Infrastructure Initiative; there was Jossi; there was another group called the OSSC. I can't remember what all these acronyms stand for.
But there were three different groups of organizations, all interested in improving the state of security in open source software, and it really didn't make any sense to have three different groups; there were even some overlaps among the members. So: hey, let's all get together in one place and work on a common goal. That's really the origin story of the OpenSSF, getting together so we can all collaborate and address these issues that are of concern to everybody.

Thank you, David. So let me also give a little background on what led us to now, or at least to the May meeting. By the way, I'm Brian Behlendorf, executive director of the OpenSSF. Like David, I work for the Linux Foundation. I'd actually been with the Linux Foundation for five years, and I kind of parachuted into the OpenSSF community in September because they were, frankly, doing some really cool things. It was about that pivot point where it was: all right, let's see what we can do going from a kind of ad hoc, informal collection of great ideas, and actually some great content and a couple of projects, to a funded project that can go out and tell the world about what it's doing, recruit more efforts, recruit more people to come in, and do something really ambitious. But up until that point, and even at that point in September, the OpenSSF was kind of like other Linux Foundation projects, where the role of the Linux Foundation was very much to act as, you could think of it as, air traffic controller, convener, facilitator. It's still up to all of the people who show up to do something, right, to build the thing. And each of the Linux Foundation projects is a little optimistic, a little ambitious; a little aspirational might be the term. Maybe they find, you know, millions and millions of users like Kubernetes has. Maybe they find 10 really important users and that sustains them, right?
But generally speaking, the Linux Foundation just tries to support whatever community is willing to show up. We don't think like a software company, which is: let's go out and get 80% of the market for this thing and drive everyone top-down that way. It's all very bottom-up. But there was this thing that happened in December that I think galvanized a lot of folks' attention on the questions of not just the security of open source software, in terms of vulnerabilities and the rate at which they get fixed, but also: are there weaknesses in the supply chain? Now maybe it's unfair to overemphasize the Log4Shell compromise over things that had come before, and in some cases since, you know, compromises in JavaScript libraries that are widely used but maintained by one person. But there was this sense in December of a mad scramble, ruining lots of people's holidays, to go and just understand: where am I running Log4j? Let alone, am I vulnerable to it? Let alone, how do I get it fixed and get the fix out there? And the people asking that question weren't just the CISOs of companies large and small. It was people like the National Security Council. In fact, we got a letter, as some companies did and the Apache Software Foundation did, kind of, sorry, inviting us to a conversation. When you get a letter from the White House inviting you to a conversation, it's a little bit intimidating, right? But the framing of the letter was all about: we're concerned, are you all okay? Is this business as usual? Which, you know, no one expects software to be bug-free. But if you are building the bridges and highways of the digital world, this is like a truck rumbling by, a bridge falling down, and us going, oops, let's just build the bridge again. So the question was: are there systematic improvements to be made? Are there systematic things that are wrong about the open source process?
The way we've all depended upon the largesse of companies and the individual selflessness of developers to build things that now run our digital bridges and highways. But more importantly, are there targeted improvements that the government in particular could be helpful with? Even if it's just focused attention on this, let alone starting to make some investments. And to give due credit, this administration had started to send some of those signals and make some of those investments. In May of last year, they issued something called Executive Order 14028, which called for a whole lot of different things, among them the further adoption of software bills of materials, and Allan Friedman here deserves a lot of credit for making good on that promise; you probably even fed a little bit into the EO itself. Okay, so it was really starting to raise awareness of the importance of some systematic improvements: not open source specific, but here are some things we might need to do. But this meeting in January really was about open source software. And we all had to be honest: open source software was developed at a time when open source development was high trust, when the community was smaller, when it was still not too far beyond Dunbar's number, which is about 150 people, right, the number of social connections any individual can keep in their head. You could reasonably have had a chance of meeting the people involved in the software you're building on top of, or your downstream users in aggregate, and a lot depended upon those interpersonal relationships to build the trust between components that led us to overall reliable software. And even that scaled a little bit.
A lot of people decided to use the Linux kernel because they saw that Linus himself was a little bit of a badass, and he had his problems, but he and the community around it generally developed a reputation for taking security seriously and delivering quality software. But that doesn't work when there are 40 million components out there, when there are thousands of dependencies being included in things now, when a lot of those are one-person projects, on and on. So we had this meeting; it was a six-hour Zoom call. Some of us flew out for the Zoom call because it was supposed to be in person, and then Omicron happened, and so we flew home. But it was a six-hour Zoom call, at the end of which they basically prompted us in the private sector to come up with steps that might solve this. We certainly talked about here's what we're doing at the OpenSSF, and our peer member organizations that were there said, yeah, there's a bunch of great ideas. But the thing was thrown back to us: how do you go from good ideas, and even rough consensus and running code, to actually solving these problems? So we took that back. Things got a little complicated in February and March, one country invaded another, but we kept asking the White House: do you want to have a follow-up meeting to continue this conversation? And in early April they said, yeah, why don't we do this, and how about you convene it for us, because we're busy. But we'll come, and we'll bring the same level of people that we had attend the last meeting; we'll support you in doing this. But we'd kind of like an update: have you all taken some of these goals and these issues seriously?
So starting in early April, we in the OpenSSF scrambled a little and started to try to take these different ideas, some of them based on existing projects like Sigstore and SLSA and the problems they solve, but some of them asking, based on that conversation, are there new things we could be doing? Things like: where is the center of gravity around incident response in open source projects? How do you help a project that is under-resourced but gets a notification of a really bad vulnerability? How do you help them walk through the guide that we've created for coordinated vulnerability disclosure? And yet there are certain skills that presumes: political communication skills, coordination skills, that maybe call for the kind of thing we have in other areas. You've perhaps heard of PSIRTs; what does PSIRT stand for? Product security incident response team. That kind of thing exists out there, but no one's really doing it for open source, or if they do, it's ad hoc and not well advertised. How do you scale up things like third-party code audits, right, and other things that we know tend to lead to better software? So we assembled a set of 10 different ideas and pulled together volunteer teams around those 10 to say: can we take this from just a one-sentence or one-paragraph description to a reasonable body of work, a reasonable plan, that looks one level deeper? What are some targets? What are some existing efforts we could build upon? How would we measure success? And then, based on all that, what's the first tranche, what's the minimum viable team, a minimum viable product really, that over the course of a year or two could deliver some major impact on that issue, right?
And those teams all kind of scrambled to develop a three-to-five-page plan, some of them deeper and more detailed than others, some of them with much larger dollar figures than others. Now, in that January meeting, by the way, we had said that if we do this, it could cost billions of dollars, because it could be a lot of work, right? But the value we create would be even higher. I should note that it cost Equifax $700 million to pay the fine on the breach that happened in, was it 2017 or '18? It was due to their failure to update Apache Struts; they were a couple of months behind, weren't they, or something like that. So if one breach for one company is a $700 million fine, what's the economic value of having prevented that, for that one company, for N number of companies, for the industry as a whole? It could be large. So we pulled these teams together. We asked them, not that money is no object, but what budget would allow you to actually have an impact? We added it all up, and it came to $70 million in year one and $80 million in year two across these 10 different plans, which is both an ambitious number when you think about the size and money associated with open source, but also really affordable compared to the impact we think we would have. So we pulled together this meeting to roll the plan out, to release it publicly, and had folks there in D.C. And our point in going back to D.C. wasn't to say to the White House: could you write us a check, just write the check and we will solve the open source security problem, right? First off, we kind of don't think that's how procurement works, and I do want to talk a little bit about that. But secondly, it was to try to find ways for the investments the government is already making to align well with what we wanted to do. And to be clear, this was not a US-government-driven plan.
The plan itself says nothing about the US government shall do this, or needs to do this, or that we do this to align with that standard or this executive order or whatever, but it does try to lay out a template: if we all work together and align our efforts, here's the impact we could have. So the plan has now been out, we've been talking a lot about it, and a couple of the teams have started taking the next step in honing those plans. First off, it's very much a draft; it's still version 0.9.1, just to make it clear that this stuff is going to be a work in progress. One thing we did get in that May meeting was a commitment from our existing member organizations, about six of them, toward $30 million in pledges against that $150 million number. Now we still have to, in each of these 10 different streams, come up with an investable plan, right? It very much is like a startup: you have to say, here's a credible set of targets and here's what it's going to take, and whether the Linux Foundation acts as the coordinator for that or somebody else does, we'll work the details out. We won't start the whole plan all at once. We'll probably start a couple of streams early, and my hope is that by the end of this year we get eight of those 10 running in some way, maybe all 10. But that's now the hard work, the sausage making, I think, to actually get this stuff implemented. I'm really excited about where we got to, and we've already received interest from other governments in having a similar conversation. So with all of that put together, now we are trying to figure out how to work with government to go forward with this, but also to take an honest step back. This was a sprint by many of these teams, including the two-day, kind of 16-hour, Zoom call sprint between David and me to try to align those things.
We took a whole lot of liberties, and a lot of it was throwing things at the wall, and some of that perhaps doesn't mesh cleanly with the reality of how things are actually going to work on the ground. So I'd like to throw this to the panel, and maybe, Trey, I could start with you. When you read the plan, and you did contribute a couple of pieces of it, which I deeply appreciated, what were the things you saw that were either missed opportunities or places we could have done more with the plan?

Yeah, no, appreciate that. I think, to the credit of the OpenSSF, this was something pulled together as a way to scope mitigation of a set of software risks that hadn't really been done at that level before. One of the challenges that we're trying to address, and that I think the OpenSSF is trying to address with the mobilization plan, is that this is not vendor specific, it's not vertical specific; it really is across the ecosystem. These are significant interleaving sources of risk, in how you build software, how you attribute software, how you attest to the security of software, that affect a massive diversity of programs and packages. So, two things I think would be helpful to grow on, and maybe to nudge on in the next year and a half. One: we recognize that all 10 of these aren't going to succeed in the same way, and we recognize that there are a finite number of contributors and a finite amount of resources inside this community right now to do this kind of very sophisticated security engineering. So of the 10, where do we need to start? What's going to have the greatest impact relative to the risk that we see today? And of the 10, which is most likely to succeed against its stated goals? Those are two different questions, and answering them in parallel is going to be a really important process for this community as we go forward.
And it's an area where I think having some government demand signals is going to be useful as a way of understanding the space. Not because what we're doing with the mobilization plan needs to be entirely responsive to what the US government or the Japanese government or the French government wants, but because they're a consumer of this kind of information and an assessor of this kind of risk, so they're a stakeholder in this community as we go. So that's one area: really thinking about prioritization. The second thing I think about, though, is: for a lot of the folks in this room, how do you all plug into this? How do you take the efforts, maybe the spare volunteer time you have, the existing program and security work that you're doing, either with a company or as an independent, and plug into these 10 work streams? Because I think the word stream is very helpful: they're guides, they're the opening of a conversation, they're a place to channel resources, time, and energy. And as we're thinking about this from the policy standpoint: how can we help the US government and other entities understand where the gaps are? Where are volunteer energies naturally going to concentrate? Where are the parts that are missing? What's the stuff that's hard to get a lot of press for, hard to get a lot of company effort for, places where resources aren't necessarily flowing in the same way to support these streams? So the other aspect of this is really thinking about, for you all: how can you help? How can you contribute? Where do you plug into this? That's something all the work streams are going to need to think about more aggressively: how do I get people involved in this process?

Thanks, Trey.
Amelie, based on the parts of the plan that you've seen, and the conversations we had with the House Committee on Science the day before the plan was released, and looking at the questions they asked: do you think there are parts of this plan that are responsive to the concerns of folks beyond the people we had in that room? And put yourself back in those situations where you were working inside government. If the private sector came to you with this plan and said, we want to engage with your organizations in the rollout of this, what would have been your reaction? How do you think it was received within the executive branch, within agencies?

Well, thinking back, the first major wake-up call, and this was mentioned in testimony, was the Heartbleed incident back in 2014. That was kind of the first "oh my gosh, open source software exists" moment, and the reaction was to send out a data call to find out which web servers were affected, because it was OpenSSL. So the literacy about what open source software is and what it contributes has definitely increased over time. My first interaction with that committee was way back in 1998, on my second job out of college, and they were amazed that I had a laptop and all sorts of stuff. So having folks on the panel there who were versed in AI and machine learning, and former software developers and whatnot, was a benefit. But thinking about SBOMs everywhere: for me, during my time at the Office of Management and Budget during the Heartbleed incident, something like an SBOM would have been very useful for the response. That is definitely something that, with a historical eye, would have been useful.
It wouldn't have taken me so long to say, hey, these libraries exist in your cell phones, in networking equipment, in other applications, because they were compiled in. I don't think people at the time were really aware of the prevalence of open source software because, much as an SBOM is an ingredients list, there was no ingredients list; no one had an idea, even in commercial software, of what existed. Now, with Log4j and a couple of other incidents since then, folks are realizing that even the commercial vendors are liberally picking from the open source ecosystem to rapidly increase the capabilities and features of not only the software they're selling in embedded devices, like your smart TVs and stuff at home, but also, any time someone accesses a cloud provider, a lot of that digital infrastructure is based on open source software components. And when those things go down, root cause analysis is one of the biggest challenges, because you're having to work through all those dependencies. Say Amazon goes out: they're looking at what service it was, and then what software supports that service. And if it's based on a bunch of open source libraries, who do you have to contact to actually fix those vulnerabilities or those issues? That's one of those challenges I don't think necessarily got conveyed very well, but we're at the first step here. Does Congress understand that code is infrastructure at this point in time, and that we should treat it like our roads, our waterways, the rest of our transportation systems, our power delivery? Even equating it to, say, the whole ransomware issue with the Colonial Pipeline: we had an incident earlier this year where an open source developer was frustrated with not getting paid and decided to put a message in a library. How do you control that process and build that trust as well?
And I don't think that awareness fully exists at this point. But because we keep poking at this, more and more of that awareness is starting to come out, and I think the biggest challenges are going to be those next steps. Also, my big thing with government is that even though it's significant in size, it can't be everywhere all the time. People don't scale well, and the government doesn't really scale well; you can see that with natural disasters and whatnot. There's a finite amount of resources that can be applied; it's just a matter of where the best points are to reinforce the comments that Trey and Brian made.

You know, I think there was one moment where the enterprise world realized it could use open source code legitimately. I mean, people had been using it quietly under the covers for a long time, even before that moment. But then it was like, all right, there are companies like IBM and Red Hat, now obviously part of IBM, and others, HP and others, who said: okay, this is goodness, we'll incorporate it. And then there was a distinctly later moment, by a couple of years at least, where you started to see them return patches. Actually, I'd say IBM did this very early on in the Linux and Apache communities, but en masse there was this recognition that use was a two-way street: that as a user, you're always going to find insufficiencies, bugs, improvements to docs, and that as a first-order principle of use, being able to contribute back upstream to open source is important. I don't know that we've ever gotten there with at least the US federal government, as a first-order principle that if agencies use externally developed open source code, they should contribute back upstream. But I wonder if we're about to see a change in that. And I mean, that's the optimist in me, right?
I wonder if we're about to see a recognition that, as a national security matter, investing in the underlying security of, again, the bridges and highways of code is a critical infrastructure argument. At least I was hearing tendrils of that in the conversations we had with them. Is that just optimism speaking, or is there something of substance there? Let me actually ask that as a follow-up to Amelie.

I was gonna say, it's the "NSA has entered the chat" kind of thing, where they're committing to the tree where you didn't see them before. I mean, think back: you mentioned Linux before, and there's the whole idea of SELinux, which addressed that particular issue way back when. And now, you know, I don't know of anybody who actively uses SELinux, because it was such a pain to configure, but it solved a particular problem. Now, having been at OMB right around the time they were doing the open source policy, that was the biggest difficulty we had: finding the right levers to get people to contribute back. It wasn't that there weren't CTOs and CIOs and developers within the federal government who wanted to; it's just that there wasn't a process or mechanism or governance behind it. I even found that after I left the government for the first time, when I went to a large media conglomerate, they had just gotten through a multi-year process to allow open source contributions back. So there is literally an inertia that needs to be overcome, in both the public and private sector, and sometimes it's just a matter of willpower and somebody to champion that. And I think that's probably mirrored across a lot of organizations.

David, by the way, co-authored, or wrote, the original "open source is COTS" memo, which allowed so many people, at least in the DoD, to justify: hey, we can use this stuff.
It's just like commercial off-the-shelf software. It was a 2009 DoD policy on open source software.

Right, and actually one of my earlier presentations got into that policy and got me involved in some of that. A few quick comments. We already are seeing occasional releases from governments, not just the US government, of open source software. You may have heard of the internet, you know; that's due both to the development of protocols and to the development and release of open source. You're either using one of those open source stacks or another stack that was written after looking at those open source ones. And you mentioned SELinux: if you're using Android, if you're using Red Hat Enterprise Linux, you're using that. There's more. But that said, Brian, what you said earlier is absolutely true: the amount of contribution back, certainly from the US government, is far below what it could be, and I would argue should be. There are cases where it makes sense not to, but there are far more cases where, you know, they're shooting themselves in the foot by not participating and collaborating. And although I know a little less about other governments, I suspect that's true for many of them as well, because, as you mentioned, this is not a US-only problem; this is a global challenge, and we need to work on it globally.

Well, I want to pull us back to the plan a little, and I do want to leave some time for a couple of questions as well, if any of you have them. I apologize, I don't think we have the time to go through all 10 streams and give enough of a presentation to help you understand each of them, but I certainly want to highlight a few as noteworthy, and you can get the plan from the OpenSSF website, of course.
But, David, what's your take on which of the streams are most fundable, the ones we could most likely get started with most quickly, and then which are perhaps ripest for collaboration with government? Well, as far as that first question, we actually worked hard to make sure that all of those streams were things we could do in the near term. We weren't trying to create a "hey, do this research and in 30 years maybe something will pop out." They're all very much things where you can at least have a basic idea of what they do. I would say, for example, the Sigstore stream is particularly straightforward, in part because it's very specifically focused, and that makes it easier to execute. But I think all of them we can at least get started on. I mean, on the education piece, we've already done some work, and I think there are some clear steps forward. And really, I think that's true for all of them. After a certain point, yes, you need to plan, but you need to actually start executing instead of just admiring and creating better plans. There's the saying, plans are irrelevant, planning is essential, right? Or, no plan survives first contact with the enemy. I don't know who the enemy is in this case. But let me highlight one of them, the one we called SBOMs Everywhere, stream number nine. It wasn't just about echoing, hey, SBOMs are important, let's throw money at them. It was also about trying to say: the kind of future I think we want is one where software bills of materials are assembled continuously throughout the supply chain, starting as far upstream as you can, by default, by the development tools, so that it's a minimal lift for developers to pick up.
And it is something that is pervasively and easily created all the way through, rather than the alternative, which it seems like we're trending towards in some places, where SBOMs are assembled at the very end as a certificate to hit a certification checklist. And because it's a lot of work to do it at the end, the companies that do it see that as proprietary value rather than as something that's ubiquitous and, frankly, open source, right? And to do that, the logic was, well, we should be investing in tooling that embeds it into the build tools and the infrastructure, makes it easy, makes SBOMs easy to validate. The challenge there is that there are a couple of different formats with adoption out there. And Allan from CISA very helpfully told us, any of these formats will work, so use them all. Which is kind of like saying, hey, DNS, there could be two or three of them. But what it actually triggered was a bunch of us in the SBOM community saying, well, what do we have in common? Is there a greatest common denominator to the security use cases and the security metadata that's worth tracking? And could we find a way to have convertibility between these different formats around some core data attributes and a taxonomy, so that you could use these different formats to some degree interchangeably, right? It's hard to do in practice, but if we could channel investment into tooling that makes it easier to at least validate across different formats like SPDX and CycloneDX, then that would avoid us having to get into a holy war over one format, right? So that's where the streams actually served as an opportunity to have some provocative conversations about the right strategy. And that I found one of the most valuable parts of it.
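To make that convertibility idea concrete, here is a minimal sketch of mapping the shared "core" security attributes (name, version, package URL, license) from a CycloneDX-style component into an SPDX-style package. This is an illustration of what a greatest-common-denominator mapping might look like, not a real converter: the SPDX field names follow the published spec, but the input and output shapes are simplified assumptions.

```python
# Minimal sketch of SBOM format interchange: translate one CycloneDX-style
# component dict into an SPDX-style package dict, carrying over only the
# core security attributes both formats share. Illustrative, not a full
# converter for either specification.

def cyclonedx_component_to_spdx_package(component: dict) -> dict:
    """Map a simplified CycloneDX component to a simplified SPDX package."""
    return {
        "name": component["name"],
        "versionInfo": component.get("version", "NOASSERTION"),
        # Both formats can carry a package URL (purl); in SPDX it lives
        # in externalRefs rather than as a top-level field.
        "externalRefs": [
            {
                "referenceCategory": "PACKAGE-MANAGER",
                "referenceType": "purl",
                "referenceLocator": component["purl"],
            }
        ] if "purl" in component else [],
        "licenseConcluded": component.get("license", "NOASSERTION"),
    }

if __name__ == "__main__":
    comp = {
        "name": "log4j-core",
        "version": "2.17.1",
        "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
        "license": "Apache-2.0",
    }
    print(cyclonedx_component_to_spdx_package(comp))
```

The design point the panel is making is exactly what the sketch exposes: once the core attributes are agreed on, the direction of conversion stops mattering, and fields one format lacks degrade gracefully (here to SPDX's `NOASSERTION`).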
Now, government doesn't have to help us fund that work, but given the role that not only Executive Order 14028 played in that, but also things we've seen out of the Department of Health and Human Services around SBOMs for medical devices, right? We see it potentially in other regulated industries, and in the automotive industry they're starting to talk a lot about this as well. I have a lot of confidence that they can create the pull for what otherwise would just be a checklist, nice-to-have, bureaucratic kind of thing. But I still think it's up to us in the open source community to figure out how to create the supply on our side to meet that demand. Any other thoughts, though, that come specifically from the SBOM stream, things we'll practically have to do to get it launched, and any other possibilities you see for working with government? Obviously, you've already mentioned it: tooling, tooling, tooling. We need to make this as easy as possible, as default as possible. I don't think it's going to be done in a day. I think there is a fundamental challenge with SBOMs specifically, because it's not something that we, the software development community, have traditionally done. And what's, I guess, worse in a sense is that it's primarily a help to the end user; it's something the developer does to help the ones downstream, not necessarily themselves. Yeah, I see Allan nodding. There's an incentives challenge, and we really need to try to help incentivize and bring down the barriers to adoption. And I'll throw one more question to, oh, Amelie, you wanted to comment on that too. Yeah, I actually had an entire Twitter thread because I woke up inspired this morning on this, and I think this may appeal because I tagged Allan about 7,000 times in it. Allan is the fifth Beatle on this panel, by the way. Yeah, he definitely is.
But having done the executive-level thing as a CIO and a CTO, and with a background in decision science, one of the biggest gaps is between "cool, we've got a tool, or a format, or some type of interchange something-or-other" and, I think Dave touched on this about the end user, the fact that the end user also needs to be a decision maker. So if you have a CTO looking at SBOM reports, the question is, well, how bad is my risk of using this? A lot of the time you just can't tear it out when you highlight, hey, I've got a bunch of vulnerable libraries; it's stuff you may have bought, it's stuff you may have built. So providing that in a format, or having tooling that gives you the ability to score and do an assessment, is that next level up that needs to occur. A lot of this is really good because the work plan addresses things at that technical engineering level, but the next step for all of this is that extra level of engagement with senior leaders, who are the ones literally writing the checks and making those decisions: hey, we're gonna do this mission-support thing, to use government terms, aligned with the business operations of the mission here; what capabilities can I have if I need to use this and no one else provides it; what is the risk; and what mitigations do I need to put in place? And that's where a well-defined SBOM that has tags and extra fields, if you look at the standards being proposed, can actually provide some extra value. So that's how you can use it to sell this up to the folks who are in the position of writing the checks and pushing the buttons for you. Thanks, Amelie. We have time for one question. Jeff, I think you had your hand up. He was actually flagging that she had a question. Does anyone out there have a question? Oh, yes. Thank you very much. I'll just repeat it for everyone's sake.
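Amelie's point about scoring for decision makers could look something like this minimal sketch: rolling SBOM components and a stubbed vulnerability lookup up into a one-line summary a CTO could act on. The data shapes, the CVSS-style score buckets, and the rating labels are all illustrative assumptions, not any standard.

```python
# Hypothetical roll-up of SBOM data into a decision-maker summary: given a
# list of components and a stub vulnerability lookup, count the affected
# components and bucket the worst severity score (0-10, CVSS-style) into a
# coarse risk rating. Thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    version: str

def risk_summary(components, known_vulns):
    """known_vulns maps (name, version) -> highest severity score (0-10)."""
    scores = [known_vulns[(c.name, c.version)]
              for c in components if (c.name, c.version) in known_vulns]
    worst = max(scores, default=0.0)
    rating = ("critical" if worst >= 9.0 else
              "high" if worst >= 7.0 else
              "medium" if worst >= 4.0 else
              "low")
    return {
        "components": len(components),
        "affected": len(scores),
        "worst_cvss": worst,
        "rating": rating,
    }

if __name__ == "__main__":
    comps = [Component("openssl", "1.0.1"), Component("zlib", "1.2.11")]
    vulns = {("openssl", "1.0.1"): 7.5}  # stubbed vulnerability feed
    print(risk_summary(comps, vulns))
```

The point of a sketch like this is the audience shift she describes: the per-library detail lives at the engineering level, while the executive writing the checks sees only "2 components, 1 affected, worst 7.5, high," which is enough to drive a mitigation decision.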
So what was your name? Raven, thank you very much. Raven works at the Smithsonian; she works on open source software. The Smithsonian is quasi-governmental: governmental, educational, and non-governmental. And her point was that there isn't yet a culture of IP give-back, that IP is seen as something you lock down, and it feels like there's still a culture shift, a mindset shift, that needs to happen. Trey, do you see that as well? Do you see places where that is changing? Are there parts of government changing faster? Well, I think there are two things that are really important about this. And you've hit on one of them, which is that at the moment, government still treats open source as a product that they buy and use. And so the expectation of "I bought it, it's mine," right? It still really locks in. And I think, Amelie, to your point about the anonymous large global media company that went through these challenges, there's a cultural shift that's gonna take place at the organizational level about defining what open source is. It's a change that will have to be reflected in policy. But I think the higher-level part of this is shifting the discussion to open source as infrastructure as opposed to product. Where there is understanding by policy makers, by the executives who run these agencies, by Congress when they're apportioning budgets and making rules about procurement, where they recognize that it's in the national interest to see commits back upstream, to see contributions back into the ecosystem, that it's helping not just other agencies but actually pushing back into the private sector, that it's driving economic value, that's where a lot of that sea change is gonna come. And I think that's part of the conversation that you all, and that we, need to be a part of: you're not buying products anymore, you're supporting the roads and the bridges that everybody is using. But that's absolutely a shift.
Yeah, if I can, quick. One quick one, yeah. So if you're working specifically with the US government, there are some key things you need to understand about how the FAR and the DFARS work. She probably does already. Okay, that's fine. And if you do, that's wonderful. But the folks you work with may not. All too often, the issue is who holds the copyright; the question is who holds which rights, so you need to track that down. Some organizations, for example the Department of Defense, actually have a formal policy that says we encourage release back if there's a good reason to do so. And one of those reasons is to make it so that future versions will have the fixes. Because otherwise, project forks come up all over the place and people have to fix the same thing 50, 70 times. So there are very good reasons to avoid that. And the problem is really, I think in part, a matter of education of various decision makers. And Amelie, we're right at time. Any closing thoughts from you on other guidance that you could give to folks in government, or to government in general, about encouraging givebacks and participation in big efforts like the OpenSSF, the mobilization plan, others? Yeah, I promise it won't bite much. You know, what she mentioned from the Smithsonian, that it is a culture change. I know that was the biggest sea change I saw from interacting with government originally, which was: I don't trust it because I don't trust the lineage of it. It could come from China, Russia, whatever, any of those folks that the DSS doesn't like us to work with. But understand, I think Ava commented on my Twitter feed, this thing about "trust the process, not the person." I want to give her, or them, credit for that. And that's the key thing: understand that, with the automation, it does have eyes on it, it does get checked.
If we get these work streams in place where there's third-party code audit, that gives you the independence of trusting not just that person, or maybe where they're from, but the process. That builds the trust and the scaffolding that's gonna be required for governments, both the US and elsewhere, to basically consume it without the fear. And that feeds into the culture change, because there's no second-guessing, much like we have with FedRAMP, where something goes through a FedRAMP process and then yet another agency goes and basically re-FedRAMPs it. So, you know, to de-dupe the work, that's the key thing about building that trust and following these things within the work streams. Thank you, Amelie. Thank you, David. Thank you, Trey. Thank you, Raven. And yeah, that's it for our session. Thank you all very much.