Hello everyone. Thank you for coming to our webinar today. We're really excited to tell you more about this project that we've begun at the OpenSSF called Alpha-Omega. It is the beginnings of this project; it is, as we call it, an experiment. It's something we've put a lot of thought into, but it's still very early days, and we'd love to find ways to involve all of you and push this forward, and faster. So we thought we'd lay the groundwork and look for some opportunities to collaborate with all of you. Let me give you an overview of what you'll hear about today. We'll give you a bit of the background and what drove the thinking and the conversations behind this. We'll do an overview of the mission and the vision for the project, and then go into some details about how we plan to actually deliver on that mission and vision. We'll wrap up with some places where you all can contribute. And then we really want to leave a good chunk of time, perhaps about half the webinar, for open conversation and Q&A about where we can take this. There's still so much to think about and build here, and involving as many of you in that would be great. To get started: I'm Brian Behlendorf. I'm General Manager for the Open Source Security Foundation, which is part of the Linux Foundation. I've been with the Linux Foundation since 2016 in a couple of other capacities, and I've been leading the OpenSSF since October of last year. And I've been in the open source space for about a hundred million years. Why don't I pass the baton to Michael Scovetta to introduce himself. Hi, I'm Michael Scovetta. I lead an open source security team at Microsoft. Within the OpenSSF, I lead the Identifying Security Threats Working Group. I've been in software and software security for about 20 years, probably more. I'm super excited to join Brian and Michael in getting this Alpha-Omega project off the ground.
And I'm super excited to see where it goes, too. Michael? Hi, everyone, Michael Winser. I've been building software for way too long; my first product was a kitchen design program back in the 80s. I've been working at Google now for about seven or eight years, a lot of it in development tooling, and for the past four or five years really worrying about software supply chain security. I'm excited, and terrified, about how we're finally waking up to it and paying attention to it. And I've been working with Michael and Brian on Alpha-Omega to try to put our efforts together and see what we can do. So very excited to meet you, everybody, talk about these things, and I hope we can all learn more. Thank you to the two Michaels. So Alpha-Omega is an attempt to really look at how open source software is being written in the modern world. We know that open source software is the foundation of practically all modern technology; we see stats suggesting that 90% of the average software stack is actually open source software underneath. And we know that society needs that foundation to be safe, secure, and resilient. There's no better evidence for this than the fact that a bug in a Java logging framework can trigger a series of meetings and public proclamations by the White House and other policy-making organizations, and drive a whole lot of disruption and investment by organizations trying to close that hole. This is critical infrastructure now. I think everybody's recognizing this, even far beyond our own bubble. These are bridges and highways, and these are also digital public goods.
And so we really need to start to think about how we best support the existing mechanisms for building open source code, the existing maintainers, and the existing foundations and organizations, in a way that isn't just about delivering a 300-page tome of thou-shalts and penalizing them when a defect is found, but is instead something much more bottom-up, much more supportive, much more systematic, not just looking at a couple of projects but trying to cover the breadth of all the open source projects used out there in a meaningful way. But let's be a bit humble about the scope of that work: there are a lot of open source projects and a lot of open source developers, and we'll never close every hole. The last defect is fixed when the last user has passed away, as they say. So we've got some ideas here. This is an experiment, but an experiment along a couple of very specific lines. Before I hand it off to Michael Winser to elaborate a bit more on what it actually is, let me pause and say what it's not. In particular, it is not a fund to pay open source project maintainers directly. There are plenty of other projects trying to do that, trying to answer the question of what the sustainability model for open source is in different ways. There are some targeted places where we might apply funding to help get over the hump in a couple of projects, but we'll go into that. It's not a certification body or process; we're not trying to bless or recognize good versus bad, or have a formal FIPS-oriented kind of thing. It's not a replacement for normal security practices. Our hope, in fact, is that this ends up being a capacity-building mechanism that helps lift how other organizations, other open source foundations, and companies build open source code and the practices they adopt.
We're going to be reusing a lot of work coming from other parts of the OpenSSF to make that happen. This is not a process for forking and taking over open source projects; people love conspiracy theories, but that doesn't get any traction here. Nor is this a replacement for any other existing services. There's very little in this that we think is actually being done well by anybody else out there in the open source ecosystem, and we'd really love to partner with anybody who has similar objectives or is complementary to what we do, because again, it's a big challenge. It's also not a private zero-day trading club. We will be dealing with vulnerabilities, perhaps ones that haven't yet been disclosed, but there's a whole universe of proper thinking and proper care to be applied to how those get managed and how maintainers and others work through coordinated vulnerability disclosure processes. Finally, it's not a fully automated scanner that will just launch junk vulnerabilities at maintainers and leave it up to them to clean up. This is a bit more thoughtful than just tossing scan output over the wall, we think at least. Why don't we transition to what it actually is? With that, I'd like to pass the baton to Michael Winser. Thanks, Brian. Our mission is derived from the OpenSSF mission. The OpenSSF is here to create a space where we can, as an industry, collectively understand and work out long-term solutions for software supply chain security. More and more of us are aware of it; most of you are here because you've heard of it and are interested. Alpha-Omega is really trying to be a focused, applied, directed activity. Some of that direction is specific to certain projects, and some of it is meant to allow us to scale and provide scaled solutions.
So our mission is really to provide that direct maintainer engagement and bring expert analysis to achieve concrete outcomes, even as the toolchains and the working groups and all the other machinery we're putting in place in the OpenSSF are developing the future. We're trying to act now to improve things, and then scale it up. Next slide. And so our aspirational vision, where we're trying to get to, is one where critical open source projects are actually secure. And it's important to note that every word here matters. Critical: not every open source project is critical. Not everything has to be secure now; just as a startup is going to prioritize certain things over others, the same thing is true for projects and how we consume them. And we also have to be pragmatic. We're not going to eliminate vulnerabilities; we want to find them and fix them quickly. And we've learned over and over again that the operational management of vulnerabilities after the fact is as important as actually fixing the source code when you see the problem, or making the build process more secure. So if you think of our mission as dividing up the big problem domain, our vision starts to say where and how we're going to act, our solution space. And now Michael Scovetta will talk us through each of these, Alpha and Omega, how we think about them and break it down from there. Thanks, Michael. So for Alpha, the main point is working with maintainers directly. I have more information on the next slide, but basically, we're going to target the very most critical open source projects. Even among critical projects, if we think there are 10,000 or 50,000 critical open source projects, the most critical 100 or 200 of that list would be targets for Alpha. Alpha will be essentially expensive, time-consuming, and heavyweight, at least heavyweight on our end.
We hope it's not heavyweight for the maintainers. It's a way for us to engage, understand what their security posture is, understand where their gaps are, and help them fill those gaps, remediate those vulnerabilities, triage those bugs, and whatever else is needed. If the thing a project needs help with most is moving rocks from A to B, and that helps their security posture, then we should be there to help them move rocks from A to B. Next slide. So, getting super specific here: while this slide has lots of words on it, it's all intended to be examples of the kinds of things we can do and where our thinking is. As both Brian and Michael said, this is an experiment and these are very early days, so some of this is subject to change. But imagine you're in a restaurant and you want to know what that restaurant can offer. On Alpha's menu, the appetizer, at the beginning of an engagement with a project, is learning what their security challenges are. We'll engage them and have a conversation, probably over the course of weeks, to understand where we could have the most impact. If both we and the project think this is a fit, then we take a step forward and look at the main courses: where could Alpha provide the most value? This could be things like a source code audit. This could be setting up tools. This could just be encouraging them to set up two-factor authentication for publishing, or commit signing, or things like that. There are tools like the OpenSSF Scorecard, which can give a rundown of where a project stands on certain metrics; maybe improving those metrics is the way forward. Or maybe they just get lots of security vulnerability reports, some of them low quality, and they need help triaging them.
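To make the Scorecard idea concrete, here is a minimal sketch of the kind of check-and-score assessment such a tool performs. The check names and weights below are hypothetical illustrations, not the real Scorecard rules; the point is only that discrete pass/fail checks roll up into a single score a project can improve over time.

```python
# Minimal sketch of a Scorecard-style assessment: each check inspects a
# project and yields a pass/fail result; results roll up into a 0-10 score.
# Check names and weights here are illustrative, not the real Scorecard rules.

from dataclasses import dataclass

@dataclass
class Check:
    name: str
    weight: int    # relative importance of the check
    passed: bool   # outcome of inspecting the project

def aggregate_score(checks: list[Check]) -> float:
    """Weighted average of check outcomes, scaled to 0-10."""
    total = sum(c.weight for c in checks)
    earned = sum(c.weight for c in checks if c.passed)
    return round(10 * earned / total, 1) if total else 0.0

checks = [
    Check("branch-protection", weight=3, passed=True),
    Check("signed-releases", weight=2, passed=False),
    Check("token-permissions", weight=1, passed=True),
    Check("dependency-update-tool", weight=1, passed=False),
]
print(aggregate_score(checks))  # prints 5.7
```

An engagement could then focus on the failing checks (here, signed releases and automated dependency updates) as the highest-leverage improvements.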
And maybe they need help actually creating fixes. Whatever's needed, we want it to be on the table, as long as it's generally in the direction of improving the security of this critical open source project. And then for dessert, we do want to look back and see how we did, because we want to improve over time. Especially for the first five or ten of these engagements, we expect to learn a lot with each one and refine things. I hope the eleventh one goes better than the first, but we have to start somewhere. Omega is at the opposite end of the spectrum. Not opposite meaning the least used or impactful open source projects, but still within this critical set; let's say 10,000 is the nice round number we've been using. We want to use a combination of automated tools and scoring, and ML if that's reasonable, whatever tool is appropriate, to identify critical vulnerabilities: not all vulnerabilities, but just those most likely to be impactful. And then to have security analysts, experts, reviewing those results, validating them, and then, assuming they are authentic and important and need to be fixed, working with the project directly, reaching out and doing the coordinated vulnerability disclosure, but also lending a hand in creating those fixes if requested and if appropriate. So while there is a large degree of tooling here, it's not exclusively tooling; it's not fully automated. There are people involved. And if you go to the next slide, sorry, I should have gone here earlier: yes, the appetizer is using the tools to collect lots of information, lots of vulnerabilities, lots of facts about these projects. We will have software engineers, or security engineers, refining this rule set and building the system to automate the triage as much as possible.
We want this thing to be magically efficient, such that we can turn the crank on this machine and get a high-quality vulnerability out of it, and then keep turning the crank. Once we validate it with experts, we reach out and get the thing fixed. And then again, we look back to see how things are going and improve the tool and process over time. Again, this is an experiment. We think this approach will be successful; if not, we will adjust and tune it as we learn more. And back to Brian for how this will all work. So there are a lot of questions, of course, that all of you may have about how all of this will actually work. We are very much a part of the rest of the Open Source Security Foundation. A lot of these ideas came out of conversations in a number of different working groups, and we plan to stay very close to those working groups. The three that matter most for the activities here are, first, the Securing Critical Projects Working Group. This is a group that's been developing a mix of objective data, from things like the Harvard census report, which gives their view of critical projects based on metrics they're able to obtain, to conversations in the working group about code that sits very critically in the build infrastructure but might not show up in a software composition analysis, that kind of thing. That group is chartered with creating and maintaining a list of the top 100 projects, which was used most recently in distributing MFA tokens at the end of the year to developers on those top 100 projects. So we plan to work with them to come up with these additional lists and refine them over time. There's a related question I see popping up quite a bit, which is: how do we select from those top 100 the ones we'll be working with? And that's still a work in progress, to be honest. But we'll talk about that in a bit.
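The "turn the crank" loop described above can be sketched as a simple pipeline: automated tools emit candidate findings, triage rules filter the noise, and a human analyst confirms what survives before any outreach happens. Everything below (the field names, thresholds, and the analyst callback) is a hypothetical illustration of that flow, not actual Omega tooling.

```python
# Sketch of the Omega "crank": tools emit candidates, automated triage keeps
# only high-confidence, high-impact findings, and an analyst validates each
# survivor before disclosure. All names and thresholds are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    project: str
    rule: str
    severity: str   # e.g. "critical", "high", "low"
    score: float    # tool-assigned confidence, 0.0-1.0

def triage(findings, min_score=0.8, severities=("critical", "high")):
    """Automated stage: drop low-confidence or low-impact candidates."""
    return [f for f in findings if f.score >= min_score and f.severity in severities]

def crank(findings, analyst: Callable[[Finding], bool]):
    """Full loop: automated triage, then human validation of each survivor."""
    return [f for f in triage(findings) if analyst(f)]

raw = [
    Finding("libfoo", "cmd-injection", "critical", 0.95),
    Finding("libfoo", "style-nit", "low", 0.99),
    Finding("libbar", "path-traversal", "high", 0.40),
]
confirmed = crank(raw, analyst=lambda f: True)  # analyst stub approves everything
print([f.rule for f in confirmed])  # prints ['cmd-injection']
```

Only the corroborated, high-severity finding survives both stages; the confirmed list is what would feed into coordinated vulnerability disclosure.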
The second working group we work quite closely with is the Best Practices Working Group. This is going to be a feeder for so much of what's on the menu, especially in Alpha: talking with projects, trying to help them understand how they might adopt the Best Practices badge, the Scorecard, and other practices. But again, we do not want to walk in with a "just read all this stuff and use these tools and you'll be okay" attitude. It's got to be more bottom-up and needs-driven than that. But the Best Practices Working Group is creating a lot of value through what they're doing. And then there's the Vulnerability Disclosures Working Group. This is obviously going to be a big deal as we work through the results of the scans and see things that are problematic, perhaps not yet clearly a vulnerability, but worth talking about with the maintainers. If those evolve into actual vulnerabilities, then finding a way to work with those maintainers through a graceful disclosure process, such that fixes get rolled out to major stakeholders and everyone can update as quickly as possible once it's public, is going to be pretty important. That's a working group that's done a lot to figure out what standards and benchmarks might be appropriate for open source projects, because most of what's out there was not written with open source particularly in mind. So one way for all of you to get engaged with us is to find us in those working groups. But we know as well that we will need to develop a public engagement model for each of these two halves of the project. We've got some ideas, some things we think might work, but we want to evolve that with all of you in mind. So one of the ways to stay on top of that, just a little bit out of order here, is to join a Slack channel.
We use Slack pretty extensively at the OpenSSF. We know it's not always the best medium for deep, thoughtful conversations, but it is a good way to brainstorm and to share things coming in from the side that we might want to think about. So if you're interested, join the alpha_omega Slack channel at slack.openssf.org. That's a great conversational way to stay involved. But if that's a little more than you want right now and you just want to hear about updates: we'll obviously push updates to Twitter on the OpenSSF account and the like, but we've also created a mailing list specifically for announcements related to Alpha-Omega. That link, and all these links, by the way, are being dropped in the chat so you can follow them. We also have a raw expression-of-interest form. If you're interested in knowing more, or have some things you might be able to contribute, the link there will take you to that place on the website, and you can fill it out. This is really what we've figured out so far. I think now it's appropriate for us to pivot and look at the questions that have been submitted. I've been scanning those, and let me paraphrase what I think is about half the questions here to the two Michaels: what will be the criteria for deciding what's critical, and which of the projects, particularly for Alpha, we decide to work with? I know one part of that is coming up with the list of 100, or 200, critical projects, but we're not going to be able to have that kind of hands-on, helpful intervention with all 100 of those projects. So, Michael, take a stab, and then the other Michael should chime in as well.
You know, this is actually a question we've heard a lot already. We're working with the working groups to get that initial list and build out from there. But our priority up front is really about our ability to get actionable, shovel-ready work that we can start to think about, do, and learn from. And as we've said several times, we're still figuring out how this is all going to play out over time. So one of the first criteria for the early projects is not necessarily what the most critical project is, or the biggest security thing we can fix; that's not even answerable, really. It's: where can we go and start doing something now, make a difference now, learn from that effort, and then feed it back into how we do it, how we interact, how we select projects. Michael? Just to add to that: there have been multiple attempts over the years at coming up with a list of the most critical projects. Those lists are usually different. They're all reasonable; there's no standard to judge which one is really better than the other. The Securing Critical Projects Working Group does have a list. We like the list, and it will inform the projects we choose. But if we made a mistake and chose the 150th instead of the third most critical project, according to the criteria they used, we're still dealing with very important projects. So we don't want to be overly myopic and start at number one, not looking at number two until number one is done, or anything like that. We do want to optimize, as Michael said, for impact, for speed, for learning. We don't want to choose unimportant or unimpactful projects at the very tail end of the list. We're also looking at projects in the larger context of how they interact with the rest of the ecosystem. An individual library may be incredibly important.
An ecosystem may be de facto much more important than any individual project. So we're trying to look at it holistically and come up with good choices that we all feel good about. In terms of volume, we'll probably have some sort of beta-test or pilot phase for this kind of work, where we'll try to evaluate how successful we've been and whether there's a repeatable pattern to it. We've been talking about trying to reach out to five different projects in that period, lasting a couple of months. Our hope would be that over the course of the first year, we could reach more like 15 to 20 such engagements. A lot of this is going to be based partly on how we scale the staff we'd like to recruit for Alpha-Omega, and how we think about engaging volunteers in that process. This is hard work, and it's harder to ask folks to volunteer for this kind of thing, or even to vet that they have the right approach. But I think it's on us to think about how we scale this out to take advantage of volunteers who show up with real skills and are willing to work within a systematic process. So again, five or 15 is just a dent in the 100. Hopefully we find ways to scale up; certainly more resources would help with that as well. But I think it'd be a while before we can claim to cover all 100. My hope as well is that if we can talk about the kind of work we do, we can have a ripple-out impact on the rest of the 100 projects, too. An interesting question that came up: in this work with Log4j, if we had started this a year ago, would Log4j have been on that list? I think there are a couple of ways to approach this, but I'll leave it first to the two Michaels. No, probably not.
I wish it were. Looking at the description of Log4j and what we would have seen from a high level, it wouldn't have been up there. Now, obviously, it would be. And logging frameworks in general, I think people are starting to think about them a little differently in terms of their, let's say, magical functionality. Yeah, I agree with Michael. I think it's interesting to see how we learn about classes of problems, right? There's a team I work with here at Google who are focused on fixing all kinds of problems in the Linux kernel: classes of problems they try to eliminate from the kernel not just once, but in a durable way. And we can look now, with the 20/20 vision of hindsight, and say, okay, points of extensibility that can make essentially outbound calls to other network services, or whatever, are an interesting pattern of potential risk, right? Which is not really earth-shattering news; we just hadn't looked at it through the same lens as we do now. So we might entertain an effort to go look at various points of extensibility. That's exactly the kind of direction I would like us to start understanding: how can we reliably detect those things? Are there patterns of coding or analysis we can apply to get there? These are exactly the kinds of questions we would like Omega to be able to scale up and answer. And then, as we are evaluating any particular project to figure out what we can do to make it more secure, we might look for points of extensibility as one of our touch points. The thing I'll add to this: this is a question that's even a little more focused on the Securing Critical Projects Working Group, because one of their data sources is the Harvard census. And there's an updated version of the Harvard census coming soon that does rectify this.
But the previous Harvard census, and there have been one or two, I can't remember which, did not list Log4j. Part of that is that it's very dependent on the data sets they have access to. Data about which components are downloaded with what frequency is actually rather hard to get. Without that, you can do software composition analysis on what's embedded inside what, but you don't get a sense of impact, of whether this really matters, without also having usage data. So the Harvard project is getting better at that. And again, we're going to hang our hat on that list of 100 coming from Securing Critical Projects. As we whittle that list down, I think engaging not just with the individual projects we find interesting to talk to, but also with the foundations around them, is going to be important. So it's not just about talking to the Log4j maintainers, right? We might do that for something critical, or for a different JavaScript component here or there, but we'd also potentially talk with the organizations around them, the Apache Software Foundation, the OpenJS Foundation, and others, and ask: where do you think there are some criticalities? Here are some projects that perhaps don't show up in the stats, but that you know from your experience are perhaps a bit more in need of this kind of thing. So, and this is perhaps a provocative question: can an open source project request to be included in either Alpha or Omega? Do we anticipate having an application form for that kind of thing? Good question. Only one person named Michael can speak at the same time, right, Michael? Look, we would certainly entertain the conversation, right?
Early on, we'd love to hear from you and your interest at some level, but there is no "I'm on the list and therefore someone's doing things for me" kind of thing going on here. We're still figuring out a lot of the engagement model over time, but if you are interested in being part of this, I would just say again what we already said earlier: become part of the working groups and have a conversation there. We're really going to listen to the working groups about which projects are critical. If you think yours is critical and it's not on that list, that's the consensus place to have that conversation, but we're certainly open to the conversation. Good. The next question I want to take is from Emily Fox. Emily, I think we've enabled the ability for attendees to participate in the conversation. Can you unmute? Is it possible for you to unmute? Great. Do you want to ask your question about automated security analysis? So I'm a little curious, because automated security analysis is a large field and there are a lot of potential things that can go into it. Is there a phased implementation approach around the kinds of automated security analysis you intend to do with these projects, or is there one particular kind that you think will be the most bang for the buck? So right now we have a proof of concept, technically. It's a container with a bunch of tools installed in it. Those tools include CodeQL from GitHub, as well as probably 15 or so other static analysis tools; they're all in that same style of tool. We're not constraining ourselves to only static analysis. We're trying to think about what a fuzzing story would be around Omega, particularly a low-touch one; how to automate the fuzzing harness is a whole rabbit hole of challenges. But we want to explore that as well. So those are the kinds of tools we want to use.
But again, if there's a similar tool in a slightly different category, sure, we would consider it. What we're really looking to build, though, is something with a very, very low false positive rate. And as soon as I say "oh, we just threw a whole bunch of tools in a container," alarm bells should ring, and you should say, wait, you're going to get a whole bunch of noise out of it. That's one of the challenges we want to face head-on, particularly with the security engineering talent we're going to hire for this: to eliminate that noise, whether through constantly reviewing the rules, whittling them down and scoring them, or adding more context so the rules can be applied more accurately. And generally, with the goal that if the security analyst reviewing the output of these tools marks something as a false positive, we should consider that a bug in our toolchain, and it goes on a list to be fixed. We know we're never going to get completely clean results out of a toolchain, but the only way we're going to scale is by reducing noise. I'm going to drop the slides, though, and talk more directly. I think there are a couple of things to add to that as well. One is that this is, I think, a useful place for us to engage the community. Like everything we do, the software will be open source, and figuring out how to plug in additional scanning tools is an area we're happy to engage on and think about how to add to some common infrastructure. The same goes for thinking about what the scanning patterns for those tools are, and how we might work together on devising a rule set publicly and collaboratively.
But this point about trying to identify and reduce false positives is, I think, also a real opportunity for us to work with open source developers, whether through advanced machine learning tools that look at these things, or flags people can put in code to highlight that something really isn't a problem. If there are ways tooling can help fight the false positive problem, that would relieve the major burden on both the staff we hire and the maintainers we work with. That's a place where we could really use some help. The second thing is that one of the constraints we may end up bound by is the operational expense required to do scanning. Anybody who does CI for a modern open source project that pulls in a ton of dependencies and tries to run testing and security scans on each pull request or each commit knows what I'm talking about. These costs can quickly overwhelm even a modest-size project. One of the things we have to look at is how we cost-effectively cover the gamut of 10,000 projects, and where we might get additional resources. Cloud credits, for example: I know some folks offer this kind of thing both as a paid service and often free for open source projects, but corralling that into a uniform environment is going to be a challenge. Again, we'd love help answering that. Anything you wanted to add to that, Michael Winser, before we jump on? We've got a lot of questions. Let's keep going. Okay, great. Is Irving Wladawsky-Berger able to unmute himself? Would you like to ask your question? If not, I'm happy to ask it. I'm here. Hi, Irving. Good to see you again. Hi, very nice talking to you. So, as I'm hearing you talk, it would appear that the methodology, processes, and tools you're talking about apply to any complex, mission-critical software project, not just open source. Am I correct? 100%, Irving.
And a lot of the practices that are being developed and discussed and evolved in the working groups came from projects that happened in other organizations, or practices that are starting to emerge. And as we in Alpha Omega try to become even more applied and really bring what we have today and refine it and improve it, we want to make sure that people can benefit from that. And there's really nothing specific to open source about it. Obviously, access to the source code is a critical component for some of the analysis techniques we use. If the code is within your own organization's ecosystem and you have access to it, that's great. There are some very interesting conversations I've been having with other OpenSSF members about vendor relationships and how do I ensure that the software I'm receiving from a vendor has had similar analysis, using things like Scorecard as a way to imagine a vendor producing a Scorecard report of their repos, and some sort of standardized report about what kind of work they've done to analyze their own software. These are great concepts. We'd love to see that sort of stuff emerge and play out. And certainly the lessons we learn through Alpha Omega will be very much shared with the community and made more available. And one reason this is particularly important: let's say at MIT, which I'm affiliated with, people often say, well, where do the kids learn software engineering and things like that? It turns out that schools, especially schools like MIT, and maybe the same at Stanford and so on, don't teach it. It's almost like, well, that's plumbing or something like that. And part of the reason may be that there is no methodology that there is agreement on to teach at the college level. Do you see it that way? I think, first of all, I dropped out of school a very long time ago, so I can't speak to how it happened then or how it happens now.
But I think that I share your enthusiasm for ensuring that engineering practices start showing up in all forms of software education, whether it's a computer science degree, which is theoretical by design, or a more intentional computer engineering degree. And certainly these software practices need to become part of the norm. One of the phrases we use a lot internally here is: we need to make writing secure software easy. And it's not today. And that's, I think, one of the effective goals of Alpha Omega: to learn more about where it's hard and how to make it easier by doing it. So very much in the spirit of what you're trying to do. I appreciate it. Thanks. Irving, I'll also direct your attention to this: the Best Practices Working Group has developed a set of training materials for secure software development, three different courses that have been put up on edX, which we've had about 6,000 people register for so far. We have very ambitious goals to grow that to something that can reach 100,000, although frankly, it's the kind of thing that every software developer should go through, to read and understand just how their own code could be twisted against their intent, right? How to red team their own code, and how that actually matters in open source as well. So, yeah, separately from Alpha Omega, I think there's more investment we'll see in getting that out there and more widely promulgated. And I think there are some partnerships with schools that we'd love to explore too. Thanks. Yeah. Tom Jones asks, and I'll paraphrase quickly: will disclosure be different from CVEs? Maybe one of the two Michaels could talk about your view on how disclosure processes will be managed for the things that are discovered during Omega. So, the CVE is kind of the tail end of the disclosure process.
I don't see any reason why we would invent our own. And to be fair, there's a lot of conversation going on about the future of CVE and how to make that better, and we would kind of slipstream into that. I'd rather really smart people think about that stuff, and we should leverage it. The disclosure process, though, up until the CVE, is coordinated disclosure, working directly with the project as we described. So for that process we follow industry best practices on how to reach out. One thing, just to be super quick: there was another question later on about whether we are going to make vulnerabilities public 90 days after. We've talked about that, but we haven't really made a decision on precisely what that timeline and workflow would be. The principles that we have in mind are: we want to do right by the project in terms of giving them the support and the time that they need to fix things. But we also recognize that we are doing this on behalf of society, which is, by definition, at that point running a vulnerable version of the thing. So we have to balance that need, and we're trying to do that in a thoughtful way. We will be transparent when we know what we want to do. And certainly, we want to continue that conversation there. Plus one to what Michael said. That's great. Kayla Underkoffler, would you be willing to ask your question live? Yes, yes, I am. So my question is mostly focused, and I wrote Omega in the question, though now I'm thinking it could apply to the Alpha side as well. Is there going to be a community focus when it comes to the security researchers who are identifying vulnerabilities, or helping to identify vulnerabilities, in some of these projects? So yeah, I'm mainly wondering if there's a community emphasis, or if we're going to be working with a selected, consistent group of researchers.
So I think initially, the answer is more of a focus on paid staff hired for this project. There are a couple of reasons for that, but most of it is just to be pragmatic: to have someone fully dedicated for 40 hours a week doing the work and able to move with the project. I think we're all in agreement that, in theory, having a larger community working on this would be great. There are a lot of security researchers out there that I would love to direct at, not direct, but have thinking about, interesting problems. There's a whole other side of disclosure and how we vet people, because there are bad actors out there as well. More to come; we're thinking about it actively. We absolutely want to use Alpha Omega generally as a lever to get more out of the system than we can put in directly. But initially, I think the focus is on doing things directly, and having the community help in the ways Brian and Michael have described: the Slack channel, the openness of the working groups, the core tools that we use, and advancing our thinking on how to do this well. Okay. Let me move on to D.J. Ware. D.J., would you be willing to ask your question about Linux distributions on the call? Or I can read your question. I'm kind of calling people out of the blue; I might be surprising them. I apologize. Well, D.J. asks, what is our vision for how this interacts with, say, Linux distributions and their associated repositories? We might also cover some of the other sources of repositories out there: Maven, PyPI, those sorts of things. I don't think that we're limiting ourselves to any particular distribution or package ecosystem or anything else. We should be thoughtful in looking at places where we think we can have the most impact.
So for certain Linux distributions, I haven't thought deeply enough about it to have a really good canned answer here, but everything's on the table. We'll try to focus. I wasn't trying to get too much in the weeds. I was just trying to understand: there's so much overlap in different packages, in the different features that they have. How would I, as a developer, identify that this one has been vetted and this one hasn't? That's really more of the question. Okay. I think, D.J., that's a really good question. And it leaks into some of the other questions around SBOMs, and whether we are doing applications or packages or whatever. I think the granularity of effort in Alpha Omega will be at what you would think of as a package level, whether it's an operating system package or a language package or some sort of open source project of some kind. But when you start aggregating and assembling things into an application, then you have your SBOM. At the end of the day, the person who cares about the SBOM is the application operator who says: what do I have running live in production? What vulnerabilities have I pushed out there, or do I not want to push out there, and what has emerged since I pushed it out? And those are two different conversations. Our effort in Alpha Omega is not focused on that aggregated space of what happens when all this stuff comes together, or how to deal with that. It's a great problem, and there's a lot of work going into it in various working groups and organizations. But we're looking at essentially the raw ingredients of that cake, if you will, and trying to improve things on a piece-by-piece basis. Now, obviously, there's a dependency graph. Some Debian package is built up using some Python library or something like that. And so there's obviously a transitive set of problems that works through there.
But the decision about how a particular vulnerability affects the package is a package-level interpretation. So you may use Alpine, it may include tar, and tar may have a vulnerability, but the way you use Alpine and tar may not expose that vulnerability. So again, the granularity of the package becomes the point at which we can start to make a security decision and an evaluation of whether there is a risk or not. And I think that's how we're trying to focus it right now. So this is not a case of, I checked with Alpha Omega and they said I'm okay. That's not where we're going. This is: can we address the industry-wide debt around security posture across a whole bunch of packages? And that leads to, I think, Andrea's question as well. Can we call on Andrea to ask her? Sure. Andrea Brambia, you had a good question about critical projects. Do you want to ask it if you're able to unmute? Apologies again for springing this on you. Andrea asked: I may be naive, but the most critical projects are probably also the ones with the best security posture. Is the current security posture part of picking the projects for Alpha? And so, you're not naive, but I think that industry-wide everybody is starting to realize that we have a certain amount of security posture debt across how we do things, whether it's how we build them, whether we've gone off and looked for vulnerabilities on a regular basis, and looking at these new patterns that are emergent, like I mentioned about extensibility before. And then, what are we going to do about it? How are we going to lean into that? So our feeling is, although there are a lot of projects that have a tremendous number of eyes on them, a lot of eyes helping make them better, and a lot of investment in security, every one of those still has work to be done.
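The transitive-dependency point above can be sketched in a few lines; the graph and package names below are invented for illustration, and a real ecosystem graph would of course be far larger:

```python
# Sketch: which packages transitively depend on a vulnerable package?
# The dependency graph and package names are made up for illustration.

from collections import deque

deps = {  # package -> direct dependencies
    "my-app": ["web-framework", "cli-helper"],
    "web-framework": ["logging-lib"],
    "cli-helper": [],
    "logging-lib": [],
}

def transitively_affected(deps, vulnerable):
    """Return every package whose dependency closure contains `vulnerable`."""
    # Invert the graph: dependency -> packages that depend on it directly.
    reverse = {pkg: [] for pkg in deps}
    for pkg, direct in deps.items():
        for dep in direct:
            reverse.setdefault(dep, []).append(pkg)

    # Breadth-first walk upward from the vulnerable package.
    affected, frontier = set(), deque([vulnerable])
    while frontier:
        current = frontier.popleft()
        for dependent in reverse.get(current, []):
            if dependent not in affected:
                affected.add(dependent)
                frontier.append(dependent)
    return affected

print(transitively_affected(deps, "logging-lib"))
# "my-app" and "web-framework" warrant a package-level look; "cli-helper" does not.
```

This matches the point being made: the graph tells you which packages to examine, but whether each one is actually at risk remains a package-level judgment.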
And there are opportunities where we can make a difference, either by scaled approaches or by focused efforts. And again, we'll find out, right? But we certainly have had conversations with various projects who would unabashedly tell us that the way they build their software is not how we might have advised them to do so. And those are interesting conversations to have, and non-trivial journeys to change, right, to get from where you are to the right place. So not everything is a traditional vulnerability sitting inside a piece of code that hasn't been looked at. Some of them are, as you said, posture, in terms of code reviews, two-factor auth, et cetera. Those are some of the more actionable things. But there's also a fair amount of stuff where: has anybody looked and said, how can I go off and do this now that I've learned about this new pattern? And that's where I think we can lean in as well. It's a great question. Appreciate it. Let me turn next to John Mark Walker. John Mark, are you able to unmute, and do you want to ask? I am here. Yes, thank you. Can you hear me okay? Yeah, so I've noticed that even when fixes are available, a lot of times vendors, technology and software vendors, have very poor practices when it comes to updating their dependencies and getting the upgrades into their products in a timely fashion. I was just wondering if part of this will address education of vendors and trying to pull them down the righteous path. So vendors specifically, I would say probably not. Yeah. Although I would hope that Alpha Omega itself spurs future conversations that do have an impact there. But if you change out vendor for another open source project, then I would say it's part of our analysis. If we see, especially as part of Alpha, that a larger project has a package lock file that hasn't been updated in four years, that's interesting.
And that would be part of our analysis: is there a reason why that's so out of date? Testing challenges? There are all sorts of reasons why package lock files are good, but they're also bad. I think the world needs another year or two of thought going into what the right tradeoff there is. This is a really interesting problem, John Mark, and everybody has it in some way or another. You don't want to live at head, but you want to live kind of close to head. And what I'm starting to see, and there are a couple of things here, but I'll talk about one, is that the cost of adding an open source project to your organization's dependency graph, whether you have a security review, some sort of business policy, or in some cases no policy whatsoever, can be quite deceptively low. And the total cost of ownership matters here. The cost of keeping up to date with it is a hidden cost that people often don't pay. The metaphor I like to use here, my favorite one, and somebody on my team gave it to me, is that people tend to treat open source as free as in free beer, but it's really free as in free puppies. You're taking on a responsibility within your organization for maintaining that thing. It's not: this bunch of other people are making some great code for us here, we'll just use them, they're awesome, we're getting free work. Because then you don't pick up those updates, and now you're not living close to head, and now you are essentially an unofficial, poorly declared, half-baked, Greenspun's-tenth-rule-like fork of that original project. And you just haven't admitted it yet. Wow. Right. And so that's a big problem. I want to be clear: I don't think that is within our remit within Alpha Omega to solve. It's part of the OpenSSF's remit, and a lot of people are thinking about how to get there.
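The "lock file untouched for four years" signal mentioned above is trivial to sketch; the threshold and timestamps below are made-up assumptions, and a real analysis would look at version-control history rather than a single date:

```python
# Sketch: flag a dependency lock file that has not changed in years.
# The four-year threshold echoes the example in the discussion; a real
# check would inspect commit history rather than one timestamp.

from datetime import datetime, timedelta

def is_stale(last_modified, now=None, years=4):
    """True if the lock file's last change is older than `years` years."""
    now = now or datetime.utcnow()
    return (now - last_modified) > timedelta(days=365 * years)

now = datetime(2022, 2, 1)                       # hypothetical analysis date
print(is_stale(datetime(2017, 6, 1), now=now))   # → True: untouched ~5 years
print(is_stale(datetime(2021, 11, 1), now=now))  # → False: recently updated
```

As the speakers note, staleness alone isn't a verdict; it's a prompt to ask why the dependencies haven't moved.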
And one of the things I think is important there is: what is stopping people from continuously updating? And I think it's about information. Do you have enough information to make an informed decision about this update? Do you have enough trust in the community such that you can pick up changes more reliably, more frequently, and live closer to head, versus, oh man, I've got to do another three-month review of this three-line PR, right? That boundary is really interesting. We'd love to have continued conversations there, but I don't think that's part of the Alpha Omega scope. I want to follow up on one other thing from the previous question and answer too, which is that it's not our intent to be seen as an arbiter of whose security postures are strong or weak, or to be publishing stats on that, or that kind of thing. There are other efforts to do that. You're all familiar with the best practices badge and with Scorecards. You probably know that you can go to metrics.openssf.org and see, for I think a million different repositories, how well they do against the Scorecards and best practices badges. Some of the CNCF landscape pictures now offer the best practices badge as a variable to select on, as do some of the other landscape deployments out there. And we do see, in a different part of OpenSSF, expanding that type of understanding of risk and security posture across open source projects in other ways too. So look for more of that, and inevitably it will be a factor in some of what Alpha Omega works on, but that's certainly going to be a separate kind of initiative. There are a lot of good questions that Emily Fox asked. Can I bring her back to the mic to ask one of her choice? Sure. As part of your engagements with these projects, are you also looking into IDE extensions, such as those that maintainers are using? Not everybody does.
To assist them in ensuring that they're writing better, more secure code prior to its commit or merge into these projects. I know this is a newer conversation I've been having with other security professionals across the industry, that we often forget that IDEs are used, and that's really where the code actually starts to happen. I love that question, and I'm going to be careful. Yes, we absolutely need to do it, but we as in the royal we, the OpenSSF. The Security Tooling Working Group, I think, is the perfect place to have those conversations and advance those things. We would love to feed our learnings into that working group, as well as the broader community, because absolutely we need better IDE-based squiggly underlines, or whatever is needed, to write more secure code. Very strong supporter of that. I know we're getting close to time, so I'm going to try to be super quick here. There's an interesting question from Yohan Holmberg: thanks for the initiative; it is a daunting task with lots of real work ahead. At the same time, the security experts are not exactly idling at the moment. How do we attract volunteers to this project? I'll add on to that: we're going to have to hire some people for this as well. Any thoughts the two Michaels might want to share on that? Frankly, I think that was one of the reasons why we are going down the route of hiring people: it is very difficult finding available security talent, especially people you can count on for many hours. We also don't want to treat security researchers as a free resource. Fundamentally, you should get paid for your work, and that gets much more complicated in a quasi-volunteer sign-up scenario. Yes, we fully recognize that the market is difficult at the moment, or great, depending on which side of it you're on. We will try to tackle the remaining list offline and get back to the folks who have asked those questions.
The backlog is 30 questions, so give us a bit of time to get to them. Let's think about the ones that are probably worth trying to answer here in the short term. Michael, do you want to pick one? I think there were a couple of questions I saw about whether we are going to work with commercial vendors and what our relationship is there. Would we license commercial tools? I think everything is on the table. I would prefer not to, frankly, blow a large portion of our budget on licensing a commercial tool. I'd much prefer partnerships there, partnerships meaning free. But I think the most important bit, especially for Omega, is the quality of the tool. Having a free tool that generates lots of false positives could be a negative for us. We want to be careful in what we integrate, how we integrate it, and how we're able to tune that tool over time. While a large portion of the tool chain that we use is intended to be open source, CodeQL, for example, is not: the engine is not open source. To be clear, we are using CodeQL. I'm open to using others in the same kind of capacity, or even better, absorbing data sets of high-quality results. At the end of the day, what I really want to do is find more vulnerabilities and fix more vulnerabilities. Whatever helps us do that, I think we're open to at least a conversation about. Then I'm just going to grab one of Emily's questions because I think it's a good one: for the appetizer portion of the Alpha project, is the engagement expected to take one to two months, and do you have an intended cap on the engagement time frame? That's a great question, Emily, on a long list of great questions already. The answer is we don't know yet. We're excited by all the interest, and we have yet to hire our first employee. But I think these are things where we will probably start out with an initial impression of, let's spend two months on Project X, and then after two months we'll ask: was that enough?
Did we learn enough? And we'll figure out what the right engagement is. If you have experience and thoughts that tell us the average engagement on something like this takes n months, that would be awesome to know. It would help us plan our thoughts there as well. I don't pretend to know the answer to that question. To me, the reason I chose this question is that it embodies the spirit of all the things that we don't yet know about how to do an Alpha Omega-like effort. I think it's a great way to close the questions, and we will, as Brian promised, answer as many of them as possible offline. We're here learning, and these questions were as valuable a part of this conversation for us as it hopefully was for you to hear what we had to say, because we will incorporate this back into how we think about it and continue to build on it. We are getting close to time. It's tempting to ask one more question, which is from Ben Rockwood: to what degree will Alpha Omega forward other security standards such as SLSA provenance or software bills of materials? Which of you would want to take that? I'll take a stab. We're obviously very interested in what the working groups are doing in the OpenSSF. Those standards are about practices; they're about tooling. As I said at the beginning, in terms of our mission and vision, they are essentially starting to shape the future that we hope will influence the whole industry and help the whole industry make writing secure software easier. That's not what we're doing in Alpha Omega, but we will look at the signals that those standards represent. For example, if a project has a very high security posture and a community that has invested continuously in that, that's a signal that's interesting to us. At the end of the day, though, it doesn't change whether or not we can go and find bugs in there. Somebody else asked a question about the kernel. There's obviously a tremendous number of eyes on the kernel from a security point of view.
Do we need to make that one of our projects? Those are exactly the questions. We probably don't; we're not going to prioritize it, because there are a lot of eyes already on it. I think I've answered it. Thank you, Michael. We'd better wrap up here. Michael Scovetta, any last words? No, I really appreciate everybody's time in taking the hour to listen and engage with us. I'm hoping this is the start of a longer discussion and continued engagement. So please keep the questions coming, and hold us accountable to delivering on the vision that we've articulated. Michael Winser, anything? I think I ended nicely. Okay, great. I want to thank everyone for showing up as well. We dropped the links in the chat; the link to the presentation deck, as well as the recording, will be on the webinar page and everywhere else we can put it. If you want to continue the conversation, join us over on Slack, in the Alpha Omega channel of the OpenSSF Slack. And with that, thank you all for attending. Those were such great questions. Thanks.