Well, ladies and gentlemen, welcome to one of the last sessions of this amazing week. We hope you've enjoyed it so far. Some good talks, very nice. Hopefully you're in the right room: David and I are talking about implementing developer best practices and the OpenSSF Scorecard in your project, to help improve the security and prominence of your projects across the ecosystem. So if you're not in the right session, I won't feel bad if you run away. That's right. Well, I don't know if there's much else going on, but thank you for your patience. It's been a great week. Why don't you introduce yourself, David? All right. I'm David A. Wheeler. I work for the Linux Foundation, and my business card tells me I'm the Director of Open Source Supply Chain Security. That just means I run around to a lot of different open source software projects trying to help them be more secure. And I'm CRob. I do stuff on the internet. Collectively, we're here to talk about some OpenSSF projects. The OpenSSF is a project underneath the Linux Foundation, and it's dedicated to improving the security of the open source ecosystem for consumers, maintainers, and providers. It's a global coalition: we have titans of industry, like the corporation I work for, and many household brand names that you may know and love; we have security researchers, academics, maintainers, and people from all walks of life who want to help improve open source security through a lot of different interesting projects. And we're going to talk about two of them today. This, if you care, is how the foundation is laid out. We have several working groups. David and I both work with the Best Practices Working Group, which is focused on developers. But there are many others: there's a Vulnerability Disclosures Working Group, and there's Identifying Security Threats.
There's a Security Tooling Working Group, a Supply Chain Integrity Working Group, Securing Critical Projects, Securing Software Repositories, and an End Users Working Group. So if you're interested in participating, there are a lot of different projects, SIGs, and working groups where you could come listen and learn, or potentially donate your expertise and join those conversations. And I got the awesome little purple box; that's us. Specifically, the Best Practices Working Group has three main goals. We want to identify good practices and good tools to help developers learn and do secure coding the right way. We provide resources to learn: we have a class that teaches developers secure coding principles, fundamentals, and techniques, and we also have a hands-on lab tool called the Security Knowledge Framework. So if you learn about SQL injection in David's awesome security fundamentals class, you can go to SKF and work through some labs to practice what you learned. Pretty neat. And then we also have ways you can adopt these things. We have best practices guides, concise guides that tell you how to create or evaluate software, if you want to incorporate things into your project or your work. And we had a project, about two years ago now, the Great MFA Distribution, where we worked to give away multi-factor authentication tokens to developers. And patches are always welcome: we would love everyone to come listen, learn, and participate with us. You want to talk about your awesome little badge project? Absolutely. All right, hopefully up is up. So I'm going to talk a little bit about the OpenSSF Best Practices Badge project.
And the idea here is that if you are running an open source software project, you can work on achieving a badge, and that shows, hey, you are performing best practices that hopefully will lead to more secure, better-quality software. It's based on the practices of various well-run projects. Basically, in order to create these criteria, we went out and looked at a large number of open source software projects, asking: what do you do? We were looking for the things that were common. And if your open source software project meets certain criteria, you get a badge. There are actually three badge levels: passing, silver, and gold. The text of those is available in many, many languages, not just English; we've also got Chinese, French, German, Japanese, and Russian, with more to come. I know we've got some people working on other languages too. We've got over 5,600 participating projects, and over a thousand are passing. And the badge is a great way to illustrate that you understand security concepts and you're doing security the right way, if you're looking to increase your prominence within the community or persuade downstream consumers that you are doing things the right way. Yep. And this chart shows that, over time, more and more projects are participating. And at the various percentage levels, again over time, more and more people are achieving passing badges. I'm not going to go through all the criteria in the Best Practices Badge. Instead, I just want to point out how we figured out the criteria. They have to be relevant, attainable, clear, and reflect a consensus of developers and users. We do not require any particular technology; we are absolutely doing our best to be technology neutral. We do not require proprietary software.
There's no problem if your open source software project plugs into something proprietary, but we don't require it. It doesn't cost anything. We don't take over your project. And we're not going to require everyone to do everything immediately; I'm going to come back to that last point in particular. And like any set of criteria, words matter. If anyone here has read any IETF RFCs, a lot of these terms will look really familiar. The MUSTs and MUST NOTs: you must do, or must not do, those things. SHOULD and SHOULD NOT: normally you should be doing those things. We also have another category called SUGGESTED. These are a good idea, but there are also a lot of cases where, in fact, there are reasons you shouldn't. So we suggest it, we want you to think about it, and it's okay if you don't do those; but we do find it helpful to identify them. I mentioned earlier passing, silver, and gold; let me talk more about those. Passing is the basics, the fundamentals, focused on the best practices that apply to everybody. And interestingly enough, even though each of those criteria is generally widely applied by open source software projects, it's still an achievement to get a passing badge. For the simple reason that it turns out, when you take a number of criteria, each of which is widely applied and everyone agrees on, and then say you have to do them all, actual projects find: oh, we missed one. And that's okay; that means you're learning something. Oh yeah, we should do that too. You do that, congratulations, you have a badge, and that means projects which have achieved a passing badge are showing a real accomplishment. Silver is a more stringent set. We have rigged it specifically to be doable by single-person projects, because a lot of open source software projects are single-person.
So we want to make sure it's possible to go for more stringent measures. And finally, gold is where we add things that require multiple people. Basically, things like: if someone dies or becomes incapacitated, the project needs to keep going. From a user point of view, those are very desirable, and it's hard to do as a single-person project. But nevertheless, gold is a very, very good thing to achieve. David, I have a question. If I go through with my project and earn a badge, where can I show it off? An excellent question. Well, the best answer is: make sure you put it on your project page. You can include it right in your README, and it'll automatically show up on your GitHub or GitLab page. A lot of projects have their own websites, and they include the badge there as well. So we very much include a mechanism so you can quickly put that in there and show off what you've achieved. All right, let's see here. So here are some examples. Basically, we went out to various open source projects: hey, what are you doing that really seems to help? Once we found those, we put them into the criteria over time. I'm going to skip ahead a little bit, but we'll come through some of these. Here are a couple of silver criteria. The key thing to note is, the first thing you have to do is meet all the criteria for the previous level. So in order to earn silver, first you need to earn the passing badge; similarly for gold, you need to achieve silver. And then some other things I mentioned earlier: in the case of gold, you have to have at least two unassociated contributors, because again, we want to make sure that if any one company or individual says, I don't want to contribute anymore, the project can keep going.
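To make the earlier "put it in your README" point concrete: the badge site generates the exact snippet for your project, so treat the project ID below as a placeholder, not a real value. Embedding the badge looks roughly like this:

```markdown
<!-- NNNN is your project's ID, assigned when you register on the badge site -->
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/NNNN/badge)](https://www.bestpractices.dev/projects/NNNN)
```

The image is served live from the badge site, so it should reflect your current badge level without further edits to the README.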
And you'll notice, as you move up the progression ladder and the requirements become stricter, things shift from SHOULDs to MUSTs. That's exactly right. OK, now I want to talk about some specifics. But before I do, I want to mention some testimonials from projects that have already achieved these. OWASP ZAP is a web application scanner, and Simon Bennetts had some very kind words about this. Basically: hey, the process of going for a badge really helped us improve the quality of the project. It helped them focus on the things that most needed improvement. In particular, they had long agreed within the project: oh man, we really need to implement automated testing. Turns out, in fact, there were problems that would have been found had they been running automated testing. But then they turned around and said, wait a minute, we want a badge. Oh, we have to do that. And once they finally did what they knew they needed to do (it was just hard to get over that initial cost), they were really glad they did. And I think that's an experience a lot of other projects have had as well. CommonMark and JSON for Modern C++ are other projects which, in the process of achieving a badge, noticed: oh, we missed some things, we'll go fix them. And they're glad they did. So if you're developing open source software, we absolutely encourage you to get your scorecards and work on a badge. Let me start by talking about what it takes to get a badge. First of all, go to the Best Practices Badge site. All you need to do to start is click on "get your badge," enter the URL of the repo, and get started on your project. The system automatically analyzes the repo and tries to fill in a lot of things where it can. Automated tools being what they are, there are things they don't notice; there are always imperfections.
Nevertheless, we do try to fill in the information for you. After that, it's basically a form: you fill in and change whatever you need. I would focus on the passing badge level first. I know some people say, oh, there's a gold badge, I'm going to get that right away. No. Start at square one, okay? Square three is wonderful, but start with the first step, work on that, and then come back. Don't worry about silver and gold until after you've got the passing badge. If you have any questions, there's a little Details button. If you don't remember anything else but you're interested in the badge, remember: click on the Details button whenever you've got a question. It will give you, well, details. What exactly do we mean by that? How might you achieve it? And in general, expect to do things incrementally. A number of projects, particularly if they're well run, find they're already doing almost all of them. A few actually found, my gosh, I'm doing them all. Congratulations, good for you. So, David, if I want my project to earn a badge, approximately how long does this process take? It takes about 20 minutes. Wow. Okay, you have to read it and figure it out. Now, that assumes you know your project; if it's a project you're deeply involved in, these are generally questions you know the answers to. It takes much longer if there's something you're not doing and you now want to do it. If you don't have any automated tests, it's going to take you more than 20 minutes to add those. But that's a different question, okay? Again, expect to do this incrementally. Feel free to go through and fill it out, but if you find out, wow, there are seven things I'm not doing: okay, now you know. You don't need to do all seven instantaneously.
You'll work on the things you think are most important, and repeat. Let me give a couple of tips. I love pro tips. You love pro tips. Okay, one of them is: hey, the website has to succinctly describe what the software does. You'd be shocked at how many projects have basically no README, or a README that says, you know, we are the project. If I don't know what your software does, I'm unlikely to determine whether it's going to meet my needs, or, if I might want to change it, what exactly it's supposed to do. Try to avoid jargon where you can; sometimes jargon really is the best way to explain it, but limit it to jargon you would expect a reader looking for your kind of project to know. Next: you've got to publish the process for reporting vulnerabilities. In the last analysis I did, this was, I believe, the number one cause of failure to earn a badge. You've got a project, it's doing lots of good stuff, but if there's a problem, there's no information on how to report a vulnerability. A lot of projects don't want vulnerabilities reported publicly, and that's fine. But you need to tell people how to report a vulnerability if one is found. David, does it matter what my security process is? No, as long as you tell us what it is. That sounds easy. Yeah; it's easy in the sense of telling people. Of course, as soon as you do that, you'll suddenly realize, oh, I'll have to figure out what to do once a report comes in. But really, for outsiders, step one is being able to report it. So decide how you're going to do that. A lot of folks do it via an email address. That is perfectly fine, okay? Just figure out the email address to use and put it in a SECURITY.md file.
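To make that concrete, a minimal SECURITY.md can be very short. The address and response window below are placeholders for illustration, not recommendations:

```markdown
# Security Policy

Please do NOT report security vulnerabilities through public issues.

Instead, email security@example.org (a placeholder; use your project's
real address) with a description of the problem, the affected versions,
and steps to reproduce. We aim to acknowledge reports within a few days.

Fixed vulnerabilities are credited in the release notes unless you ask
to remain anonymous.
```

The point of the criterion is simply that this file exists and is findable; the process it describes is up to you.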
If you want to use GitHub's private vulnerability reporting mechanism, which is a new mechanism GitHub has put out: great, very helpful technique. Enable that and tell people to use it; just please tell people how. Finally, at least one of the project's primary developers must know about common kinds of errors and how to counter them. The sad reality is that lots of software today has lots of vulnerabilities, and for the most part it's because developers don't know what the common problems are and how to prevent them. If you don't know, there's a free course from OpenSSF; go take that. You don't have to take that particular course, okay? We just want you to know the most common problems and how to prevent them, so that as you're writing the code, you prevent them. All right, second page here, let's see. The project must use at least one automated test suite. Interestingly enough, we don't really care what the test suite is; we care that you picked one. Pick how you're going to test automatically and start using it, because once you've made that decision, once you've started, you are now on the path. More generally, the project needs to have a policy of adding new tests as you add new functionality. You'll notice we don't, at the passing level, have a criterion like 80% coverage or anything like that. This is quite intentional. The goal at the passing level is that you're on the right trajectory: you have tests, you're adding them as you go, and that will carry you in the right direction. And finally, you've got to have at least one static analysis tool being applied. Again, we don't mandate any particular tool. In fact, there are lots of different tools, good for different circumstances. The goal for the passing badge is that you've picked at least one. Frankly, more is good, but at least one. And I want to emphasize a real difference in how you apply these tools.
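The "pick one and start" advice can look as small as this. The sketch below assumes a GitHub-hosted Python project using pytest; every one of those choices is illustrative, not a requirement of the badge:

```yaml
# .github/workflows/ci.yml -- minimal automated-test sketch.
# The criterion doesn't care which suite or CI system you pick,
# only that one exists, runs automatically, and grows with the code.
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest        # whatever automated test suite you chose
```

Once something like this is in place, the "add tests with new functionality" policy is just a matter of project habit.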
If you're starting a new project, with no lines of code and everything new: absolutely, when you turn on those tools, turn them on maximally. Make them complain a lot, okay? Because you'll quickly learn, oh, this is a dangerous construct, use this other thing instead, and that sort of thing. And that's wonderful. If it's an existing project, what's often called a brownfield project, don't do that. The reason is, if you start these tools with all the options turned on, you'll find that for every line of code you have 20 reports, and suddenly you have 50,000, 100,000, 200,000 reports. There's no way you can deal with that, okay? Instead, what you really need to do is narrow it down to what's most important, get a few warnings, fix those, and then slowly burn the rest down, because it's just not practical to handle everything at once. Identify the highest priority, work that, repeat. All right, future directions. Eventually we plan to move the website to bestpractices.dev and move the source code repository under OpenSSF; this was originally started under a different foundation. We intend to keep improving the automation, and we're always looking for more and better criteria, more human languages, and various code cleanups. And with that, I shall hand it to you to talk about Scorecard. So, David's badge project has existed for many years. How many, at least five? Yeah, five or more years. And it's a great resource. But as things have progressed, and with the speed of things these days, we have an alternative, actually a complementary, piece of software that adds more automation to this kind of auditing and review process: the OpenSSF Scorecard. And the Best Practices Badge is actually part of the criteria that Scorecard uses in its evaluation. But Scorecard is a lot more extensive and looks for more automated evidence of things.
So Scorecard runs through its battery of checks and automatically scores projects based on these security heuristics. Each attribute is scored between zero and 10. You can evaluate your own project, or, if you're looking to consume software, you can use it to remotely view certain attributes of other projects. Currently, this only works within the GitHub source code management system; if anyone is on an alternate source code system and interested in using this, patches are welcome to get the code working there. It gives you an aggregate score, plus scores in the different areas, and it provides you ideas for improvement. And the group behind it is, I wouldn't say a huge team, but we've got at least half a dozen, eight people, and a lot of major corporations contributing. So there's a lot of new features and functionality being added. We just recently added a REST API for getting the results; it used to be kind of a cron job that dumped out a text file, but now we've got an awesome API. Yay, modern development. Sometimes you'll see it referred to as "Scorecards," but the proper name is Scorecard. If you see "Scorecards," though, don't get too angry. So here are some of the checks it goes through. Broadly, it does a security risk assessment. It's looking at binary artifacts, branch protection, are you doing code review, how many contributors do you have, do you have any dangerous workflows? Then it looks at a section called maintenance: do you have some kind of dependency updating, and how are you updating your dependencies? What's your license? How often are commits landing; is the project considered maintained, with commits within the last 90 days or so? And do you have a security policy? Which, again, David already mentioned: if you're doing the badge, you're already qualifying for that part of the scorecard.
There's some continuous verification, some testing checks: it goes through your CI tests, fuzzing, do you have static analysis? Then it looks at a build risk assessment. Do you pin your dependencies? What are you doing around packaging? Are you signing your releases? How are your token permissions set? Do you have webhooks? It automatically goes out, probes your repository, and collects this information. Then it does a vulnerability scan: what known vulnerabilities do you have? It uses the Open Source Vulnerability (OSV) schema, another project within the foundation, kind of a companion to the CVE database. So, are there any known vulns in your code? It'll report back if it finds anything. And then it looks at the Best Practices Badge, and you get an awesome best practices score. David's badge project was one of the first projects to enter the Best Practices Working Group; Scorecard came on during our first year of operation within the foundation, and it has received a lot of attention, a lot of new contributors, but also a lot of external views. Sonatype cited it as part of their State of the Software Supply Chain report, and you can see they found some correlations. Based on the checks Scorecard uses, they see that if you don't have a code review or peer review process, you are more likely to have vulnerabilities in your code. If you check in binaries, that's an attack path; it reduces transparency and auditability, so you're going to get dinged on your score there, and again, Sonatype found that this leads to more problems down the road. Pinning dependencies, enabling branch protection: these are all generally simple things to add if you invest a little time, and they pay a lot of dividends down the road in avoided vulnerabilities. The tool will automatically run on your project code as a GitHub Action, if you're on GitHub.
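Concretely, the GitHub Action setup is roughly a workflow like this. This is a sketch based on the published ossf/scorecard-action; check the project's documentation for the currently recommended, SHA-pinned versions rather than copying it verbatim:

```yaml
# .github/workflows/scorecard.yml -- sketch of the one-time setup.
name: Scorecard analysis
on:
  schedule:
    - cron: "30 1 * * 6"   # weekly rescan
  push:
    branches: [main]
permissions: read-all
jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # upload results to code scanning
      id-token: write          # needed to publish results
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true
```

Once this is committed, every push to the default branch (and the weekly cron) reruns the checks and refreshes your published score.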
And remember, if you're on some other Git forge and you want to make it work there, patches are welcome. It requires a one-time setup, all documented on the scorecards.dev website. It's fairly simple; it takes 15 minutes or so, if even that, to get set up. You can also run it manually from the command line, if you so desire. And we have, again, that awesome API, where we're going out and scanning 1.2 million open source projects. I can't remember what the cadence is. Is that weekly? It's weekly. Weekly, yeah. So if you are a consumer of open source, this is another resource: if there are libraries or dependencies you're thinking about ingesting, you might want to flip over and look at their Scorecard score, or run Scorecard on them yourself, before you add them to your dependencies. Let's see. We've got a BigQuery public data set, so if you want to do some big-data analysis, that's kind of cool for the statisticians; I think that's partly how Sonatype did their review. And we look at deps.dev, which is included in your Scorecard results, starting from the package ID. So again, if you are a maintainer or a project community member, please consider integrating Scorecard into your project. It's a one-time setup; it reruns on a schedule, you modify your README to add in some little hooks, and then you check your results. You sit back and let all the awesome security roll in, all the accolades about how awesomely secure your project is, or about some opportunities to change. And I'll give you a real-world example. I work for an organization called Intel, and right now we have between 600 and 800 public-facing GitHub repositories. My company sees a lot of value in this software. So as we ingest upstream packages, into either our internal systems or what we ship to our customers, we're checking the Scorecard score.
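That kind of dependency check can be scripted against the REST API for the weekly scans. The sketch below builds the result URL and picks the low-scoring checks out of a response; the JSON is a trimmed, hypothetical sample of the shape the API returns, and field names may differ across API versions:

```python
import json

# Hypothetical, trimmed sample of a Scorecard REST API response.
SAMPLE = """
{
  "repo": {"name": "github.com/ossf/scorecard"},
  "score": 7.8,
  "checks": [
    {"name": "Code-Review", "score": 10, "reason": "all changesets reviewed"},
    {"name": "Binary-Artifacts", "score": 10, "reason": "no binaries found"},
    {"name": "Pinned-Dependencies", "score": 4, "reason": "dependencies not pinned"}
  ]
}
"""

def result_url(repo):
    """Build the REST endpoint for a repo's latest weekly scan result."""
    return "https://api.securityscorecards.dev/projects/" + repo

def low_scoring_checks(result, threshold=5.0):
    """Names of checks scoring below threshold: the improvement to-do list."""
    return [c["name"] for c in result["checks"] if c["score"] < threshold]

result = json.loads(SAMPLE)
print(result_url(result["repo"]["name"]))
print("aggregate score:", result["score"], "/ 10")
print("needs attention:", low_scoring_checks(result))
```

In practice you would fetch the URL with any HTTP client instead of using the embedded sample; the same triage logic applies either way.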
But we also see value in the software itself, so we are integrating all of our external projects into the Scorecard ecosystem, so you can check how good a job we're doing. And our product managers look at the Scorecard results, and wherever they can, they're working quickly to get off the naughty list, wherever we aren't following security best practice. So again, it's an idea we strongly believe in. We're participating, we're trying to be part of the solution, and we're providing feedback and PRs back to the project on things we're finding. In the future, as I mentioned, it's a very, very busy project: you're going to see actual traction on adding GitLab support, which is awesome. Oh, our friends at Lockheed are doing that. Cool. So a community member and foundation member is already doing the work; if you're using GitLab, maybe you might want to contribute, help them out, and make it go faster. We're looking to improve automation, so we'll have better CI pipeline support: right now we support GitHub Actions, and we need to add CircleCI, Jenkins, and other CI tools. We're looking at better tool detection; we're not able to recognize some static and dynamic analysis tools today, and again, if you come in and start using the project, you can help us provide that feedback and get those tools identified much more quickly. And we're looking to add some new metrics, like mean time to update (MTTU), new criteria so that, if you're a consumer, you'll be able to see how quickly the software you're using provides patches and updates. And then there's some general cleanup work we're looking at doing. You want to give them the news? Yeah, why not? It's always nice, when you're giving a presentation, to share the very latest news. Fresh. Fresh, that's fresh.
So, in fact, on Wednesday I was handed a copy of some master's-thesis work analyzing badges in general for open source software projects; that includes the Best Practices Badge and Scorecard. It tries to answer, from an academic view, the question: do these things work? And based on this survey, at least, the basic answer was yes. Overall, there is consensus among the people trying to earn badges that, yeah, these things do provide an indicator of health, either positive or negative. Generally they focus on the positive, of course. And the folks who create these badges, hey, their goal is to help improve the health of open source projects. And you know what? The open source project maintainers agree: that's a goal, that's also their goal, and the two are very much consistent with each other. And the badging consumers (that's the study's terminology for the people working on earning these badges) basically expressed real interest in helping people: they want their projects to be successful. They view badges as a way to help them be successful, because they want to show that they actually care about quality results, and to increase the likelihood that people will want to participate. People who want to use the software, or possibly contribute, want to know that they're working with a project that is more likely to be sustainable and produce good results, and badges really do help. Indeed. All right, so let's see here. All right, so basically, we really encourage you: if you're considering using open source software as a dependency, look for the badge, look for the Scorecard. There's actually a more general document from the OpenSSF called the Concise Guide for Evaluating Open Source Software.
When I'm looking at open source software, how do I get an indication of whether it's good or bad, of things I should be concerned about, or things I should be really happy to see? It says a lot of things, including things like: avoid typosquatting, because downloading the wrong software means it doesn't really matter how good the right software is. And the Best Practices Badge and Scorecard are both mentioned in there, in both cases as positive signals. If the badge is there, hey, that's a great sign; Scorecard, hey, look at the results. Like all tools, Scorecard has false positives and false negatives, so you want to double-check its outputs. But that said, it provides a lot of quick information that's really, really helpful. And in the future, the OpenSSF is actually working on something called the dashboard. The goal of the dashboard is to integrate Scorecard results, the Best Practices Badge, and all sorts of other data we can get, to form a bigger picture. But that's not here yet; that's work under way. So, shall I just keep going? Yeah, and this is, again, a quick summary. We think these are two great projects, and we encourage all developers of open source to leverage them, and all consumers of open source to consider them as a check on quality and security posture. Yep. And I like that one. So I think it's useful to contrast them, because some people say: hey, why are there two of these? Why not just merge them into one? Well, each has its pros and cons. The big pro of Scorecard is that projects don't need to participate, which means if you're just evaluating some open source software, you can run Scorecard on it yourself. And we already run it for you, on at least a weekly basis, for over a million projects. That gives you quick, automatic results on most open source software projects. The badge, in contrast, requires participation from the project.
It's not a lot of time, 20 minutes, but that is an impediment. On the other hand, Scorecard has some challenges. The biggest challenge for Scorecard (and by the way, I also work with the Scorecard team) is that, like all tools, it has false positives and false negatives. This is not unique to Scorecard, okay, but it is a challenge for any automated tool. There are a lot of tools and CI systems that aren't handled. We know that. We are trying; please file issues when you find something. But Scorecard is trying to do something very hard. It turns out there are a million different ways (I'm being a little facetious, but a very large number of ways) of doing things, and detecting all of them in an automated fashion is actually quite challenging. And the presence of a file doesn't mean the thing is actually being done. Right now, as we mentioned, GitHub is the forge that's supported today. We are working on GitLab, and we do want to add more. We're aware that many folks use Bitbucket, and many folks self-host. We get that; we're working on extending it. In contrast, the Best Practices Badge, because it uses human analysis, is not as easily fooled by the tooling issues, it can work with any forge, and we can have criteria that we have no idea how to automate. The problem, of course, is that now you need much more participation. And so the combination, frankly, is really, really helpful. And indeed, one of the Scorecard criteria is: hey, how about the badge? So they're actually not independent. The phrase I'm claiming is that they work together like chocolate and peanut butter. Yum. So, I think you have one more? I think that is our final slide. Yeah, okay. Well, that plus a cool picture. So we laid down a lot of information for you. What questions or comments do you have about the working group or either of the projects? Or questions. That's right.
In both cases, actually, what we encourage people to do is, if you've got a best practices badge, you stick that in your README. If you've got a scorecard result, you stick that in your README, and in both of them, you'll see a short little summary, and then you click on it and, yes, then you get the details. That's exactly right. I'm sorry? Right, right. And I haven't gone into these things, but, I mean, making either of them fake-proof is hard in the broad scheme, okay? Yes, exactly. Okay, I mean, scorecard is automated, and so it will be fooled only in the sense that you put something in there that fools its automation. Same for the best practices badge. It has some automation to detect certain things and will reject some false answers. But that said, I think in both cases, frankly, if you see something where it got it wrong, and you can understand it's an honest error, please let us know. File an issue and we'll try to fix it. If someone's being malicious and trying to actually maliciously fool it, please let us know. It's a little more complicated for scorecard, but for best practices, we just kick them out. And as a follow-up, does either project list who is qualified, or who is currently under the scan? For scorecards, it's over a million, so that's a pretty long list. For the best practices badge, you can just see the list. Yeah, I mean, for both of them, you can actually get a list of all of the projects. So that might help against fooling it. No, well, no. If it did that on each load, that would be a problem, right? For both, let me start with scorecard. We encourage, and there was a little hint to that earlier, we encourage people who are using scorecard to embed the scorecard workflow within their normal workflow. And that way, every time you make a commit into the main branch, it'll rerun and give the updates.
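For reference, those README embeds look roughly like this. The project ID and repository path are placeholders, and the exact URL patterns can change, so copy the snippet that bestpractices.dev or the Scorecard documentation generates for your project rather than this sketch:

```markdown
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/<PROJECT_ID>/badge)](https://www.bestpractices.dev/projects/<PROJECT_ID>)
[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/<ORG>/<REPO>/badge)](https://securityscorecards.dev/viewer/?uri=github.com/<ORG>/<REPO>)
```

Each badge renders the short summary, and clicking through takes readers to the detailed results, which is exactly the flow described above.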
If you don't do that, we'll stare at you oddly, but I will note that we'll rerun it once a week anyway. So if you don't put it in your workflow, it'll still get updated on a weekly basis. The best practices badge is much more of a form fill, so it's the current data that we have. How long does it take to, oh, to run scorecard? Oh, it's not very long at all. It's about 10 seconds or so. I mean, there's some variance, because if you give it a token, it can do a couple more things. But it doesn't take long. Not at all, yes. I think in general, really in general, open source software is very much focused on transparency. And it's okay to say, hey, I'm doing a lot of good things, but this is where I'm falling down. Fine, we all now know that, and I find that often maintainers go, oh, that's actually a good idea, we'll get on that, once they're made aware of it. And I'll say for both efforts, before we go to your question, both projects actively work with people who are interested in participating in the process. And we will provide you resources, like, you should look into this or do that. And the scorecards team especially, they have a Slack channel, and they are very responsive to pinging there and then filing issues and PRs and whatnot. Yeah. Question here. They don't expire in one sense, but they can expire in a different sense. And that's, of course, if you're failing to meet something that you used to meet and it can be automatically detected, then it might expire. The other expiration possibility is when we add new criteria. We haven't added new criteria for a while, primarily because, rather than new criteria, we've actually focused much more on just trying to get people to get passing badges. We want people to get silver. We want people to get gold.
We definitely have talked about adding new criteria, but right now the theory has been, oh my gosh, if you can't get the current passing badge with the existing criteria, you're the higher-risk project, and so we want to work on that. So it is possible; right now our focus has been more on trying to get people up to that bar. Was the concern about not updating the badge based on a new revision or new practices, or the fact that maybe somebody's badge never goes stale? We do record when the various criteria were met. To be fair, the badge criteria are very much focused on process things like, do you have a test framework? Are you working to improve tests over time? As opposed to, at least at the passing level, things like, have you met a certain coverage criterion? And so as long as those processes continue, even if the project is no longer as active, as long as they keep that up, it's still okay. But the issue is, when all of a sudden that is no longer true, then yes, a badge can go away. Yes, when criteria are no longer met. Right, okay, that's exactly right. So let me split that. Great question. Let me answer it in parts, and hopefully my answer will make sense. We release the code of scorecard, and you can download and run it yourself. If you download it and run it yourself, it will acquire data when you say, hey, go evaluate this project. It will download the data for that project from, in this case right now, currently GitHub, eventually others too, to your local system, do the analysis, and give you the report. That report is local to you. It goes nowhere else, okay? In fact, you can bring that data into your internal organization and do whatever else you want with it. It doesn't go anywhere necessarily. Now, that same program is run by another tool that runs weekly, okay? And that analysis is put into a shared dataset, but we're running the same tool you would run, okay? It's just that we run it separately and put the data there.
In addition, we have a specific GitHub Action that, if a project chooses to add it, which we encourage, then when they run it, there are some install steps you have to do, but basically, okay, for your one project, we'll let you report your results from your workflow, and that means we're always up to date to the very latest commit in terms of what they're doing. So I hope that answers it, but it's all running the same code as far as the actual scorecard result goes. Well, actually, no, that's just an input to Alpha-Omega. Alpha-Omega then uses that among other data, but that's a separate project. Right, right. And we have that completely separate. Completely separate is probably not quite right, but we certainly wanted to hear from Alpha-Omega. If they cared, we care, okay? Basically, we tried to make sure that projects that somebody cared about were in that list. How's that? Is that fair, for a long list of projects someone cares about? Sure. This gentleman was first, and then over here. Okay, yeah, great. Good feedback. Okay, good feedback, yeah. Correct. When the project elects to participate, or was selected, we worked it out, so there's a security token that we need to be able to run some of those checks. And you as an external party don't have access to that private data. Yeah, it's token availability, actually. I mean, the code's all there. It's exactly the same code. But I thought it was a lot more than five. I'll have to, maybe we should talk later, because the list isn't long, but there are a few checks we can't do, and it's not because no one can do them, it's because you need to give a token with access to certain data in order to do some of the checks. So it's a data availability thing. We always run every check in scorecard that we can, if we can get the data. And there's the if. Okay. Okay, yeah, there are two different levels. There's, do you have a token at all?
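The GitHub Action setup being described looks roughly like the workflow below. This is a sketch based on the ossf/scorecard-action project, not the canonical example: the version tags are illustrative, the cron schedule is arbitrary, and the upstream README recommends pinning actions by commit hash, so copy its current example for real use.

```yaml
# .github/workflows/scorecard.yml -- illustrative sketch only
name: Scorecard analysis
on:
  push:
    branches: [ main ]        # rerun on every commit to the default branch
  schedule:
    - cron: '30 2 * * 1'      # plus a periodic run, schedule is arbitrary here

permissions: read-all

jobs:
  analysis:
    runs-on: ubuntu-latest
    permissions:
      security-events: write  # upload SARIF results to code scanning
      id-token: write         # needed to publish results
    steps:
      - uses: actions/checkout@v4
        with:
          persist-credentials: false
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
          publish_results: true   # makes the result visible publicly, e.g. for the badge
```

With `publish_results` enabled, the project's published score tracks the latest commit rather than waiting for the weekly bulk scan.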
And then, do you have a token for certain restricted data? But we can talk more about that later. And that's something you might want to try rerunning now. Now, yeah. And if you have issues, pop onto their Slack channel and ask, and they'll be glad to help try to figure that out with you. Yeah, that sounds like something odd. Now, there's a separate issue, which is sometimes there's rate limiting. If you don't give a token at all and you've been doing a lot with GitHub, it will be unhappy with you. That's just a general rule about trying to get access from GitHub. So if you're trying to get a lot of data from them, from the same IP address, and there's no token at all, we very much get limited by that. But again, you give a token and the problems disappear. Well, that problem disappears, anyway. Question in the back. That was actually similar to the question earlier. So we have a long list. There's a weekly scan list. And the short answer is, if anybody said, hey, would you please scan this, then as long as we can, we just pop it onto the list. So if Alpha-Omega says, hey, would you please include this, we say, great, and in it goes. Okay, we're not picky. The data is publicly available, so we don't have any particular reason to be picky. So if somebody cares, we'll add it. Now, in addition, any project can say, I want a scorecard. And then you add it to the workflow and off you go. You don't need to wait for a mother-may-I. But the challenge we have is that chicken-and-egg problem of how you get things going when there's nothing there. And so for scorecard, the idea is, well, why don't we just scan the ones that somebody cares about, using the data we can get that's publicly available, and then anyone else can add themselves. And that's the scenario my company, Intel, is going through. We want to be included in that data set, so we're working on integrating that so we can both see the results internally and fix things based on them. Yep.
And if you think there's an important project that's not on the list, please talk to us. Because, I mean, we're running a million; a couple more, what's the difference? So really, that would be fine. But as I said earlier, we encourage projects to include it in their workflows, because then you get, I mean, literally down-to-the-current-commit information. Okay. The question was about, so if they're running it locally in their own CI pipeline, where are the scans run? Okay. So like a SAST or DAST. Right, right. Okay. Well, currently, the only mechanism we have for running it within a CI/CD pipeline is GitHub Actions running on GitHub, you know, the main GitHub site. So the answer is rather constrained, because that's the only option there. You can obviously download the code and run it locally. You can run it on your own CI/CD system. But at that point, well, however you configured it would be the answer. You download the code, you read it, and whatever you chose to do is what you chose to do. And with it being open source, you have the ability to look at it and understand it and modify it to meet your particular needs. So if you wanted to point it at an internal scanning service, it's theoretically possible. Right. Now I guess I want to reply with a plea, and you may have heard this plea from us already: if you see something that you would like it to do, patches are very, very welcome, because we are well aware that there are all sorts of other ways to use these. You know, we've already mentioned other forges, other tools, other CI/CD systems. And it's very, very challenging because there's so much variety. So we've tried to build up several cases, and we would be very interested in working with you and anybody else. But thank you, everybody. We are at our time. If you have additional questions, we're both reachable via everything on that slide.
The OpenSSF Slack, any of our mailing lists, GitHub repos, please join in, participate, and give us feedback. Thank you.