Hi, everybody. I'm Christy Wilson. I'm a software engineer at Google, where I lead the Tekton project. Tekton, which we'll be talking about a little bit, is a continuous delivery platform built on top of Kubernetes. And somehow, I've been working on it for three years already, which is hard to believe. But even more exciting than that is that I'm actually doing this talk in person. It's been at least a year and a half since the last one. So thanks for encouraging me to do this, Kim. You're welcome. Yeah, it's been a while since we did a talk together. And so, hey, everyone. I am Kim Lewandowski. I'm currently at Google, but Friday is actually my last day. So I'm very excited to do this talk with Christy. And she promised me that we're going to do talks even after I leave Google, so yay. I've worked in the supply chain space for the last three or four years, started the Tekton project with Christy, and I'm currently leading our Google open-source security team. So it's been exciting. And here we go. So Christy, let's just kick things off with a demo. I hear you've been working on a new website. Yeah, let me show you. I've been working on a cool cat picture website, as one does. And while she's pulling this up, Christy made amazing goose diagrams of the both of us that you'll see on the slide. Along with being an amazing engineer, she's a very good artist. I didn't know you were going to flatter me. And she even wore the shirt for the occasion. The OpenSSF goose is our goose mascot in the Open Source Security Foundation. All right. So enough geese and back to cats. I've been working on this cat picture website. If you're familiar with Tekton, you might know that we sometimes have cat mascots for our releases. So I wanted to make a website to show them off. But I feel like it's a little bit plain, like it could use a bit more color. Dogs. Or dogs. No dogs, no dogs.
It would be cool if I could make this text rainbow colored somehow, but I don't really know a lot of HTML, so I'm not sure how I'd do that. But maybe I know of a library. Maybe I can just use that library. Let's see. So I happen to know about this rainbow library that will actually make my text rainbow colored. So I'm just going to add that dependency into my project. As one does. As one does. I just call this function. It's really easy. And let's see what happens. Let's go back to the cat website. OK, look at that. I've got rainbow text. It was that easy. Dependencies are really great. Let's go over here. Wait. Wait a second. What is this? Hold on. Every time I refresh this page, I'm getting this output about a malicious binary. What is that? And the only thing I really changed was pulling in a dependency. So let's take a look at this dependency I pulled in. So it's the rainbow HTML project. Look at the source code. There's already something kind of suspicious there. Let's take a look at the actual code that I'm calling. So I'm calling a function called text. It's doing some stuff with HTML. There's some colors. That all seems reasonable. But it's calling this doSetup function. And that is actually writing an executable file and executing it. And the binary is embedded inside of this Go package. So I don't think we want to use this dependency after all. Let's go back to our slides. All right, so I'm really glad that we caught that. But what if the binary hadn't just told us that it was there? Most malicious dependencies aren't going to announce themselves. That was very helpful. So I mean, that's a good point about malicious dependencies. And of course, that's what we're here to talk about today. So here's quickly what we want to cover. We'll talk to you first about what we consider a dependency, and then about why you should care about risks like this.
And then we'll talk really briefly, at a high level, about how Google handles third party code. And then we'll see our cool trick. And then we'll follow up with other helpful tools in this space. So let's talk a little bit about malicious dependencies. First, it's probably good to make sure that we're talking about the same thing when we say dependency. What we're talking about is the code that your code depends on, and the code that that code depends on, and so on down the chain. We're talking about all of those libraries that we import so that we don't have to reinvent the wheel every time we want to do something. And of course, not everyone is nice. We have attackers that are trying to trick developers into downloading and using these malicious dependencies. And we're at an open source conference, and open source is where a lot of that risk lies. So that's what we're talking about today. And this is a really high level diagram of a typical software supply chain, where you can see all the weaknesses along a supply chain and all the different threat vectors. Getting malicious dependencies into a production system is one of the most common software supply chain attacks today. And we're seeing this, unfortunately, way more frequently than anyone wants to. And here's a list of some of the common attack types, the terminology, so if you see these terms in the media, these are some of the more common ones that we're seeing. The first one is a dependency confusion attack. This is where an attacker tricks a build system into pulling the wrong dependency into your software supply chain. And then typosquatting is when an attacker just changes a few letters or symbols and tries to confuse you into downloading the wrong package. So for image-editor, they might name theirs image-better-editor or something and try to trick you into downloading this malicious package.
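The typosquatting trick described above can be partially caught with a simple heuristic: flag a candidate package name that is very close to, but not the same as, a well-known name. This is an illustrative sketch only, not how any particular registry or tool actually does it:

```go
package main

import "fmt"

// editDistance returns the Levenshtein distance between two strings,
// computed with the classic dynamic-programming recurrence, keeping
// only the previous row.
func editDistance(a, b string) int {
	prev := make([]int, len(b)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(a); i++ {
		cur := make([]int, len(b)+1)
		cur[0] = i
		for j := 1; j <= len(b); j++ {
			cost := 1
			if a[i-1] == b[j-1] {
				cost = 0
			}
			// Minimum of deletion, insertion, and substitution.
			cur[j] = prev[j] + 1
			if cur[j-1]+1 < cur[j] {
				cur[j] = cur[j-1] + 1
			}
			if prev[j-1]+cost < cur[j] {
				cur[j] = prev[j-1] + cost
			}
		}
		prev = cur
	}
	return prev[len(b)]
}

// looksLikeTyposquat flags a candidate name that is within two edits
// of a well-known name without being identical to it. The threshold
// of 2 is an arbitrary choice for illustration.
func looksLikeTyposquat(candidate, known string) bool {
	d := editDistance(candidate, known)
	return d > 0 && d <= 2
}

func main() {
	fmt.Println(looksLikeTyposquat("image-editer", "image-editor")) // true
	fmt.Println(looksLikeTyposquat("image-editor", "image-editor")) // false
}
```

A real defense would compare against a list of popular package names and also consider things like swapped hyphens and underscores.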
And then project takeover, and we've seen a few examples of this, is where an attacker will try to come in and gain ownership and commit rights on a popular project, maybe by convincing the owner of that project to hand it over, and then they slip in malicious code. And all of a sudden, you have a malicious dependency. So this is all pretty scary. And the more you think about it, it just starts to get worse. I've always thought about dependencies as a way to make things easier, but there's a dark side. Of course. It's so easy for me to pull in a dependency. And mostly it makes sense for me to do it. I don't have to implement and maintain the code myself. I can focus on whatever it is I'm actually trying to do. And if there are bug fixes or other improvements in the library, I'm going to benefit from those. But it's not always safe. A simple command like this actually has the potential to accidentally give somebody root access to one of my production systems. And so it's getting scarier. I mean, that's the fact of the matter here. This is from a recent report, and you may have seen the stat in a few other decks at this conference: there was a 650% increase in 2021 alone in these software supply chain attacks. And the majority of these are of the typosquatting variety, the dependency confusion, and the account takeovers. And again, this is not all just theoretical. People aren't just making this stuff up. Here's a timeline, and I ran out of space, so I don't have all of the recent attacks from the last couple of years on here. For example, RubyGems found 760 typosquatted packages in their registry and had to pull them down. And then The Great Suspender attack was one of these account takeover attacks, where someone came in and said, hey, I want to help out with your project.
I've got some changes I want to make, earned the owner's trust, took the project over, and then started putting malicious code into a very popular Chrome extension. And the extension was taken down. And the Microsoft Halo attack was a dependency confusion attack that we saw recently. So this is the problem that we're looking at. It makes so much sense to leverage dependencies, but it's really easy to mess up. It's hard to do the right thing, and we don't even really have a lot of guidance about what the right thing is to do. Another interesting fact from the report that Kim was mentioning is that of all these dependencies we're using, only about 25% of them are even being updated regularly to pull in bug fixes. And as software gets more complex, which it definitely is, you might find yourself pulling hundreds or in some cases even thousands of dependencies into your project. And like Kim was showing, it's not just theoretical, because we are actually seeing more and more attacks that take advantage of this. OK, so now that we've sufficiently scared you, let's take a really high-level look at how Google handles third-party code. And you can see a lot of this yourself; I added the link at the top of the slide. This is all on our public website about what you need to do if you want to bring in third-party code as a Google engineer. So the first thing is you have to be an assigned owner. If Christy wanted to ingest her rainbow HTML library, she would have to take ownership of that dependency. She would be responsible for keeping it updated, monitoring for vulnerabilities, patching it, license compliance, and things like that. And again, this is a really high-level diagram. I think we have some Googlers here who can probably speak to more of the details of how Google works if we need it. But I think people know that Google is pretty famous for having a monorepo. So even third-party dependencies come into this monorepo.
And we have a trusted build system with very high security requirements around it. We're capturing metadata along the entire supply chain for these artifacts, and we can go back and verify all the properties of them. When there's a change that's pushed out, we know where the code is running. We can see what's happening. There's nothing too magic here. I think any organization could cobble something like this together with enough time; whether that's the best thing for a company to do, who knows. But it doesn't work for everyone. You're limited. We can only run one version of something. So if I don't like her rainbow HTML version, I'm kind of stuck with it, because she's the owner of that third-party package. And then there are a few other things that I mentioned. And so of course, we always ask ourselves, can we do even better as a large organization? There's a lot of risk in depending on third-party software. We're seeing the attacks rise. We're seeing examples of this. So what can we do as an organization, and as a community, to make this whole situation better? There are a lot of different things to address, and we can't get to all of them at once. So is there anything that we can do right now? Let's get back to my cat picture website. What if that malicious dependency hadn't been as helpful as it was, and I didn't realize it was there? Let's see what would happen. OK, so let's say I didn't see any of that. If you remember, the only changes I've made are pulling in this dependency and then calling the function. I'm going to commit this, and I'm going to push my branch. So over in GitHub, let's open a pull request with those changes. Here is my branch. Here are the changes that we were just looking at. Open pull request. And right away, you can see that we actually have a required GitHub action running that's going to be scoring my dependencies, and we're going to talk a bit about what that means.
So what that's doing is it's using a project that we're going to be looking at in more detail called OpenSSF Scorecard. This is a project that's all about looking at a project for some of the known problems and known weaknesses it can have, and then scoring the project. So let's see what happens in our GitHub action. This GitHub action is actually creating a Tekton pipeline and running it. I'm going to get into a bit more detail about that in a second. Let's let it run. It's done a git clone. Now it's running the scorecard. And it's failing. OK, so ultimately, this pipeline is failing. Let's take a look at why. So we have a bunch of output from Scorecard about all kinds of different checks that it's doing and the failures it's finding. And actually, most projects do fail most of these checks, unfortunately. But some are more important than others. And so ultimately, we had a required check, which was the check to make sure there are no binary artifacts in the repo. And this check actually did notice that there's a binary artifact in the repo that I was adding as a dependency, so it failed. And so even if I wasn't paying attention as a reviewer, and I didn't notice that there was a dependency being pulled in here, fortunately this check failed and it saved me. So let's look at what actually happened here in a bit more detail. OK, so thanks to the automation, we were able to catch the bad dependency. Let's break down what we just saw. I used a GitHub Actions workflow that connected to my Tekton cluster in GKE, which executed a Tekton pipeline. And that finally ran a tool for evaluating Go dependencies with Scorecard. So at this point, if you're familiar with GitHub Actions and Tekton, you might be wondering, why am I using both of them? The easiest answer is because I'm very biased and I work on Tekton, so I wanted to use Tekton. It's got a cool logo. But secondly, let me talk a bit about what Tekton is.
So Tekton is a continuous delivery system built on top of Kubernetes. One of the goals is to provide a specification for CD workloads that's portable across CD systems. So actually mixing something like GitHub Actions with Tekton is very much in line with those goals. And another thing that's cool about this kind of portability is that before I hooked any of this up to GitHub Actions, I could actually run the tasks and pipelines in my own cluster, and when I ran into things, I could debug them there. So we just saw GitHub Actions triggering Tekton, which used Scorecard to evaluate my project's dependencies. That slide's me. All right, so a little bit more on the Scorecard project. This is a newer project. We created it in the OpenSSF about a year ago. And the goals were pretty simple. We wanted a way to automatically assess the security posture of these open source projects so we could make better decisions about the risk that we were willing to take, and give developers more insight into these open source projects. And then the sub-goal is really to inspire projects to earn a better score. A lot of these open source projects have a single maintainer, who isn't a security expert, or who just doesn't realize that some of these things are best practices. So hopefully, we can encourage you to get a better score on your project. Oh, and the last thing: I just got an update today that we have public data for over 150,000 GitHub repos. This is a public data set, and it's all stored in BigQuery. I'm not sure if you query BigQuery, but anyway, it's there. And we're pulling the data into a few different places. The project originally started out as a pass-fail on these security checks, but now we've moved to a model where it's a 0 through 10 score, and we give a confidence level for how confident we are that the project meets each specific criterion. I don't think rainbow HTML made it into that data set yet. No worries.
So here's a list of the heuristics we have today. And I think every single project that we're talking about today, except for one, is actually open source. So if there's a heuristic you'd like to see, definitely open a pull request. A few of these: branch protection, where we want to see if branch protection is turned on for the repo, and whether maintainers can push to the main branch without going through a pull request process. We want to see how many contributors are part of the project, and how many organizations are involved. And then we're doing other checks around things like fuzzing. Fuzzing looks for vulnerabilities, so is the project integrated with fuzzing? Does it have a CII badge? This one's near and dear to David's heart; it's a list of best practices that we encourage projects to follow. And then this data is a little stale, but this is an aggregate from when we had scorecard data for around 50,000 repos, just looking at the aggregate metrics. And unfortunately, there's a lot more red than green. So I think the goal here is to make this graph more green than red. So let's talk a little bit again about malicious dependencies and how Scorecard fits in with them. Sometimes a library is intentionally malicious, like the one that we were looking at. Other times it's accidental. But lastly, even if there's nothing particularly malicious in a library, if it's not following best practices, then it's open to some of these attacks. For example, if you're not requiring any kind of code review for patches, then there's a higher chance that somebody might be able to sneak something malicious into your library. So what Scorecard can do for you is identify some of these overt problems, like unpatched vulnerabilities, but it can also give you a signal about how vulnerable a project is to some of these known attacks.
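The "required check" idea from the demo can be sketched as a small policy gate over Scorecard's JSON output. The struct below matches the field names of recent Scorecard releases but may differ across versions, so treat it as an assumption to verify against your Scorecard build; the sample input is fabricated:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Result is a minimal view of Scorecard's JSON output. Field names
// here are an assumption to check against your Scorecard version.
type Result struct {
	Checks []struct {
		Name  string `json:"name"`
		Score int    `json:"score"`
	} `json:"checks"`
}

// failedRequired returns the names of required checks that score below
// their threshold, which is the same idea as the required check that
// blocked the pull request in the demo.
func failedRequired(raw []byte, required map[string]int) ([]string, error) {
	var r Result
	if err := json.Unmarshal(raw, &r); err != nil {
		return nil, err
	}
	var failed []string
	for _, c := range r.Checks {
		if threshold, ok := required[c.Name]; ok && c.Score < threshold {
			failed = append(failed, c.Name)
		}
	}
	return failed, nil
}

func main() {
	// Hypothetical output for a repo that ships a binary artifact.
	raw := []byte(`{"checks":[{"name":"Binary-Artifacts","score":0},{"name":"Code-Review","score":8}]}`)
	failed, err := failedRequired(raw, map[string]int{"Binary-Artifacts": 10})
	if err != nil {
		panic(err)
	}
	fmt.Println(failed) // [Binary-Artifacts]
}
```

A gate like this is what lets some checks be advisory while others, like Binary-Artifacts, fail the pipeline outright.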
So in the demo that you just saw, I used a proof of concept tool I made for evaluating Go dependencies. And this is the kind of thing that you could potentially do for your own project and for whatever languages you're working with. What the tool does is: first it grabs all the dependencies of the project, including the dependencies of the dependencies. Then it resolves vanity URLs, which is a Go specific thing. And for all the GitHub-based projects, it runs Scorecard against them. It summarizes all of the failures that it finds, and it fails completely if it finds any known vulnerabilities or any binary artifacts. So you may be wondering if this is really worth it and what it gets you. It's not foolproof, but I think if you compare it to the alternative, which is relying on people to catch things, then you can see the value. It can also do a lot to guide reviewers. For example, take the check that will tell you if a project hasn't been updated for several months. As a reviewer, you could see that failing and decide whether or not that's important to you. Maybe you're okay with that. Maybe you want something that's being updated more regularly. But without something like Scorecard guiding you, as a reviewer you might not even know what to look for. And I think you'll find this especially useful if you haven't been paying very much attention to your dependencies so far and they've just been building up. If you run Scorecard against them, I guarantee you'll find some interesting things. So, yeah, Scorecard is one new tool in this space. Unfortunately, it doesn't solve all of our software supply chain security issues. But we do have a lot of other tools that we wanted to talk about today that can also help on this journey. The first one is another project in the OpenSSF called Allstar. Allstar is a bit newer than Scorecard; I think we launched it a couple of months ago.
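The first step described above, grabbing all the dependencies including transitive ones, can be sketched for Go by parsing the output of `go list -m all`, which prints one `module version` pair per line with the main module first. The module names below are made up, and the vanity-URL resolution step the tool also performs is left out:

```go
package main

import (
	"fmt"
	"strings"
)

// githubDeps extracts the GitHub-hosted modules from `go list -m all`
// output, returning just the owner/repo part that a Scorecard run
// would want. The first line (the main module itself) is skipped.
func githubDeps(goListOutput string) []string {
	var repos []string
	lines := strings.Split(strings.TrimSpace(goListOutput), "\n")
	for _, line := range lines[1:] {
		fields := strings.Fields(line)
		if len(fields) == 0 {
			continue
		}
		parts := strings.Split(fields[0], "/")
		if len(parts) >= 3 && parts[0] == "github.com" {
			// Keep only github.com/owner/repo.
			repos = append(repos, strings.Join(parts[:3], "/"))
		}
	}
	return repos
}

func main() {
	// Hypothetical `go list -m all` output.
	out := `example.com/mysite
github.com/fake-org/rainbow-html v0.1.0
golang.org/x/net v0.17.0
github.com/spf13/cobra v1.8.0`
	for _, r := range githubDeps(out) {
		fmt.Println(r) // the two github.com modules
	}
}
```

In a real tool you would produce the input with something like `exec.Command("go", "list", "-m", "all")` inside the module directory, and handle the `golang.org/x/...` style vanity paths separately.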
And it's meant to be a complementary app to the Scorecard project. What it does is provide real-time enforcement of some of the Scorecard checks. And it allows you to define what user-defined actions you want it to take if it sees that a repo is failing one of these. So, for example, if you're a maintainer on a project and you install the Allstar app, it actually runs as a GitHub App inside your repo. And then you can say, if it fails the branch protection check, it can either try to automatically turn branch protection back on, it can create an issue, or I think maybe it emails you. I don't know, maybe that's a feature that we wanted to add. One of the things that Allstar could help with is the Codecov attack, where dependencies weren't pinned; that's something it could pick up on and alert you about. And then another project that I wanted to mention, this is a Google project that was also launched, I think, in June, and one of the interesting things is that it's pulling in the Scorecard data. This is a site called deps.dev. You can go to it, you can type in a project, and one of the things it shows you is a graph of all the dependencies of a project along with the Scorecard results for them. Right here, we're looking at the Kubernetes project, and the graph is a little bit daunting, but it gives you the full breadth of how scary this world that we're operating in is: many, many dependencies that these big projects rely on. Hey, Kim, I came across this trailer. I was wondering if maybe you could explain it to me. Oh, SLSA. What is that? I mean, I wish I knew. So another big project that we're excited about within the OpenSSF is something that we call SLSA. That's how we pronounce it: salsa. It stands for Supply-chain Levels for Software Artifacts, and it's a framework for software supply chain integrity.
So where Scorecard is about the artifact itself, and there might be some overlap in the future, SLSA is all about the framework around how that source code gets committed and the path it takes all the way to being a usable artifact. As one does before they're about to leave Google, you put a film together, a feature film, trying to explain the SLSA framework to get people more interested in these types of topics, and we tried to make it fun. I think the full video's coming out at KubeCon. I think you have to do your own segment before we can release it. So this is something we're really excited about in the OpenSSF, and we really want to ensure that software artifacts meet end-to-end integrity standards. It was inspired by what we do internally at Google. Our different build systems and our source repository have to meet specific requirements before we trust them, and everyone has their own definition of trust. Let's see. A very big topic in the repo today is whether SLSA is personified as chips, or chips and salsa, or a dancer, and luckily maybe we have that partly solved. I don't know, the video kind of talks to it more as chips, but a Googler, a colleague of ours, drew the amazing goose logo, as one does, with a salsa dress on. And so here's a table of the requirements at each of the SLSA levels. This might be a little outdated, but I think it's fairly accurate now. We're working on this with the community. We have bi-weekly community meetings and lots of folks that are involved, actually helping us make sure we have the right requirements that fit different use cases. I think the thing to note here is that, while the higher levels of SLSA today look very hard to achieve for these large projects, any of the requirements that a project ends up meeting just means it's that much more secure.
So it's not like all is lost if you only make it to, you know, SLSA one plus a few more requirements or something. I think these things are all very important for the trustworthiness of artifacts. And this is just a high level picture again of how the SLSA framework can fit in with dependencies. As Christy was saying, you've got dependencies of dependencies, turtles all the way down. And I think it's an open discussion, or maybe an open issue, in the SLSA framework today whether we should consider adding another, higher level where all of the direct dependencies need to meet a specific SLSA level too. So the scope of these things can really explode. But again, any improvement for security is a good improvement. Another project that's very dear to my heart is Tekton Chains. So I talked a little bit earlier about what Tekton is. It's built by extending Kubernetes; Tekton runs pipelines and tasks inside a Kubernetes cluster. And one of the benefits of using Kubernetes is that it's so extensible. Tekton Chains is an optional controller that you can add to an existing Tekton cluster, and it observes the execution of tasks and pipelines. If it sees an image being built, it can generate provenance and sign those images for you. It's early days, but the idea is that eventually it'll be able to recognize and do this for all kinds of artifacts, which means you'd be able to write your tasks and pipelines without having to add any kind of explicit supply chain security support into them, and Tekton Chains would just add it for you. Well, cool. We made it almost to the end. So as I said, and as probably most of you know, software supply chain security is this huge problem, this huge thing that we all have to deal with today. And there's really no single solution here.
I did attempt to break down the problem in a more tractable way so we can start framing it. The first part is awareness. And I think this is where the Scorecard project helps: know the hygiene of the stuff that you're shipping into your production systems. And that ties into automation. I think the key to a lot of these things is that they need to be really easy for developers to implement, use, or understand. And that's where we see the Allstar project and the Tekton Chains project doing a lot of that automation for you. And then I think the last bit is really a cultural shift. And that's where we think new standards and processes come into play, the SLSA framework being an example of that. And then that's a picture of my kids. They like to pick up trash, and I was trying to figure out how to sneak a cute picture of them in here. So one of my takeaways, and this is a Boy Scouts thing, is: always leave the software cleaner than you found it. If you want to help your fellow developers out, run Scorecard against the projects that you work on and see where your gaps are. If we all improve just one or two of the things that we find in our own projects, then open source software as a whole will be that much less vulnerable. Yay. And then, yeah, our last slide. Here's a link to a lot of the projects that we talked about today. All of these live in foundations, except for deps.dev, though anyone can access deps.dev. Tekton is part of the Continuous Delivery Foundation, with community meetings and all that kind of fun stuff. And then there are a couple more that I added at the bottom here: check out the Package Feeds and Package Analysis projects in the OpenSSF, which are interesting as well. They're trying to detect things like typosquatting in real time, as packages are published to package managers, rather than the repo-level checks that Scorecard does.
And then OSV is another interesting project; it actually ties into Scorecard a bit now, looking to see if there are any unpatched vulnerabilities. So that's another good one to check out. And I think we made it. Thank you all for coming at the end of the day. We definitely have time for questions if anyone has any questions. Yes, David. It's not really a question, more of an observation. Detecting CI and static analysis tools turns out to be hard. Scorecard detects some of them well, but it misses some, so this is actually an area where everybody here can help Scorecard get better at some of those heuristics, because some of them are challenging. Totally. And you know what we're missing. Yeah, I think with all these things there's a bit of gaming that can happen. We're just trying to do better, I guess, but yes, there's always room for improvement. Yeah, I think the CI test one was one I noticed in particular. I think it knows how to look at GitHub Actions, but it doesn't know about anything else. So it says a lot of projects don't have CI tests, but it just doesn't know how to evaluate, say, Azure Pipelines. Yeah, yeah. Right, and the CI best practices check itself relies on static analysis, so it misses things. Yeah. I think it's not an impossible problem. Yeah, totally. And I mean, that was one of the things with Scorecard when we first built the project: making sure that we could automate these things, because that was key to making them scalable. Like you saw, 150K repos now. If you really had to dig in and manually check all these things, we might not be able to scale as much. So I think there is a balance between those two, but hopefully we can keep doing better as the project progresses. Yeah. In the back. So the question was whether the scorecard was the trick. So yeah, that was it.
We actually have one cool trick. That was the cool trick. I mean, the idea is, how can we make these things more usable in an easy, automated fashion? We've seen some larger projects use Scorecard for their dependency policy. I think the Envoy project, and I'm not sure where they currently are, but early on they were looking at it as a way to give maintainers some guidance: if you're going to introduce a new dependency to the Envoy project, here's some information about the riskiness of these projects. So yeah, sorry if we disappointed you that that was the cool trick. That's a great question. Can you run this with just GitHub Actions, or do you need Tekton? You can definitely run it with just GitHub Actions. The Scorecard project publishes a Docker image, so you can run that Docker image as part of GitHub Actions or as part of any CI workflow that you want. There's also a Tekton task that will run it. The unknown bit, and the answer to the cool trick bit as well, is the part where something actually looks at a project, grabs all the dependencies, and runs Scorecard against them. And the missing thing there is really just a supported tool for each language to do that. But again, nothing requires Tekton. You could do it with GitHub Actions, you could do it with Jenkins, CircleCI, anything you want. Oh, that's a great question. So she's asking, on deps.dev, if the most recent version of a project isn't in there, how can we get that included? Unfortunately, I don't know the answer. I can try to dig in and get back to you; if you want to come give me your information after the talk, I can try to follow up.
So the observation is that it's easy to find some malicious stuff, which is not that different from what we saw with Android, where people try to do malicious things in binaries and you do analysis to try to find the rootkits in there. And whatever tests you write, once the person doing this knows your test, they can write another thing to get around it. Right. So I have a feeling that the first part is very helpful, but the second part is really just catching people who are behind or not trying very hard. A real attacker is not going to get caught by that. Is there anything to say about that? I think that's a good point. My observation from the outside of security is that this is sort of the nature of security to some extent: you're kind of always playing catch up. There's always one group that's finding new things to check, and there's another group that's watching what they're doing and then trying to dance around it and find new ways. So with something like the binary artifact check, full disclosure, if you look at it, it's just looking for file extensions at the moment. So tricking it is extremely easy right now. You can imagine the next iteration, where maybe we're looking at what's actually in the file, and then an attacker is going to find a way around that. But you're narrowing what you can get away with, so that it's harder and harder and harder. So I think we'll never have a perfect solution, but we'll just be making things more challenging. More challenging. Have those checks caught anything yet? Do you mean the binary artifacts one in particular? We haven't found any yet, but I wouldn't be surprised if it can catch something that we're not aware of. The pinned dependency check, for example, is the kind of check that would have helped with the Codecov attack.
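The "full disclosure" point about the binary artifact check being extension-based can be illustrated with a toy version. The extension list here is a small made-up subset, not Scorecard's actual list, and the file names are hypothetical:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// binaryExts is a small illustrative subset of suspicious extensions;
// a real check would match a much longer list.
var binaryExts = map[string]bool{
	".exe": true, ".dll": true, ".so": true,
	".o": true, ".a": true, ".class": true, ".jar": true,
}

// flagBinaryArtifacts mimics an extension-only heuristic: it flags
// files by name alone, so an attacker who simply drops the extension
// slips right past it, which is exactly the weakness discussed above.
func flagBinaryArtifacts(paths []string) []string {
	var flagged []string
	for _, p := range paths {
		if binaryExts[strings.ToLower(filepath.Ext(p))] {
			flagged = append(flagged, p)
		}
	}
	return flagged
}

func main() {
	files := []string{"main.go", "pkg/helper.so", "payload"}
	// "payload" is a binary with no extension and evades the check.
	fmt.Println(flagBinaryArtifacts(files)) // [pkg/helper.so]
}
```

A stronger iteration would sniff file contents, for example checking for magic numbers like the ELF header, and an attacker would then look for a way around that too.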
And that's one of the things we're trying to do with Scorecards and also the SLSA framework: really map these things to real-world attacks, real threats that we've seen, and not just make this stuff up. Because it's fun to build and engineer a lot of these things together, but we want to show that they actually would have helped in those specific instances. Yes, please. Oh, thank you, yeah. Let me repeat that for everyone: they were saying that a bunch of the people working on Scorecards have been accumulating these attacks and applying them back to how they're building out the project.

In SLSA level four you had a circle on reproducible builds; what's the story? Hey, David Wheeler. Do you want me to repeat the question? So the question was about SLSA level four, the highest level of the SLSA framework right now: there's a little circle on the reproducible-builds requirement that's neither a clear check box nor a clear "not required." Do you want to elaborate on that, or do you want me to?

All right, yeah. Originally it was "here's what we think makes sense," and now we have a much broader community discussing things. Google is very confident in their build environment, and they work incredibly hard at it. Other people are less confident, either in their own build environments or in build environments in general. If you are not absolutely certain that your build environment is crystal clear and wonderful, then the strongest countermeasure is reproducible builds. If you totally trust your build environment, what are you worried about? So this is one of those areas where there's ongoing discussion about what should be a requirement and what should not. If you look at the docs on the site, it still says it's a draft and in progress. So again, now is the right time to get involved if you have different experiences and different backgrounds.
It would be a good time to speak up, say "here's what makes sense," and have that discussion.

Yeah, and David probably agrees that reproducible builds are a great end state, where you get bit-for-bit matching of software artifacts no matter what build system you're building on, but actually achieving that everywhere is no easy task. So I think the indication of the circle here, if I remember correctly, is kind of "best effort": we would love to see this at this level. Maybe SLSA 5 makes it a check box, you know, TBD, if that's actually what gets discussed. And that's one of the goals and motivations for the SLSA framework too: to come to a common language with the broader community and organizations. So we can say, hey, David, I trust your build system, and if you're building an artifact on your SLSA level four compliant build system, I trust that and I don't need to rebuild it myself. Or maybe I do rebuild it myself and we can get the reproducible, bit-for-bit match. Or for these really critical projects in the community and in organizations: David's building it, I'm building it, and you can compare at the end, so there's more trustworthiness in the process used to create it.

Yeah. Could this also include timestamps and such? Well, I'd separate two different things. With reproducible builds, the idea is to rebuild things and get bit-for-bit identical answers. If you allow random dates, then clearly you're not going to get bit-for-bit answers. There are actually... well, no, no. I'm sorry? If that part of the build is not included... Actually, there's documentation that explains how to handle it. So, we see why it's a circle. Yeah, it's just saying: here we go, this is why it's a circle.
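On the timestamp point raised here: one concrete convention for this is `SOURCE_DATE_EPOCH` from the Reproducible Builds project, where the build reads a fixed epoch from the environment (typically the last commit's timestamp, e.g. from `git log -1 --format=%ct`) instead of the wall clock. A minimal sketch, with the function name being my own illustration:

```python
import datetime

def build_timestamp(env):
    """Use SOURCE_DATE_EPOCH when set, so repeated builds of the same
    commit embed the same date; otherwise fall back to the wall clock,
    which is not reproducible."""
    epoch = env.get("SOURCE_DATE_EPOCH")
    if epoch is not None:
        return datetime.datetime.fromtimestamp(int(epoch), tz=datetime.timezone.utc)
    return datetime.datetime.now(tz=datetime.timezone.utc)

# Pinning the epoch makes the embedded date deterministic:
print(build_timestamp({"SOURCE_DATE_EPOCH": "1609459200"}))  # 2021-01-01 00:00:00+00:00
```

Many compilers and packaging tools honor this variable, which removes one common source of bit-for-bit differences between independent rebuilds.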
There are solutions, but now you have extra steps, which really comes back to the earlier point. There are ways to deal with build timestamps. And there's new work trying to solve this by separating the metadata of the build process from the actual input and output binaries, so the output file carries just the binary and the build metadata lives alongside it; there's new work happening around that.

There are also tools that, when comparing two builds, look only at the actual flow of the assembly code. Sometimes builds differ only in details like which registers were allocated, for different reasons, but such a tool was still able to find out that the two binaries are identical. And if you can look at the metadata of the build, you'd want the build tree to be available as well.

And there are solutions on the compiler side so you can set dates: set the date to whatever you want, for example the date of the commit. Sorry? Sometimes we want the date of the last commit, yeah. Oftentimes you get the build time, but we really want the date of the last commit; nobody cares when it was actually built.

Great discussions. All right, I'm going to the bar. Thank you all for coming. All these projects would love more input and more thoughts and discussions, so come join us. Thank you. All right, thanks.