So, hi everybody. Welcome to Dependencies: Do's and Don'ts. I'm Guy Bar-Gil, Head of Product-Led Growth at MEND, and this is Rhys Arkins, VP Product at MEND. We'll introduce ourselves further in just a little bit, but for now let's get started. So, today we're going to talk about sandwiches, which I'm sure is exactly what you were expecting from a talk called Dependencies: Do's and Don'ts. Also dependencies. Specifically, we're going to talk about one particular sandwich: a framework we put together called the Dependency Management Sandwich. Each layer of the sandwich represents a key pillar of dependency management, and what we're going to do today is go from the bottom to the top, layer by layer. We'll start with me talking about transparency, which in this context means creating visibility into the dependencies your application is using. Then Rhys will talk about security and maintainability. We'll bounce back to me for a short story on why it's important to understand the legal implications of your dependencies' licenses, and finally Rhys will finish off with some words on automation. The talk is titled Dependencies: Do's and Don'ts, but we're going to focus mostly on the do's, to make sure it's practical and you have something you can take home and implement. So, with that, I'll introduce ourselves a little bit. As I mentioned, I'm Guy Bar-Gil, Head of Product-Led Growth at MEND. I've been at MEND for the past four years in the product department. In my free time I like to kitesurf (it's the one with the kite that looks like a parachute, in the ocean), and I play and watch a lot of basketball. And Rhys? Hi, my name is Rhys Arkins. I'm VP Product at MEND. I joined MEND when it was known as WhiteSource in 2019.
I joined through the acquisition of a company and DevOps tool I founded called Renovate Bot, and these days I'm responsible for developer tooling and supply chain security in the MEND portfolio. I omitted the free-time part because I couldn't think of anything as exciting as kitesurfing and basketball. Thanks, Guy. Yeah, no problem. Okay, so let's get started. We'll begin with the bottom loaf of bread: transparency. It's really the one holding up the whole sandwich, all of dependency management. Again, we're going to go layer by layer, and we'll start with transparency, which in this case means creating visibility into the dependencies your application is using. But in order to create that visibility, you need to know what a dependency really is. Usually when I speak to developers and say the word dependency, they think of an open source package or a third-party library. That definitely falls under the umbrella of the definition, but the real definition is much broader, and that takes us to do number one: treat anything you use in your app that you didn't create yourself as a dependency. That of course includes open source packages, which is definitely the most common use of the word, but also Docker images from Docker Hub; common code written internally by other teams that you're using, so inner source; infrastructure-as-code files, maybe Kubernetes manifests; and source files that you very hopefully didn't just copy and paste from GitHub into your app. We see some strange things at MEND. All of these pieces of external code need to be managed. That could mean scanning them for security vulnerabilities, it could mean making sure their licenses comply with your organization's policies, and it could also just mean day-to-day maintenance.
So, making sure your components are relatively up to date and you don't amass too much technical debt in your apps. Most of the organizations we see are definitely mapping their open source components and packages, but if you're not also taking into consideration the Docker images you're basing your deployments on, or the infrastructure you're actually running the application on, then the job is really only partly done. If you do this, you're at a great starting point for all of your direct dependencies. But your dependencies also have dependencies that you need to consider, and that takes us to do number two: use lock files and consider pinning your dependencies. We'll begin with lock files. If anyone doesn't know what a lock file is, it's just a file that locks all of your dependency versions in place, and this includes direct dependencies, of course, but also transitive dependencies, which are your dependencies' dependencies. The reason this is important is that it creates a situation where, if somebody clones your repo and builds your app, they're going to use the exact same dependency versions you were using when you built and tested your app, and this helps resolve many "works on my machine" types of issues. So it's a good practice to use a lock file. As for pinning dependencies, meaning using specific dependency versions in your package files as opposed to semver ranges, this is important because it helps you prevent unexpected upgrades. When you upgrade a dependency, there are two risks that come into play. The first and most common one is that you might break something, and this is a problem if it happens unexpectedly, because you won't necessarily know what's breaking the build.
You won't even know that you upgraded the dependency, and then figuring out what's breaking can take a lot of time, a lot of frustration, a lot of back and forth, and in large enterprises it can be a huge waste of time. Pinning your dependency versions prevents that. The second, much less common but much more catastrophic risk is that malicious code was inserted into the new dependency version, and you definitely want to vet dependencies before you upgrade, so you don't want it happening unexpectedly. But the reason I say to consider pinning your dependency versions, and not to necessarily pin them, is that pinning also has some downsides. If, for example, you're writing a library that's meant to be used by a downstream developer, and this developer is using maybe a hundred other libraries, and each of those libraries pins its dependency versions, the developer could end up with ten versions of the same dependency in their application. That brings two problems: one from an application-size perspective, if that's a consideration, and the second is that it's a burden from a dependency management perspective in general. But if you're writing a web app that's not meant to be consumed downstream, then pinning your dependencies is probably the best way to go. To summarize: if you're writing a library meant to be used downstream, consider using semver ranges for a better user experience; if you're writing a web app, pinning dependencies is probably a good idea. Then let's move on to do number three, the final one for this section: track and communicate the project's dependencies using an SBOM, a software bill of materials. This is the same concept as a physical product's bill of materials. If, for example, you're an automobile manufacturer, you're going to keep a list of all the components that go into each car.
That way, if there's a defective piece, you know who has a car with the defective piece, and you can communicate it to them and send a replacement part, issue a warning, or in a severe case maybe issue a complete recall. It's the same idea with a software bill of materials. You know exactly which third-party components go into each piece of software, and then if a critical vulnerability is discovered in any of your applications, first of all you know the problematic component is there, you know who you can communicate that to, your customers can make sure they're taking the proper mitigations on their end, and you can work to fix everything on yours. Overall, it's definitely the most efficient way to deal with defective or vulnerable components. One common criticism of SBOMs is that an SBOM is just a snapshot in time. It's a very widely used criticism, but it's only partly true, because the only part of the SBOM that's not really static is the vulnerabilities. The dependency names are static, the versions are static, and the license is static for each version; only the vulnerabilities are being continuously discovered over time. So if you're receiving an SBOM, what you want to do is make sure that the initial risk posture is acceptable to you, and then monitor over time for new critical vulnerabilities, or vulnerabilities that might be problematic for your organization, and make sure that risk posture doesn't get out of hand. And with that, over to Rhys. All right, thanks Guy. We'll switch over so I can do my own slides. So, as we covered the transparency part first, one way you could think about this is that there's this joke people have that nobody wants to know how sausages are made, or nobody wants to know what goes into the sausage.
The reality is it's a little bit like that with software. We've had a bit of a habit of people being quite neglectful about what goes into their software, and the reality is that anyone in the business of making software needs to know what's gone into that software. As I'll get to in a couple of slides, the time for excuses on that is over, and the better software teams are at understanding what goes into their software, the better prepared they are for the future. So on that note, this section is security, but we're starting with maintainability, because security and maintainability are very closely related. I've got a picture of a desk here because it brings to mind the concept of a messy desk versus a clean desk: it's very hard to be great at security if you have a difficult-to-maintain project. This can include, for example, not declaring your dependencies in an easily maintainable or parsable way, but it also means scenarios where you are what we might call deep in technical debt, where you might have some dependencies that are two years behind or even more. Often software projects and software teams take an approach of "if it ain't broke, don't fix it," and there's of course some merit to that. But the challenge is that you can never be sure it won't be broken in the future, that one day you won't have a Log4Shell kind of incident where somebody says, okay, this dependency needs to be swapped out or updated immediately, yesterday.
And so software projects that are poorly maintained when it comes to dependencies are at a much higher technical risk from a security point of view, even if at this moment in time they seem to look okay, even if they're one of those very rare projects with zero vulnerabilities or something like that. If your dependencies are falling further and further behind, it greatly increases the chance of a scenario in the future where you have a vulnerability that you can't patch quickly, that you can't update because of how far behind you got. That can be things like peer dependencies, breaking API changes, and so on. That's why maintainability is really a building block for security. All right, so to do number one in security and maintainability. As Guy mentioned earlier, the title of the presentation is Do's and Don'ts, but we decided we should actually just put everything as a do, because do's are a lot more actionable than don'ts. So they're all do's. Number one, the highest priority in security and maintainability, is a process for vulnerability prioritization. The reality is that not all vulnerabilities are created equal. And the other reality that people don't like to admit easily enough is that pretty much everybody has vulnerabilities. Even enterprises using the leading software and vendor tools in this area will have dozens, hundreds, or thousands of open vulnerabilities. Hopefully they're not critical or high, but even in that case, hope is not a strategy. So the reality is that there are a lot of vulnerabilities, and you do need the ability to prioritize. That prioritization should go beyond just the CVSS score. There are other factors to consider, and as an industry we're still getting better at these, but for example the business risk and exploitability.
For business risk, that's things like: does this application contain the type of data which, if it was leaked or exploited, would cause a massive problem due to either regulation or reputational damage? Business risk could also include: is this a critical system? For example, if somebody is an ISP, a network provider, and the system is used for provisioning new customers, the business risk would be pretty high if it was, say, a DDoS risk, meaning we may not be able to provision customers for days or weeks if we don't fix this. Exploitability is one that for many can be difficult to measure, but as an example of a highly exploitable case, take Log4j, Log4Shell: the reason that one was essentially dinner-table talk was its exploitability, because it was very easy for people to find and exploit those who were using the vulnerable versions. That's why it rated very high on exploitability. The severity of course can't be ignored, but if a tool is internal, or is very difficult to exploit, then even with a high severity it could be rated a lower priority to fix than one which is easily exploitable. Naturally, the availability of fixes is another factor: when there are problems which don't have a fix and would take a long time to mitigate, people generally favor the ones they can upgrade to fix, as that's going to be more of a low-hanging fruit. And then the final one is reachability, or effective usage analysis.
If it's not possible to apply a quick fix, running analysis to determine whether the vulnerable code is actually reachable is essentially another way of asking whether it's exploitable. Let's say it was Log4Shell: if that particular line of code was impossible to reach due to the way you were using the tool, which unfortunately wasn't very common for Log4j, then you're not truly exploitable. It's still a good idea to get rid of the vulnerability through upgrading, but it's a very important factor to know whether that line of code can be reached. For example, the Go ecosystem has recently attempted to do this as part of their free and open source approach, trying to bring reachability analysis into vulnerability reports to let people prioritize. Commercial solutions, including MEND, have this capability as well. So do number two is having a process for remediation and acting quickly. A favorite quote of mine is "the enemy always gets a vote." This is a little bit different from most other security concerns. If you have a security concern in your own proprietary code, say somebody reported to you that it's possible to leak user details with a very carefully crafted string, or by playing around with cookies or CSRF or something like that, it's feasible that you could look at that and say: okay, it was reported to us by a white-hat researcher, the chances of it being exploited by someone else are hopefully low, and we have a week to fix it; a week is a reasonable target. But with open source vulnerabilities, generally the baddies, the attackers, will know about the problem as fast as or faster than you will as a user of open source. And by that we mean they get the vote.
They get the vote on whether they're going to try to exploit you; you don't just get to choose that waiting a week is fine. In open source vulnerabilities, the attacker always gets a vote, and in a case like Log4j, they were voting right away: scanning and attempting to exploit. So you don't get the only vote about whether to act quickly when it comes to open source security. Yes, please go ahead. [Audience question] I'll repeat the question: do I have data about whether open source vulnerabilities are exploited faster than non-open-source vulnerabilities? The answer is no, not on exactly what you're asking, because the challenge is that the people actually exploiting are generally not documenting it; you don't know what you don't know. But what there is data on, though I don't have it offhand, is how quickly open source vulnerabilities are getting exploited over time. If you go back, say, five years, to maybe the first point when it was mainstream that there are CVEs and reporting, it might take weeks or months before people would attempt to exploit a vulnerability. We're seeing that in some cases drop down to hours. Again, harping on the Log4j one: there were people running scans, which could be detected, based on essentially rumors on Twitter before it was even mainstream in the databases. That was a very exceptional case, but still, people are seeing it go from weeks to days down to hours. It's getting almost to the point where you need to assume it's real time: once you hear about it, there is someone attempting to exploit you.
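To make the prioritization factors from earlier a bit more concrete, here's a minimal scoring sketch in Python. The weights, field names, and multipliers are all invented for illustration; a real system would pull exploitability and reachability data from external feeds rather than hard-coded flags.

```python
# Toy vulnerability prioritization: combine CVSS with business risk,
# exploitability, fix availability, and reachability. All weights and
# field names here are illustrative assumptions, not a real scheme.

def priority_score(vuln: dict) -> float:
    score = vuln["cvss"]                     # 0-10 base severity
    if vuln.get("business_critical"):        # e.g. customer-facing, regulated data
        score *= 1.5
    if vuln.get("exploit_in_the_wild"):      # actively exploited (like Log4Shell)
        score *= 2.0
    if not vuln.get("fix_available", True):  # no upgrade path: favor quick wins first
        score *= 0.5
    if vuln.get("reachable") is False:       # analysis proved the code path is unused
        score *= 0.25
    return round(score, 2)

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_in_the_wild": True,
     "business_critical": True, "fix_available": True, "reachable": True},
    {"id": "CVE-B", "cvss": 9.1, "fix_available": False, "reachable": False},
    {"id": "CVE-C", "cvss": 6.5, "fix_available": True, "reachable": True},
]
ranked = sorted(vulns, key=priority_score, reverse=True)
for v in ranked:
    print(v["id"], priority_score(v))
```

Note how the unreachable, unfixable CVE-B drops below the medium-severity but reachable CVE-C, which is exactly the reordering that a pure CVSS sort would miss.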
In terms of future planning, say the next five years or sooner, you pretty much have to plan on the fact that the attacker will be responding in real time. Maybe even in the same way that people have built automation and bots to trade on SEC disclosures, I think you'll see people attempting to automate on vulnerabilities, maybe even with something GPT-3-like. That's the world we have to plan for, if it's not already happening. All right, the last point here is really important: you need a process for publishing such fixes to production. During Log4j, for example, we had a customer who was very satisfied with their ability to remediate quickly, and that was because they had the transparency part that Guy talked about earlier: they knew what they were using. When they heard the news, they didn't have to say, okay everybody, let's go scan everything and see what we have. They were confident they were already scanning everything and they knew what they had. They then proceeded to remediate, ideally automated, and when they'd done that, they'd remediated the problem out of all of their source repositories. Their comment to us was: the job's only half done, because now I have to go track down all the images we built in the past that contain the vulnerable code, and all the production environments running them, and work forwards and backwards from those. So your ability to respond to vulnerabilities is not just about how quickly your developers can remediate; it's also a DevOps challenge of how quickly you can get that fix into production. A problem which is remediated only in a source repository, but not in production, is essentially not remediated at all. Okay, so this is an oldie but a goodie. I know there are at least two or three presentations this week that still mention Equifax.
This is from 2017, and actually the fact that it's old is relevant to why I'm bringing it up. The headline says it all: Equifax officially has no excuse. Even five years ago, it was already considered mainstream in open source security that there is no excuse for not upgrading and not patching known, disclosed open source vulnerabilities; you are liable. Equifax was held liable; they were brought in front of the US government. So we're five years on from the point where people already said, in a mainstream way, that there is no excuse. All right, now getting a little more into the proactive side. Do number three is to ensure your dependencies are actively maintained. So far I've been talking about the concept that there's a fix you can apply, but there can be scenarios where vulnerabilities are found in open source packages and there is essentially no fixed version. That can be because the library is no longer maintained, and there wasn't somebody around to publish a fix. Or it may be because the vulnerability was irresponsibly disclosed through an issue or a pull request, and because the project hadn't released in a year, it took days or even weeks for the project to release a new version. So it's useful to audit your dependencies regularly. When was the last release? If there hasn't been a release for a year, in rare cases it might be because the project is feature-complete with no dependencies, but in the majority of cases it's because something is wrong with the maintainability of that open source project. Are commits happening on the source repo? Are issues and pull requests getting responded to? Again, if it's feature-complete you probably won't see issues and pull requests piling up, but if it's not, you will.
And then importantly, are security patches being applied? That's probably a very trailing indicator: if you're looking at a dependency and seeing unapplied security patches, it's probably already clear that it's the weak link in your supply chain. So as well as keeping your own desk clean, having well-maintained dependencies is an important characteristic. The final do for security and maintainability is to be proactive and maintain recent dependencies. We have a figure here, 90%, and I'd like to explain it. We looked at disclosed vulnerabilities in the npm ecosystem in 2021. You look at the CVEs, and then at the fix recommendation, like "fixed in version 1.2.3", and we compared the disclosure date of each vulnerability to the published date of the earliest fixed version. What we found was that in 90% of cases, that fixed version was published the same day as or earlier than the disclosure. What a lot of people don't realize is that vulnerability fixes are often made days or even weeks before the disclosure. They're often done in a discreet way, like "fixed memory problem" or "re-validated string"; they don't say "fixed the vulnerability". Sometimes they do, of course, and there are people looking for those, both good and bad, white hat and black hat we'll say. So this is not what I would call a simple recommendation, but if, for example, all you did was stay up to date with dependencies, just keep the latest versions, even blindly, then in 90% of cases you would automatically have remediated any vulnerability that you could have, because in 90% of CVE cases, at the time you hear of the CVE, the latest version is already non-vulnerable. This is also a big change compared to the past, where you might have had to apply monkey patching, manual patching of dependencies. Today you mostly can just upgrade.
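The measurement behind that 90% figure can be sketched in a few lines: for each CVE, compare the disclosure date against the publish date of the earliest fixed version. The CVE IDs and dates below are invented for illustration, not real data.

```python
# Sketch of the fix-before-disclosure measurement: a CVE counts as
# "fix available at disclosure" if the earliest fixed version was
# published on or before the disclosure date. Data here is made up.
from datetime import date

cves = [
    {"id": "CVE-X", "disclosed": date(2021, 3, 10), "fix_published": date(2021, 3, 1)},
    {"id": "CVE-Y", "disclosed": date(2021, 6, 2),  "fix_published": date(2021, 6, 2)},
    {"id": "CVE-Z", "disclosed": date(2021, 9, 15), "fix_published": date(2021, 9, 20)},
]

fixed_first = [c for c in cves if c["fix_published"] <= c["disclosed"]]
share = len(fixed_first) / len(cves)
print(f"{share:.0%} of these CVEs had a fix published by disclosure time")
```

Anyone tracking "stay on latest" would have been protected for exactly the `fixed_first` group, which is the talk's point about the 90% of cases where the fix predates the disclosure.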
All right, so the benefits of up-to-date dependencies. The first is vulnerability prevention: in some cases, depending how you define it, you can have upgraded to the fixed version of a dependency before the vulnerability is even known. The point being, you can hopefully have already upgraded to a non-vulnerable version before the people who might attack you are even aware that the old version was vulnerable. Of course, you also get the latest features, APIs, and bug fixes; that's just a general software benefit. And the final point is avoiding zero-day fire drills. We've borrowed a meme here. This is the challenge I was describing earlier, where you've got, say, an out-of-date project and a vulnerability alert saying you must update, and you've really got this choice: do I remain vulnerable until I'm satisfied that this fix is okay and doesn't change anything for me, or do I succumb to the pressure of updating without adequate testing? That's the situation you find yourself in when you have a poorly maintained project that's far behind on dependencies. All right, a final point here about technical debt. When we say to people, stay updated with dependencies, a lot of people have a hesitancy about it. They say: well, we don't have good tests, all the people who wrote this code left long ago, we're just maintaining it, and I don't really want to take responsibility for it. So we believe strongly in safety of the crowd in this case. As an example of safety through crowd testing, you may have weak or even no tests, but if there are hundreds or thousands of other projects that have tested the same upgrade you're about to apply, and there's 98% test passing across that crowd, that would be a very positive indicator for you. Similarly, safety through crowd adoption is a second factor.
For example, if there's an update you should apply and you're a bit nervous about it, but you find out that 25% of users of this package are already running that version in production, that again would be a very strong indicator. Even if you yourself have no confidence based on your own tests or your own technical-debt situation, if you know that a new version passed tests and is widely adopted, you may be able to take that leap without having confidence in your own abilities. All right, last one here. It's a wisecrack, but updating dependencies is like going to the dentist: if you only go once every five years, it's going to hurt. That's a good way to summarize. All right, so the last two sections are going to be shorter than the first two. I'm going to tell a short story now about two companies called Small Comp and Large Comp that are in the middle of an M&A. It's based on a true story from 2018, by the way, but the names Small Comp and Large Comp are obviously fake for the sake of anonymity. So we have Small Comp, which is a small software company, and Large Comp, which is an enterprise-scale software company. As part of any standard M&A or OEM agreement between two software companies, Small Comp was required to do what's called an open source audit, which basically means listing all of their open source components as well as their licenses, so that Large Comp can ensure they're not inheriting any legal risk by taking Small Comp's dependencies into their code base. A little bit of background on licensing: all code is licensed under certain legal terms. Some licenses are very permissive, like MIT, for example, where all you're really required to do is pass on a copy of the copyright and the license, and then you can do pretty much whatever you want with the software. And some are less permissive, like GPL, for example.
These are copyleft licenses. If copyright means all rights reserved, copyleft is the opposite, meaning no rights reserved: basically, anything attached to this component has no rights reserved, it has to be open sourced, and anybody can use it under the same terms. And Small Comp actually found a dependency licensed under AGPL, which is an even more restrictive version of GPL, inside their software. What happened is that Large Comp immediately took action on this: they took $4 million out of the deal and put it into an escrow on the side, under the following terms. Small Comp had to remove every single trace of the AGPL component from the software; they had to completely swap out the component. Also, 80% of their customers had to deploy the updated software to production. All this had to be done in a two-year timeframe, and the related costs of development and deployment were going to be deducted from the $4 million. So if it cost $500,000 to develop and deploy the fix, then Small Comp's shareholders would only receive $3.5 million from the deal. It doesn't sound too complicated, right? Removing one open source component and deploying it to production. But Small Comp actually hit two main obstacles during this time. The first was that just removing the component was estimated at about one person-year of development and QA, and again, Small Comp is a very small company, so they don't have many resources and could only afford to put one person on the task. So that's one year of the two pretty much gone right there. The second is that Small Comp's customers are hospitals, and if anybody here has ever worked with hospitals, you know things are very, very manual when it comes to software and deployment.
So Small Comp's technicians were actually required to fly out to all of their customers across the globe and persuade them to update the software from the AGPL version to the non-AGPL version, all within the year remaining once the fix was ready. It's not an easy task, but they were actually able to do it: they fulfilled the terms of the escrow after one year and eight months, and it only cost them $500,000. Not too bad, I guess, but still, 500K for one bad license is quite a price to pay. At least there was a relatively happy ending here. I think there are two things we can take away from the story, one from the developer perspective, the other more from an organizational perspective. The first is that as a developer, you really need to understand your organization's legal policies. You can't be expected to be a licensing expert yourself; you should have somebody who is an expert create clear policies, and you just need to make sure you understand them, so you don't cause $500,000 worth of damage. The second, from an organizational standpoint, is that you need to make sure that, first of all, you have these policies, and that you're also scanning, so that when a new component is included it has a license that's acceptable to you, and also that during upgrades the license isn't changing. A dependency in version 3.0 can have license X, and 3.1 can already have license Y; you need to make sure the license stays consistent when you're upgrading. Yeah, Rhys, final layer. Thanks. So the final one is automation. Let's skip to the slide. Essentially, what we're looking to do is automate all of the things we've spoken about. In terms of transparency, that was knowing what you have, so: automated scanning with every build, every day. That's essentially the key requirement there.
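As a taste of what an automated gate like that could look like, here's a minimal license-policy check of the kind the story motivates: fail the build on a disallowed license, or when an upgrade changes a dependency's license. The allowlist, package names, and versions are hypothetical examples.

```python
# Minimal license-policy gate: report a violation if any dependency's
# license is off the allowlist, or if an upgrade changed a dependency's
# license. Policy and dependency metadata below are hypothetical.

ALLOWED = {"MIT", "Apache-2.0", "BSD-3-Clause"}

def check_licenses(deps, previous=None):
    """Return a list of violation messages (empty list means the gate passes)."""
    previous = previous or {}
    problems = []
    for name, info in deps.items():
        if info["license"] not in ALLOWED:
            problems.append(f"{name} {info['version']}: disallowed license {info['license']}")
        old = previous.get(name)
        if old and old["license"] != info["license"]:
            problems.append(
                f"{name}: license changed {old['license']} to {info['license']} "
                f"between {old['version']} and {info['version']}"
            )
    return problems

deps = {
    "libfoo": {"version": "3.1", "license": "AGPL-3.0"},
    "libbar": {"version": "2.0", "license": "MIT"},
}
prev = {"libfoo": {"version": "3.0", "license": "MIT"}}
for p in check_licenses(deps, prev):
    print("BLOCK:", p)
```

In CI, a non-empty result would fail the pull request, which is exactly the "best-case scenario for breaking builds" described in the automation layer.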
Like I mentioned in the Log4j scenario, you want to already know what you're using, not have to go out and scan when you think there might be a problem. That's preparation. For security, automating the prioritization and remediation is really important at any scale, because otherwise it's very hard to manage things manually. For maintainability, automate dependency updates. Keeping relatively up to date is something you shouldn't burden your teams with doing manually. Use a solution like Renovate Bot from Mend, for example; there are alternatives as well, but do something to stay up to date with dependencies. Staying up to date is a big part of a good security posture, because it means you're ready to respond. And then finally, the legal side. Like Guy said, this one is probably the best case for breaking builds: if a pull request introduces a license that doesn't comply with your policies, you want to stop it as early as possible. You certainly don't want it getting into production, so the best place is to scan and block builds for it, in an automated way. Don't expect developers to be manually inspecting every dependency change in a lock file, every license, and things like that. I think we're good, yeah. Thanks, everyone. I think we have five minutes for questions if there are any remaining. You ask them and we'll repeat them. Yeah, hi. Yeah, thanks. So the question from Cornelius was about automation and prioritization, and that some of those factors are very hard to automate. I mean, this was a sponsored session, so I'll take the opportunity to say this is one that is very hard to do without help. The reality is that automating prioritization is a very hard challenge, and it can be difficult to do by cobbling together your own tools.
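A build-breaking license gate like the one described here can be a very small script. This is a hedged sketch assuming a CycloneDX-style SBOM structure; the denylist and the demo components are made up for illustration:

```python
# Example policy: licenses that must never reach a build.
DENYLIST = {"AGPL-3.0-only", "AGPL-3.0-or-later"}

def violations(sbom):
    """Return (name, version, license) for every denied component
    in a CycloneDX-style SBOM dict."""
    bad = []
    for comp in sbom.get("components", []):
        for entry in comp.get("licenses", []):
            lic = entry.get("license", {}).get("id")
            if lic in DENYLIST:
                bad.append((comp["name"], comp.get("version"), lic))
    return bad

# Illustrative SBOM, as a pipeline step might have produced it:
demo_sbom = {
    "components": [
        {"name": "left-pad", "version": "1.3.0",
         "licenses": [{"license": {"id": "MIT"}}]},
        {"name": "some-lib", "version": "3.1.0",
         "licenses": [{"license": {"id": "AGPL-3.0-only"}}]},
    ]
}

for name, version, lic in violations(demo_sbom):
    print(f"BLOCKED: {name}@{version} is licensed {lic}")
# In CI you would then exit non-zero when any violations exist,
# e.g. sys.exit(1), so the offending pull request never merges.
```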
There are companies, and not only ours, that specialize in things like exploitability scoring. In general, most companies would not have the capability to do prioritization scoring and automation on their own. Also, like I mentioned, some of this information, such as whether a vulnerability is being actively exploited, or attempted to be exploited, is not easy to gather on your own. That is data you need to pull in in real time from a third-party source. So this is one where, realistically, you need to get help: there does need to be a system that can sort and prioritize these things for you. Some of the factors, though, will need to be internal. No third-party vendor can tell you that this app is public facing and this app is internal only, or that this app contains personally identifying information and this one does not. That's the type of metadata tagging that will always need curation and annotation from development teams, to inform whatever system you have. I'll summarize the question first. Thank you, it's a very good one. So it's about the fact that it's very common, especially in, say, a Dockerfile, to not specify exact versions and just say, hey, let's install Debian packages via apt, and things like that. You're correct, it's very common, and it's not irrational: it makes things easier on people, and it means that when they build, they get the latest versions. And 99% of the time, the latest versions are going to be good for you. The challenge is that it's not deterministic, and one day everyone can just come in and the build is broken for everybody. Everybody running docker build, and it doesn't work.
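The split between third-party signals and internal metadata can be sketched as a toy scoring function. The field names and weights below are purely illustrative assumptions, not how any real product scores findings:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float            # base severity from the public advisory
    known_exploited: bool  # real-time intel from a third-party feed
    reachable: bool        # is the vulnerable function actually called?

@dataclass
class AppContext:
    internet_facing: bool  # internal metadata, curated by the dev team
    handles_pii: bool      # ditto: no vendor can tell you this

def priority(f: Finding, app: AppContext) -> float:
    """Combine external signals with internal app context (toy weights)."""
    score = f.cvss
    if f.known_exploited:
        score += 3.0       # active exploitation jumps the queue
    if not f.reachable:
        score *= 0.3       # strongly downrank unreachable code paths
    if app.internet_facing:
        score *= 1.5
    if app.handles_pii:
        score *= 1.2
    return round(score, 1)
```

The external inputs (`cvss`, `known_exploited`) come from feeds you'd buy or pull in; the `AppContext` fields are exactly the annotations only your own teams can supply.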
So I think the future of software is not non-determinism. For example, when building images, this concept of Dockerfiles that run apt-get, that's like taking software back 10 years, to when people would just pull things in off CDNs or copy-paste off internet sources. The reality is we need better package management and the ability to lock dependencies, so that when building an image you can reference a manifest, ideally a locked manifest, and get repeatable builds. Something you can do to make things a little better in the short term is to use exact versions where possible. So where you can install a specific version of a dependency, including with apt, do so. On the flip side, people will say, yeah, but then I'll forget to update it. That's where the automation part comes in: pin to an exact version, then automate the updating of those exact versions. This is a scenario where automation gives you capabilities that are realistically beyond humans alone. Nobody's time is so worthless that they should be non-stop checking Dockerfiles for versions scattered throughout and updating them manually. Please go ahead. So the question is about lock files and the fact that they're not present everywhere: do we have recommendations for how to deal with that? And I think we have a hard stop. So Guy, a quick one. We don't have a hard stop or anything. Okay, yeah, you can answer. Yeah. I mean, if you don't have a lock file and you can't lock transitive dependency versions in place, then there's only so much you can do. But first of all, you can lock the direct dependency versions in place, and you can get a lot done that way. Yeah, I don't have too much time. There are ecosystems lacking lockable dependencies, and it's low-hanging fruit in terms of security.
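The pin-then-automate approach described here looks something like the following in a Dockerfile. The digest placeholder and the package versions are illustrative only, not real current values; you'd let an update bot (such as Renovate) keep the pins fresh:

```dockerfile
# Pin the base image by tag, and ideally by digest, for repeatable builds.
# <digest> is a placeholder; an update bot would pin and refresh it.
FROM debian:12.5@sha256:<digest>

# Pin apt packages to exact versions instead of "latest at build time".
# Version strings below are examples, not real current versions.
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl=8.5.0-2 \
        ca-certificates=20240203 \
    && rm -rf /var/lib/apt/lists/*
```

Pinning makes the build deterministic; the automation then proposes pull requests whenever a newer pinned version is available, so pins never go stale.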
The industry collectively should be doing better. Ecosystems like Maven and Gradle, for example; NuGet does now have lock files available, so that has improved. But in general, that's a weakness in those ecosystems that should lead either to those ecosystems improving or to people who care about security looking at alternatives. Okay, we've reached our time. So thank you very much, everyone, for attending. Thank you for the questions. As I understand it, this will be available on YouTube in the coming weeks, thanks to the Linux Foundation. Thanks, everyone. Thank you.