Thanks, Simon. Wonderful. Thank you very much, and a big welcome to everyone. Thank you for coming to this session: software supply chain risks, are we ever doomed? My name is Simon Maple, and just as a bit of background on myself: my background is very heavily in the engineering space. I've been in engineering teams, DevRel teams, and development teams for just over 20 years. I've been at Snyk for just over four years, and my role there has varied from leading the DevRel and community team to now being Field CTO. I work with a lot of our customers, and a lot of prospects as well, helping them be successful with their security programs, and one of the big topics we talk about today is supply chain security. My background is heavily Java and heavily community oriented, so I really enjoy going to conferences, communities, and user groups to talk with practitioners, as well as, these days, a lot more senior execs in organizations. So what are we going to talk about today? Well, you're obviously here because you care about software, and hopefully you care about the security of that software. One of the areas we're specifically going to look at is the open source software that you're using. There are a few ways we can look at supply chain security. At Snyk, the way we look at it, there are three main aspects. One is the components: the constituent pieces that you pull into your build environment and deliver either as part of your application or as part of the cloud native foundation your application runs on. The second piece is the pipeline: going from a Git repository through a build server, maybe through an artifact repository, all the way to production. The process of pushing code through a pipeline.
And the third piece that we look at as a supply chain security concern is the plugins: the additional components that you use as part of that CI process. Codecov, for example, which was recently hit in a supply chain attack, is exactly the kind of third-party artifact we bring into our CI environment. Now, of course, one of the big complications here is the fact that some of these pipelines have pipelines themselves. An open source library has its own pipeline, and if that were attacked and breached, it could affect the pipeline that I am building for my own application. This is one of the things that we call a cascading attack. Now, the area we're going to focus on here is mostly the open source software supply chain, and this impacts us all. I want to reflect on this picture for just a moment. When you look at this photograph, I wonder what comes to your mind. Maybe you see this as an amazing, futuristic outlook on the world, the future of our education, of online learning. Maybe it's the uprising of our robot overlords. For me, I ask myself a couple of things. First of all, how much is this robot learning about the environment it's in, about my house, potentially about my children? Where is the information this robot is gathering being stored? Is it being stored in a safe place? Do I even know where it's being stored? What if the robot gets updates from a specific upstream source, and that upstream source is compromised? What happens then? Could it talk to my child? Could the baddies see areas of my house that I wouldn't want them to see?
These are some of the things that, at this more emotional level, when it's our family or our children, would keep us up at night. We need to think about the same kinds of risks in our applications. And although the stakes feel different when it's your family versus your application, we really need to think about where we're getting these libraries and components from. Are they safe? Could they potentially harm or impact my application? So today we're going to share with you a number of real-world stories. We're going to cover, first of all, how developers are core to playing a fundamental role in a number of recent and growing security incidents. We're going to cover why you should really care about software supply chain security and why it's important to you, and leave you asking the question: where should you put your trust? So let's go back in time and take an early glimpse at how one developer, many years ago, back in 1984, Turing Award winner Ken Thompson, wrote a short essay titled Reflections on Trusting Trust. He describes in this essay how he added a backdoor into the login program of Unix, and then he continued and added a backdoor into the C compiler, and went further still, in almost a chain of attacks, by backdooring the compiler that compiled the compiler. In this article he explores the revelation of how software can be taught specific traits and pass them on to its offspring. What you effectively end up with is software that can remain compromised without a trace of the Trojan horse, as he calls it, because we trust code that wasn't written by ourselves.
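To make Thompson's chain concrete, here is a tiny, entirely hypothetical JavaScript sketch that treats a "compiler" as a source-to-source string function. The trojaned compiler backdoors the login program when it sees it, and re-plants its own injection logic whenever it compiles a compiler, so the trojan survives even a rebuild from perfectly clean compiler source:

```javascript
// Toy model of Thompson's "trusting trust" attack (illustrative only).
// A "compiler" here is just a function from source text to source text.
const BACKDOOR = '/* backdoor: accept a secret password */';
const SELF_PROPAGATE = '/* trojan: re-insert backdoor logic */';

function trojanedCompile(source) {
  if (source.includes('function login')) {
    return source + '\n' + BACKDOOR;       // trait 1: backdoor login
  }
  if (source.includes('function compile')) {
    return source + '\n' + SELF_PROPAGATE; // trait 2: infect compilers
  }
  return source;                           // everything else untouched
}

// Perfectly clean sources, with no malicious code in them at all:
const cleanCompilerSource = 'function compile(src) { return src; }';
const cleanLoginSource = 'function login(user, pass) { /* check pass */ }';

// Building the clean compiler WITH the trojaned compiler re-plants the
// trojan, so inspecting the compiler's source reveals nothing.
const rebuiltCompiler = trojanedCompile(cleanCompilerSource);
const compiledLogin = trojanedCompile(cleanLoginSource);
```

The point of the sketch is the same as Thompson's: auditing `cleanCompilerSource` tells you nothing, because the compromise lives in the binary doing the compiling.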
And this goes from application code all the way down to the compiler, to the assembler, to the CPU and so forth. It comes back to how much we actually trust. As we learn from Thompson's Trojan horse story, which dates back almost 40 years, developers have been targeted as a malware distribution vehicle, and we'll cover that with some of the more recent events. In case you had any doubt, we're seeing more and more open source software being developed. This is the number of new packages being created in each ecosystem each year, and you can see the growth in the number of packages being added year over year. This is growing the open source application software footprint. But the applications that we're building are also growing in their reliance on open source, in the number of libraries that we pull in. And we as software engineers are very accustomed to the way people contribute; many of us maintain or contribute to open source libraries ourselves, I'm sure. Now, how much do we know about the packages that we use? How much do we know about the authors, and what quality are those authors providing? A great example: a couple of years ago the npm registry jumped up to a million packages, and someone on Twitter claimed that they were the owner of that millionth package. And I thought, how do you know that? At the time, the npmjs.com website counted up 50 packages at a time, so it didn't give you a very accurate number; you couldn't tell from that. And when you registered your library, it didn't give you a number back. So how did this person know that they had the one millionth package? I reached out on Twitter and asked how he could make that claim, and he said that when the npm website showed it was about 50 or so packages away from the one millionth,
he wrote a script to automatically create and deploy 60 packages to npm. That's how he knew, and he very likely did get the millionth package. This is really a story about the quality of packages that can get into these registries, packages that we can potentially use either directly or somewhere in a dependency graph. So we have to be very mindful of that. And when we think not just about the number of libraries being added to these registries, but about the baggage that comes with them, we can see that the number of known issues being added is growing year on year. This is the number of disclosures happening per year, a couple of years out of date here, but you can see the trend really increasing largely across all ecosystems. Now, where are the risks in these registries? There's an academic paper published a couple of years ago that investigated the properties of various language-based ecosystems, and it found that well over half, 61 percent, of open source packages on the npm ecosystem could be considered abandoned, meaning there hadn't been a release on that npm package in the last 12 months. And you can see the number of downloads here, I think it's per week: huge numbers, hundreds of millions of downloads per week, on what we would consider, based on the number of releases per year, abandoned packages. Now, what is the risk of an abandoned package on npm? Well, here's an example from August of last year. Andrew Sampson, an author and library maintainer, noticed that npm had indefinitely suspended its process for adopting an abandoned package, and remember, something like two-thirds of libraries are abandoned. That suspension was because of something that happened with him. Let me give you the story.
He created a library, a developer application, called Bebop, and he wanted to register it on as many of the package registries as he could. So he looked around, and he found the Bebop name free on NuGet, Cargo and all the others, apart from npm, where it was taken. So he did some investigation to understand how he could actually get the bebop package name. He noticed documentation that said, in effect: get the author's email address using the npm owner command with the package name, email them, and CC support@npmjs.com. If there's no resolution between you and the author, the npm team will sort it out if they also consider the package abandoned. Well, that's exactly what Andrew did. Four weeks later he got a reply from npm; there had been nothing back from the maintainer. It's a little small on the slide, so I'm happy to read it out: the bebop package was given to Andrew, with a couple of caveats. He wouldn't be able to reuse any existing version numbers used by the previous author, and the suggestion, and it was only a suggestion, was to publish the first update as a major release. So Andrew was given the package, including all the previous releases, with the suggestion that his first release be a major one. Why? Well, because it's obviously going to be a different project. If Andrew had been malicious, he could have pushed a subtly changed, malicious update into this package, and then any consumers of the package could potentially have pulled in malicious code. Well, as npm mentioned, they believed this was an abandoned package. Or was it? There was another user, "ZK", who had published it eight years earlier and only realized it wasn't their package anymore when they tried to publish and were denied.
And in fact, this package turned out to be a dependency of over 30 other packages on npm. So this was a very, very dangerous move, which is why the abandoned package adoption process was removed. But there are other ways in which this kind of thing can quite easily happen, one of which was the event-stream incident. The event-stream incident, back in 2018, was one of the most targeted malicious attacks the JavaScript ecosystem has pretty much ever seen, and it targeted the maintainers and developers working on an open source project as the vehicle to distribute malicious JavaScript code. So what exactly happened? Back in 2011, the event-stream package was created, and from around 2015 on it didn't really receive many updates; it was largely in maintenance mode. One user, Antonio, published a non-malicious package, another package called flatmap-stream, to npm. And what they also did was create a pull request against the original event-stream package that added a dependency on their flatmap-stream package, which at the time was not malicious. So here is a potential open source contributor who created a pull request, probably with some good things in it, and also added a dependency on something that wasn't malicious. How was the maintainer supposed to look at this and say, this is someone I just can't trust at all? They're naturally going to look at it and think: you're trying to fix a bug, you're trying to add a feature, you're obviously a user of this project and you want to contribute back. This is the model that open source plays by. So they accepted it, and all was good.
However, an infected version of flatmap-stream, 0.1.1, was then released; this was the same user publishing a new version of flatmap-stream to npm. The very next build of event-stream would, of course, pick up the malicious version. No change was needed to event-stream itself, because it was picking up the latest matching version of flatmap-stream. So what did it do? The malicious version added a code payload that was encrypted, and a very specific string was needed to decrypt that payload. I think it was actually the name of the application it was to be used in, which in this case was Copay, I believe. Copay is a Bitcoin wallet, and that wallet consumed event-stream for, I think, two versions, and the attack went unnoticed for three months. So this was a very targeted attack on the Copay wallet, a malicious attack on Bitcoin wallets achieved through this update. A very, very dangerous incident. Now, this could never happen again, right? We would always learn from this. Well, actually, one year later the exact same thing happened. This time an existing project, electron-native-notify, had no malware in it at all at version 1.1.5, and a user added it as a dependency to a commonly used package, EasyDEX-GUI. No problem there at all. However, when the malicious version 1.1.6 of electron-native-notify was released, EasyDEX-GUI pulled it in. And again, it was a crypto wallet: the Agama crypto wallet was built with it and included the malware. It's ultimately exactly the same pattern as the previous example that we just shared. So what is the point here? Well, we can't just rely on our own internal dev teams.
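Both incidents rely on version ranges picking up new releases automatically. As a minimal sketch, here is npm-style caret-range resolution in plain JavaScript (simplified: no prerelease tags or build metadata), showing why a package that declared a range like `^0.1.0` would silently pick up a newly published 0.1.1:

```javascript
// Minimal sketch of caret ("^") range resolution, as npm-style package
// managers apply it. Simplified and illustrative, not npm's real code.
function parse(v) {
  return v.split('.').map(Number); // "0.1.1" -> [0, 1, 1]
}

function cmp(a, b) {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] - b[i];
  }
  return 0;
}

// ^x.y.z allows changes that do not modify the left-most non-zero field.
function satisfiesCaret(version, base) {
  const v = parse(version), b = parse(base);
  if (cmp(v, b) < 0) return false;                  // must be >= base
  if (b[0] > 0) return v[0] === b[0];               // ^1.2.3 -> < 2.0.0
  if (b[1] > 0) return v[0] === 0 && v[1] === b[1]; // ^0.1.0 -> < 0.2.0
  return cmp(v, b) === 0;                           // ^0.0.3 -> exactly 0.0.3
}

// Pick the highest published version that satisfies the range.
function resolveCaret(published, base) {
  const ok = published.filter((v) => satisfiesCaret(v, base));
  ok.sort((x, y) => cmp(parse(x), parse(y)));
  return ok[ok.length - 1];
}

// A dependent declaring "^0.1.0" gets the new 0.1.1 on its next build,
// with no change to its own code at all.
const picked = resolveCaret(['0.1.0', '0.1.1'], '0.1.0');
```

That automatic pickup is exactly why no commit to the downstream package was needed for the malicious release to flow in.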
We are relying on every developer and every maintainer to uphold the security level that we need for our application. It's not just a reliance on our developers to pick a good library; it's also the ongoing maintenance of those libraries by the library maintainers themselves, and that's an extremely hard thing to track. So let's move on to developer tooling. How much thought are we giving to the security of our own development infrastructure, to the tools that our developers are using to build their own applications? This could be anything from staging environments to build tools to our CI tooling. One example is our IDEs. In early 2021, a security researcher was able to break into the VS Code GitHub repository, which gave them the capability of making code modifications to this extremely widely used IDE. I don't know how many developers use it, but I would guess it's in the millions. Now, how was this even found? Well, it was actually a security researcher riding a train. And while some of us might grab a book or check our mail, this person, bored, decided to read the VS Code source on the GitHub repository, as you do. What they found was a command injection flaw that was made possible by a flawed regular expression. The result was that, simply by opening a new pull request, the researcher was able to execute code that the VS Code CI scripts were running, with no authentication or authorization checks. By executing code on the CI servers they were effectively able to get a reverse shell, and from the CI server they could get push access,
write access to the repository source code. Fortunately, this was a security researcher, who responsibly reported it to Microsoft to fix. But in this case we were very, very lucky that this was the kind of thing identified by them. So let's take a look at open source security in general now, and talk about how the Python and JavaScript communities mitigate security issues. A group of security researchers investigated how maintainers work in open source communities, specifically their ability not just to find but to actually mitigate security vulnerabilities in a timely manner. One of the research questions was how quickly maintainers in JavaScript and Python could mitigate a newly disclosed vulnerability, and this produced some really interesting stats, particularly if you look further back in time. The research found that, on average, it takes around a hundred days for both JavaScript and Python maintainers to start mitigating public vulnerabilities. Now, as a consumer, this may well not be anywhere near quick enough. But one thing that's quite interesting to monitor is the difference between vulnerability mitigation and regular code changes per year. What this graph shows is the number of commits that mitigate a vulnerability over the number of commits in general, for regular feature additions, bug fixes and other things. And you can see that in the JavaScript community the rate of vulnerability mitigations was very low up until around 2018, when it became a much more mature practice for JavaScript maintainers. In the Python community, however, it's fairly consistent over time.
So this demonstrates the low levels of AppSec awareness in the JavaScript community before 2018, which is very interesting. Now let's go to a very specific example: an npm library called marked. Marked is a Markdown parser, downloaded millions of times every week, a very popular library in the npm ecosystem. And a vulnerability was found in it that provides the ability to do cross-site scripting with HTML entities. An HTML entity is a textual representation of a character in HTML; I'll show you how this works. In fact, let me show you the live hack now. I have an application here; let me just move this up a tiny bit so you can see it. Right. So we have this goof to-do application, running on my local machine in a Docker container. It's a to-do application, so I can add items like "buy some milk", for example, and you can see that appear. Now, to attempt a cross-site scripting attack, what I might try to do is embed some kind of script, maybe a script tag with an alert(1) or something along those lines. If I try that, it actually gets sanitized. The marked library has sanitization that identifies that I'm trying a cross-site scripting attack; it recognizes the angle brackets and stops it. But is this actually going through the Markdown parsing at all? There's no Markdown here. So maybe I should try something with a bit more Markdown in it. So if I do something like a link to https://snyk.io, or rather, the text "snyk" followed by the link, that's the way in Markdown that we provide a link.
So let's do that again, but this time I'll add a bad link. And, let me make this a touch bigger, instead of an HTTP URL I'm going to use a javascript: URL and do a very similar thing. This should take us down a Markdown route, going through the Markdown library's parsing. Let's see. Well, the library is actually doing sanitization here too, and you can see it's sanitizing that out and making sure it doesn't run, as we would expect. The reason is that it's identifying certain characters that it doesn't like. So, if it's an HTML entity issue, I can use HTML entities, representations of those characters. &#58; is the HTML entity for a colon, and I'll use numeric entities for the open and close parentheses as well. So now we've got lots of HTML entities; this is essentially an encoded way of representing these characters. Now I'll run that to see if it gets me through. It still doesn't. In fact, if I open this up, you'll see it's actually turning them back into the correct characters and still catching it. So we're almost there. But here's the problem: what happens if I provide an almost-complete HTML entity? If I type the entity but leave off the semicolon at the end, this avoids the marked library's sanitization, because we're no longer providing a well-formed HTML entity for a colon. Yet as this gets passed through the marked library, it ends up in a form that the browser will still interpret as a colon. The browser looks at it the same way it does a close anchor or close div, and says: this looks like a colon.
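The bypass just described can be sketched in a few lines of JavaScript. This is a hypothetical, simplified sanitizer for illustration, not marked's actual code: it decodes only well-formed numeric entities (with the trailing semicolon) before checking for a javascript: URL, while a browser-style lenient decoder accepts the semicolon-less form too:

```javascript
// Hypothetical sanitizer sketch illustrating the HTML-entity bypass.
// Decodes only WELL-FORMED numeric entities like "&#58;".
const strictDecode = (s) =>
  s.replace(/&#(\d+);/g, (_, n) => String.fromCharCode(Number(n)));

// Browsers are lenient: the trailing semicolon is optional.
const browserDecode = (s) =>
  s.replace(/&#(\d+);?/g, (_, n) => String.fromCharCode(Number(n)));

function sanitizeHref(href) {
  // Block javascript: URLs after strict entity decoding.
  return /^javascript:/i.test(strictDecode(href)) ? '#' : href;
}

const wellFormed = 'javascript&#58;alert(1)'; // caught: strict decode sees ":"
const sloppy = 'javascript&#58alert(1)';      // missed: strict decode leaves it

const kept = sanitizeHref(sloppy);              // slips past the sanitizer...
const whatTheBrowserSees = browserDecode(kept); // ...but decodes to javascript:
```

The gap between what the sanitizer decodes and what the browser decodes is the whole vulnerability: the sanitizer and the renderer disagree about what the string means.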
And when I run that, we have the bad link, and when I click it, if I move that up slightly, you can see we get our alert. This is the attack that this marked vulnerability enabled. If I jump back to my slides, here's what I want to show you: the date this attack was reported. So that was a little live hacking just to show you what the issue is. It was reported on May 20th, 2015. When was it actually fixed? Well, it was only merged and made available on the 29th of July, 2016, over a year later. And the reporter had created a pull request with the fix, with information about how you can actually perform the attack, plus the tests that accompany the fix. All that needed to be done was a merge. But this isn't really an issue with the maintainer; maintainers are just trying to do their best. This is one of the problems of open source maintenance and the open source model: they can't be there all the time, and there are no legal or contractual obligations for them to support you, unless of course you arrange that explicitly. Now, the other thing about this is: when a fix does go in, what happens? How long does it take you to become aware, and how fast can you consume that fix? So let's now go on to the actual maintainers of open source libraries. We're all very much dependent on them, and reliant on their hygiene. I'm sure that in all of your organizations you hold your staff to high levels of accountability, everyone, not just developers, making sure you have good passwords, two-factor authentication and so on. We can't ask how much trust to put in the libraries we use without also asking about the maintainers themselves: how easy is it to compromise them? What are their hygiene levels?
Now, in 2017, a security researcher worked with the Node Foundation to conduct research into the state of weak npm credentials among maintainers in the ecosystem. Their work was pretty devastating in revealing the truth about the lack of security hygiene. What they were able to do was gain publish access to 14 percent of npm ecosystem modules. Fourteen percent! This is huge; this is mind-blowing. These modules are downloaded tens of millions of times a week. Look at them: debug, ms, react, koa, request, really commonly used libraries. Now, the problem was rooted in insecure passwords chosen by well-known maintainer accounts: literally the word "password", or passwords reused from credentials leaked elsewhere. What could have happened if this had been a malicious actor rather than a white-hat hacker? Very interesting, and quite scary to think about. So what can we do? Well, in 2017 npm added support for two-factor authentication. npm now has well over one and a half million packages, I'm not sure what the number is today, probably closer to two million, and over the last four or five years it has supported two-factor authentication. What has the uptake been? Well, in 2019 only 7 percent of maintainers had enabled two-factor authentication, and by 2020 it had only gone up by just over 2 percent. So still fewer than one in ten npm maintainers use two-factor authentication. I'll let you ponder that against the security hygiene you run, or have to go through, in your own company, and think about where the weakest link could be here and what we can do about it.
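As a rough illustration of the kinds of trivial guesses that the credential research found to work, here is a small, hypothetical JavaScript check; the breach list, the account name, and the package names are all made up for the example:

```javascript
// Illustrative sketch of trivially weak maintainer credentials: the
// literal word "password", very short passwords, the account's own
// username or package name, and passwords already exposed in earlier
// breach dumps. All names and lists here are made up.
const BREACHED = new Set(['hunter2', 'letmein', 'qwerty123']); // stand-in dump

function isWeakCredential(password, { username, packages = [] }) {
  const p = password.toLowerCase();
  if (p === 'password' || p.length < 8) return true;            // trivial
  if (p === username.toLowerCase()) return true;                // username reuse
  if (packages.some((n) => p === n.toLowerCase())) return true; // package-name reuse
  return BREACHED.has(password);                                // breach-list reuse
}

// Hypothetical maintainer account:
const maintainer = { username: 'alice', packages: ['left-pad-ish'] };
```

Registries and organizations can run exactly this class of check proactively, which is far cheaper than cleaning up after a hijacked publish token.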
Back in the year 2000, cybersecurity expert Bruce Schneier said in his book Secrets and Lies that humans often represent the weakest link in the chain, and I think that's a reality. There's also a very interesting term, coined as Linus's Law back in 1999 by Eric Raymond in The Cathedral and the Bazaar: given enough eyeballs, all bugs are shallow. The idea being that in open source code, any bug can be found because there are literally so many people looking at it, so many contributors. Let's take a look at whether that's actually true of open source. Well, not always, okay? It's not a sweeping statement, because here there was a sudo vulnerability that allowed attackers to gain root access: a security flaw that allowed any unprivileged user to gain root access under the default sudo configuration. And this was something that lived in plain sight for over a decade. So when we think about open source, just because things can be found doesn't mean they are; things hide in plain sight all the time. So let's talk a little more about how the software supply chain impacts everyone. Well, what about open source libraries not living forever? We've reached a point where we somewhat take open source for granted. Open source registries are very open in their nature and allow developers to publish openly, as I mentioned previously. We've become very accustomed to raising an issue in a project's source code repository, asking for help, asking for a feature, asking for a bug to be fixed. But we tend to rely on these projects so much that we consider them unbreakable, in the sense that they're going to exist forever. Well, what happens when a maintainer removes their library? This is exactly what happened in 2016. A maintainer called Azer pulled tens of his open source packages from npm, largely because of a disagreement with the ecosystem.
And one of them, of course, as we know, was pivotal: left-pad. What this resulted in was a huge fallout, a breakage of CI processes and install processes that relied on something as small as left-pad. This incident showed us two important things. First of all, the weakness in how businesses failed to manage their open source software; it exposed the soft spot of relying on something that lives entirely outside their domain. But the most important thing here is that registries didn't foresee this as a problem. They weren't designed to handle this specific situation; they didn't expect anyone to pull a package and entirely break things, and as a result there was no defensive design against it. What other kinds of malicious activity can we trace back to open source ecosystems? Well, time after time we find more and more malicious packages hitting the npm ecosystem. A good portion of this is through typosquatting, the idea of someone publishing a library with a very, very similar name. If a name has a couple of e's in it, publish one with three e's and see if someone typos and pulls your malicious package instead. Or it could be someone planting something malicious deep in a nested dependency tree, as we talked about, where the chances of finding it are significantly lower. But malicious packages aren't just a thing in the JavaScript ecosystem. We saw that very recently, last year, when over 3,000 malicious libraries were published to the PyPI ecosystem. And ultimately this showed a new type of attack, a dependency confusion attack, which allowed a security researcher to infiltrate large organizations like Microsoft, Apple and so forth by publishing new public libraries that were given a private library's name. These bigger companies were publishing packages in their internal repositories under specific names, but leaving the public names unclaimed.
And when the security researcher published a package with the same name to a public repository, the ecosystems' package managers prioritized and pulled down the public libraries. Dependency confusion showed how you can exploit this design flaw in the package manager, and ultimately human error, to infiltrate organizations. There are free tools around, and I know Snyk created one, to identify where dependency confusion can exist in your build. So make sure that if you're using local repository packages, the same names are also claimed in the public repository. Now, I want to leave you with a couple of questions, and then we have some time left over for questions as needed. In terms of less or more software in the future: we as organizations are constantly building more software, constantly building more functionality. My question to you is, do you think your organization is going to be using more or less open source software? Are you going to have a greater or lesser reliance on it as you embrace the open source movement going forward? And the big question, which is really important to leave you with: who do you trust? Who are you willing to pull into your application? What are the requirements for a project to demonstrate its security hygiene and gain your trust? Is it popularity? Is it showing maintenance of their packages? Is it showing they fix their security issues? Is it a low number of issues raised, or issues being fixed in a sufficient amount of time? Who do you trust, and who should you trust going forward? That's everything I had for today, so there's a little more time left for questions. If there are any questions in the chat, I'm just checking now. I can't see any questions just yet; if anyone has any, now is the time to add them.
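To make the dependency confusion mechanics concrete while we wait, here is a hypothetical JavaScript sketch (with made-up package names) contrasting a naive resolver, which merges candidates from every registry and takes the highest version, with a safer one that never consults the public registry for a name the private registry owns:

```javascript
// Sketch of dependency confusion. Hypothetical resolver and made-up
// names; real package managers differ in detail but the failure mode
// is the same.
function highestVersion(versions) {
  return versions
    .slice()
    .sort((a, b) => {
      const pa = a.split('.').map(Number), pb = b.split('.').map(Number);
      for (let i = 0; i < 3; i++) if (pa[i] !== pb[i]) return pa[i] - pb[i];
      return 0;
    })
    .pop();
}

// Naive: merge candidates from every registry, pick the highest version.
function naiveResolve(name, privateReg, publicReg) {
  const candidates = [...(privateReg[name] || []), ...(publicReg[name] || [])];
  return highestVersion(candidates);
}

// Safer: if the private registry owns the name at all, never consult
// the public registry for it (scoping/pinning achieves this in practice).
function scopedResolve(name, privateReg, publicReg) {
  const source = privateReg[name] ? privateReg : publicReg;
  return highestVersion(source[name] || []);
}

const privateReg = { 'acme-internal-utils': ['1.2.0'] };
const publicReg = { 'acme-internal-utils': ['99.0.0'] }; // attacker-published
```

With the naive strategy, the attacker wins simply by publishing an absurdly high version number to the public registry; the scoped strategy removes the public registry from the race entirely.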
One thing I will — oh, so one question in from Manny. Hey Manny, I think that's the Manny Sakai I know, so good to see you. Do you have similar coverage for Java and the JDK as you have for JavaScript and Python? I assume you're talking about Snyk here. So in terms of Snyk and things like SCA — the ability to scan your projects and identify where vulnerabilities exist — yes, absolutely. You'll obviously have subtle differences between package managers. If you're using Maven versus npm, for example, then the way your dependency graph is created is going to be subtly different, with subtly different rules around it. But ultimately it's about creating that dependency graph, identifying where vulnerabilities can exist within it, and being able to suggest fixes for them. Hopefully that answers the question, Manny. But yes, there's absolutely the same coverage for Java, JavaScript, Python and many other languages — Go, .NET and a whole bunch of others as well. Cool, that's excellent. Thank you. Any other questions? One coming in: what was the issue with the private package names, didn't quite follow? So yes, the issue with dependency confusion — and maybe someone can correct me if this is not right — was that some package managers preferred the local package and some preferred the remote one. In the npm ecosystem, if you had a package name locally and there was no remote package with the same name, there was no conflict: it would simply pick the one it could find. But if there were two, in the Python ecosystem the default behaviour was to prefer the public package over the local one. So it was a package manager default.
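As a rough illustration of the core SCA idea described above — this is a toy sketch with invented package names and an invented advisory, not how Snyk or any real tool is implemented — you can think of it as walking the dependency graph and checking each node against a vulnerability database:

```python
# Toy sketch of the core SCA idea: walk the dependency graph and flag
# any package whose version appears in a (made-up) vulnerability DB.
# Real tools resolve real manifests and lockfiles; this is illustrative.

DEP_GRAPH = {
    "my-app": ["web-framework", "json-lib"],
    "web-framework": ["logging-lib"],
    "json-lib": [],
    "logging-lib": [],
}
VERSIONS = {"web-framework": "4.1.0", "json-lib": "2.0.3", "logging-lib": "2.14.1"}
VULN_DB = {("logging-lib", "2.14.1"): "remote code execution advisory"}

def find_vulnerable_paths(root):
    """Depth-first walk; return (path, advisory) pairs for vulnerable packages."""
    findings = []
    def walk(node, path):
        key = (node, VERSIONS.get(node, ""))
        if key in VULN_DB:
            findings.append((" > ".join(path + [node]), VULN_DB[key]))
        for child in DEP_GRAPH.get(node, []):
            walk(child, path + [node])
    walk(root, [])
    return findings

for path, advisory in find_vulnerable_paths("my-app"):
    print(path, "->", advisory)
# my-app > web-framework > logging-lib -> remote code execution advisory
```

Reporting the full path, not just the vulnerable leaf, is what lets a tool suggest which direct dependency to upgrade — the difference between Maven's and npm's graph construction changes the paths, not the principle.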
And as a result, where people only had a package in a private repository — because it was a package that they didn't want to share — if someone else published a package with the same name, it could be private-apple-whatever, and if they published that on PyPI, build tools and package managers would pull that remote one above the local instance. As a result, attackers could potentially put malicious code into those public repositories, and that would then get bundled with the applications that the Microsofts and the Apples and so forth were building. Good question, good point here. Let's have a look. "There is a clear advantage of Golang in terms of security as per your vulnerability graph. Rust is also by design way more secure than JavaScript." So in that sense, as per the vulnerability graph — are you saying that's because Go won't always jump to the latest version, that it actually stays on the lowest version in a dependency range? Is that why you're saying that? And the question here is: how do you see us replacing JavaScript with Go, Rust, targeting WebAssembly, et cetera? It's a really good question. And I actually think there's a lot more that ecosystems can do to protect us, and some of the examples here go to show that. I think there are other things like digital signing, for example, which some ecosystems do and some don't. Java, for example, is in a very good state with that kind of thing. You can't just upload packages to Maven Central; there are certain criteria that need to be met. Equally, there's digital signing there that occurs — I believe it's mandatory — whereas in npm that's not something that happens. So I don't necessarily want to see things being replaced and people running from JavaScript, but I do think these ecosystems need to step up.
And yeah, in Golang, I think what you're referring to there in terms of the vulnerability graph is really that in JavaScript you're much more likely to jump to the latest version, rather than stay on your current version. And that has advantages and disadvantages. One might argue it can be considered a security advantage, because you're automatically consuming bug fixes, including security fixes. But equally, from a stability point of view — and I'm not a Golang developer, but from what I understand — what you build in a year's time is effectively the same artifact, even if newer versions have appeared. And that's a really fine balance in terms of what people want. Do you want to stay on the latest version with bug fixes and security fixes, or do you want to stay more stable and more predictable with your versions? So I think people have different ways of going about that. We have a few minutes left; if there are any other questions, I'm happy to take them. Oh, and one of the things I always think about — this session is talking about supply chain risk, but one distinction I want to make is between what a supply chain risk is and what a supply chain attack is, because the two are very, very different. I'll ask the question — you don't need to answer in the Q&A — but if I were to say Log4j to you, or Log4Shell rather, do you think that's a supply chain attack or a supply chain risk? Interesting. I mean, it's just a logger, right? But there's an interesting difference there. What I would say is that it is a supply chain risk, in the sense that it is a library in our supply chain that we are pulling in to construct our application. It is not a supply chain attack — because who is the attacker?
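The version-selection trade-off above can be sketched side by side. This is a simplified Python illustration, assuming an invented list of available versions: an npm-style caret range floats up to the newest matching release, while Go-style minimal version selection stays on the lowest version that satisfies every requirement, which is why Go builds stay reproducible:

```python
# Sketch contrasting two resolution strategies over the same available
# versions: an npm-style caret range that floats to the newest match,
# versus Go-style minimal version selection (MVS), which stays on the
# lowest version satisfying every requirement. Illustrative only.

AVAILABLE = ["1.2.0", "1.3.0", "1.4.1", "1.9.0"]

def parse(version):
    return tuple(int(part) for part in version.split("."))

def npm_caret(minimum):
    """^1.2.0 style: newest available version with the same major, >= minimum."""
    major = parse(minimum)[0]
    ok = [v for v in AVAILABLE if parse(v)[0] == major and parse(v) >= parse(minimum)]
    return max(ok, key=parse)

def go_mvs(requirements):
    """MVS style: lowest available version satisfying all stated minimums."""
    floor = max(parse(r) for r in requirements)
    ok = [v for v in AVAILABLE if parse(v) >= floor]
    return min(ok, key=parse)

print(npm_caret("1.2.0"))          # floats up to 1.9.0
print(go_mvs(["1.2.0", "1.3.0"]))  # stays low at 1.3.0
```

The caret strategy silently picks up new fixes (and new risks) on every install; MVS only moves when someone explicitly raises a requirement — exactly the stability-versus-freshness balance discussed above.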
The attacker is the person who is potentially trying to break into someone else's app. Okay, so the attacker is the person attacking a website which is using Log4j. That's not a supply chain attack, right? They're attacking an application. Now, if you were to look at typosquatting attacks, or if you were to look at event-stream — a malicious attack on a library with the intent of getting into someone's supply chain — that is a supply chain attack. The attacker is trying to break into someone's supply chain, trying to attack someone's supply chain. Codecov is another one, whereby the attacker infiltrated Codecov because Codecov is used in everyone else's supply chains. What do they get? They get the environment variables and other secrets from anyone who uses the malicious version of Codecov. That is a supply chain attack; the others are supply chain risks. Log4j and those kinds of things — the affected library is a supply chain risk. Anything that you pull into your application could contain risk, and it's part of your supply chain, but that's different from a supply chain attack. Just one thing that I wanted to add there. Oh, sorry, I didn't scroll down — yeah, Manny got that one with the supply chain risk. So: does Snyk have features to detect squatting and potentially misspelt names in the current project? What it does is, when we identify a package squatting on a misspelt name, we mark it as a malicious package, and as a result, if you try to use it, it will be flagged as using a vulnerable — or in this case, malicious — library. Is anyone working on shadow repository services to help during times when a package becomes unavailable? I don't think there's anyone doing that, but I think this is actually a really good argument for having something like a private repository and making sure that, you know, you have those copies as well.
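As a rough sketch of how typosquat detection can work in principle — this is a hypothetical illustration, not Snyk's actual implementation — one approach is to flag dependency names that are suspiciously close to, but not the same as, well-known package names, using edit-distance similarity:

```python
# Hypothetical typosquat check (not any vendor's real implementation):
# flag names that are very similar to, but not identical to, popular
# package names. The popular-package list here is a tiny example set.
import difflib

POPULAR = {"requests", "urllib3", "numpy", "django"}

def looks_like_typosquat(name, threshold=0.85):
    """Return the popular package `name` may be squatting on, or None."""
    if name in POPULAR:
        return None  # exact match: the legitimate package itself
    for known in POPULAR:
        similarity = difflib.SequenceMatcher(None, name, known).ratio()
        if similarity >= threshold:
            return known
    return None

print(looks_like_typosquat("reqeusts"))  # flagged: close to 'requests'
print(looks_like_typosquat("requests"))  # None: the real package
```

A real system would combine this with download counts, package age, and maintainer reputation; pure string similarity alone produces false positives for legitimately similar names.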
And it's about your business and your organization making sure that it's not left up to the repository alone — what are you doing to make sure that you are catering for these incidents as well? And thank you very, very much for the questions, I appreciate them. That brings us bang up to time, which I believe is the end of this webinar. Oh, well, thank you so much, Simon. And thank you everyone for participating. Questions always make these things much more applicable to your particular scenario. We're just so happy to have you all here and we hope to see you back at a future webinar. Thanks everybody.