Hello and welcome. We have a very special episode for you today. Tim Kadlec is here from Snyk to talk about just how insecure much of the web really is. Stick around. This is The State of the Web. All right, so Tim, thank you for being here. Oh, of course. I'm very happy to be here. So let's start with fundamentals. There's so much we could say about web security. People usually think about little Bobby Tables, but there are so many more attack vectors on the web than, say, SQL injection. Yeah, there's a large number of different vulnerabilities. I think SQL injection jumps out probably because of little Bobby Tables. But there are other vulnerabilities that I think most of us have at least heard of in passing: cross-site scripting, denial of service, command injection. The list goes on and on. There are also things that people don't think about, like regular expression denial of service, where you can actually use a regular expression parsing engine to bring a system to a crawl. There is no shortage of different vulnerabilities. It's just down to how imaginative the attacker is. Right. And do you feel like people are aware of all these different vulnerabilities? And more importantly, do they feel empowered to actually fix them? I think, by and large, people have heard at least of the most common ones. You pull a room of developers together. They've probably heard of cross-site scripting. They've probably heard of injection. Maybe not some of the more obscure ones, but at least those that we've been dealing with for years, the most common ones, they've heard of. They may not realize how many different ways that vulnerability can be used against them. There's an endless number of ways you can inject something into the page. It's not just as simple as injecting a little bit of JavaScript through the URL bar or elsewhere. You can use the base element to redirect. You can use malformed HTML.
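The regular expression denial of service Tim mentions comes from patterns with nested quantifiers, where a failing match forces the engine to try exponentially many ways to split the input. Here's a minimal sketch in JavaScript (the pattern and probe string are illustrative, not drawn from any real library):

```javascript
// A classic catastrophic-backtracking pattern: a quantifier nested
// inside another quantifier.
const evil = /^(a+)+$/;

// Harmless when the string matches: little backtracking is needed.
console.log(evil.test("aaaa")); // true

// Dangerous when it can't match: the trailing "b" forces the engine
// to try every way of splitting the run of "a"s between the inner
// and outer quantifiers. Matching time roughly doubles per extra
// "a" -- around 30 characters is enough to stall a process for
// seconds, which is the whole denial-of-service.
const probe = "a".repeat(20) + "b";
console.time("redos");
console.log(evil.test(probe)); // false, but noticeably slow
console.timeEnd("redos");
```

Safer alternatives are to avoid nested quantifiers (here, `/^a+$/` is equivalent) or to cap input length before running user-supplied strings through a regex.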
It just depends on how creative you want to be. So I don't think people are aware of all the different ways that those vulnerabilities can be used, but they've at least heard of them. As far as being empowered, that's where the challenge comes in, because traditionally, security has been that other thing: something a separate security team has to deal with, a team that sits somewhere else within the organization. They have their own tools. They have their own process. They do their own audit and maybe report back to the developers on things they need to fix. Maybe, in the best of cases, they've picked out a tool and told the developers they have to conform to it or use it. The challenge has been that we haven't treated security as something that the everyday developer should be focusing on. And we haven't given them the tools and the time and the process to deal with that, at least not until recently. We're starting to get a little better, but we have a ways to go. So what could a developer say if they're being pulled in one direction by their manager, let's say, to just ship the feature? If they're being told that feature development takes precedence over security, what could they say to bring them back down to earth? I mean, that's a challenge. If they don't already have buy-in from the organization in terms of a focus on security, it's going to be very challenging for them. I think the first thing you have to do is make sure that you've taken the time to do some risk assessment. This is the application that we're using. This is the way our company works. These are all the different things that could possibly go wrong. And if they go wrong, this is how bad it could be. And then from that, you determine, OK, based on this information, we absolutely have to make sure our data is stored securely. We absolutely have to be on HTTPS.
You identify these three or four things that are at the very top of the list, and you start to track those. Find some tools that'll do reporting, throw it in a dashboard, make them a priority, find ways to enforce them in the development process. If you put those things in place, they provide this sort of protection, this extra layer of support, to the developer. So it's not just one developer going to their manager and saying, hey, this is really bad for security. That's not going to get you very far. But if you can say, look, as an organization, we've identified that we don't want any properties that aren't using HTTPS — we want HTTPS everywhere — and look, we've got this issue here. Or the same with code that's vulnerable to cross-site scripting, whatever it happens to be. That carries a lot more weight if it's already backed and supported by other people within the organization. So a developer can be ever so careful with the code that they write in-house to make sure that it doesn't have any of those vulnerabilities. But it's the third-party JavaScript that they pull in that could really come back to bite them. You've actually done some research into this space with your company, Snyk. The 2017 State of Open Source Security report says that of 433,000 websites tested, 77% have at least one JavaScript vulnerability, which is crazy, right? Like, 77%. It's a scary number, yeah. So what can developers actually do to mitigate that? Well, OK, so first off, it's worth noting that these are JavaScript vulnerabilities, specifically JavaScript vulnerabilities in known libraries, right? These are vulnerabilities that we know about, that we've documented, and the library is something that's being shared, typically through npm or something like that. So there could be many more vulnerabilities floating out there that we just haven't discovered yet. We're identifying new vulnerabilities every single day, literally.
That's frightening. Yeah, it's very scary, and this doesn't tell you about the vulnerabilities that might be lurking in the stuff you've produced yourself. Yeah, it comes out of the fact that open source development is so fantastic, right? I think it's awesome that we have this vibrant ecosystem, this community of people who are willing to create a library that solves a problem and then put it out there so other people can use it. There's a reason why this is massively popular within organizations, because it makes so much sense to tap into the collective intelligence of thousands of developers rather than trying to build everything in-house and reinvent the wheel every single time you need some piece of functionality. But yeah, we don't necessarily know who those people are. Let's assume that, on average, maybe five to 10 people contribute to a single JavaScript library. I think that's a fairly conservative number. For the smaller stuff, you're probably talking a couple. But on some of the bigger projects like React or jQuery, you have tons of contributors. That means that if you're pulling in 10 different libraries that you're using in your application, you're trusting 100 different developers: 10 developers per library, 100 developers where you're saying, I'm trusting what they know about security. I'm trusting that they've taken the time to watch for security issues in the code that they're producing, that they're not just putting it out there. I'm trusting their security know-how. That's a really long chain of trust. And so there's a lot of risk inherent in that. So we have to be really careful and really critical when we evaluate what libraries we're using, to make sure that there aren't these security issues lurking in them. So what is the threat level here? Should we be scared? What is the severity of the 77%? You should be terrified. No, I mean, it depends. Are you almost happy?
So 77% is certainly a lot larger than I expected. When I first was digging into this, I was expecting that we were going to see about 30%. So certainly the number itself is scary, right? That's a ton of sites with at least one chink in the armor. However, the severity of those vulnerabilities does vary. You have things that range from low severity vulnerabilities, where there's not much that's going to happen — something can go wrong, but it's not necessarily an ultimate disaster — to high severity vulnerabilities: if they're able to do some sort of injection or command injection and get arbitrary commands to run on your site or application, that's obviously a really big deal. So it varies in terms of severity. And the other thing to keep in mind is that we don't know how many of these vulnerabilities are exposed. We can test and we can see that these vulnerabilities exist. We know that a vulnerable library is being used, but some of these vulnerabilities apply to a specific method. For example, there's a jQuery vulnerability that applies only if you're making an Ajax request for JSON data. If you're not doing something like that, you're okay. So it really comes down to an individual assessment. It depends on the organization. How are they using these libraries? What's at stake if something does go wrong? But again, that requires that risk assessment and that critical thinking. If you're pulling in a library and you see it's got a known vulnerability, how likely is it that that vulnerability is going to be used against you, and what's the worst that's going to happen if somebody does use it? And it's also worth keeping in mind that just because you're not vulnerable today — because you're not using that method today — it's still there. It's a weakness in the framework or the library. So two months down the road, a developer who's not aware that this is an issue comes in and uses this method, and suddenly you're exposed again.
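The jQuery example Tim describes hinges on how a remote response body gets interpreted. This isn't jQuery's actual code, just a plain-JavaScript sketch of the underlying risk: parsing remote content as data is safe, while executing it as script is not.

```javascript
// Imagine this string arrives over the network, and an attacker
// controls (or can tamper with) the server that produced it.
const response = '{"user": "alice"}';

// Safe: JSON.parse treats the body purely as data. If it isn't
// valid JSON, it throws rather than running anything.
const data = JSON.parse(response);
console.log(data.user); // "alice"

// Dangerous: executing the body as script runs whatever the server
// sent, with your page's privileges. A malicious response like
// "stealCookies()" would simply run. Older jQuery versions could
// end up on this path for cross-domain requests when the response
// type was inferred rather than pinned to JSON.
// eval(response); // never do this with remote content
```

The practical takeaway is the same one Tim draws: whether the vulnerable code path is reachable depends on how your application calls the library, not just on which version is on the page.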
Developers don't want to have to sit there and second-guess themselves every single time they're pulling in one of these libraries and using a method. That's not what should happen. So it's very dangerous to keep the weakness around. So I guess, going back to how scared should we be: we should be uncomfortable with it, for sure, but I don't know if we should be running-out-of-the-building-because-it's-on-fire scared. I think it's definitely a fixable problem. So let's step into the methodology for a second. How do we arrive at this 77%? Sure. So Snyk maintains a database of known vulnerabilities across a bunch of different ecosystems. For JavaScript, it's a lot of npm libraries. So what we're able to do is take a snapshot of that database, specifically for client-side libraries and frameworks. And then we compare that with — there's a fantastic Chrome extension, the JavaScript library detector. Actually, the detection part of that script has been put out on npm for people to pull into their own applications, and it goes through, I think last I looked, somewhere around 60 different libraries, where it'll detect the presence of that library and the version of it programmatically. Which is great, because you're not relying on URLs or anything that's going to have all sorts of mishaps. You're actually programmatically detecting whether it exists. So using that, we can run this inside of, in this case, Lighthouse. And Lighthouse, when it opens the page, will run that JavaScript detection, which gives it a report of all the different libraries that it finds. And then those libraries are checked against the Snyk database to see if we know there's a vulnerability for that specific version of that library. And that's how we end up with the final vulnerability count. So in terms of the size of the test, that's thanks to HTTP Archive.
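The pipeline Tim outlines — detect each library and its version on a page, then look those up in a vulnerability database — can be sketched in a few lines. The mini-database, the version cutoff, and the advisory text below are all made up for illustration; the real Snyk database and the library detector's output are far richer.

```javascript
// Hypothetical mini-database: library name -> vulnerable version
// ranges. (Illustrative only; not real advisory data.)
const vulnDb = {
  jquery: [{ below: [3, 0, 0], advisory: "example XSS advisory" }],
};

// Compare two [major, minor, patch] triples.
function lessThan(a, b) {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] < b[i];
  }
  return false; // equal versions are not "less than"
}

// Given the libraries detected on a page, flag any whose version
// falls inside a known-vulnerable range.
function audit(detected) {
  const findings = [];
  for (const { name, version } of detected) {
    const v = version.split(".").map(Number);
    for (const vuln of vulnDb[name] || []) {
      if (lessThan(v, vuln.below)) {
        findings.push({ name, version, advisory: vuln.advisory });
      }
    }
  }
  return findings;
}

console.log(audit([{ name: "jquery", version: "1.12.4" }])); // one finding
console.log(audit([{ name: "jquery", version: "3.4.1" }])); // none
```

The hard parts in practice are the ones this sketch skips: reliably detecting which library and version a page is actually running, and keeping the database current as new advisories land every day.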
So we can test hundreds of thousands of different sites and quickly see what libraries they're using and whether they're vulnerable or not. And one of the nice things about HTTP Archive is that it's continuously running, semi-monthly. So we can see how that 77% changes over time. And actually, looking at some recent data, it's taken kind of a dip by 2%. It went from 77 to 75 in December, and back up to 77 now. So what might be some of the reasons for those fluctuations? I mean, there's some variability, right? Fluctuating down is going to be a little less common, I think. That's going to come down to something going wrong with the HTTP Archive runs, or maybe something changing inside the JavaScript detection script that causes a library not to be reported correctly for a little while, so it misses some of those. So it's going to fluctuate a little bit. The 1 or 2%, I guess, is not as big a deal. Overall, I think it's probably going to trend up, unfortunately. It just depends. It's going to be a race between the developers who are addressing the problem and how many more vulnerabilities we find. We've added a couple of vulnerabilities since this first shipped that make the database of known vulnerabilities that Lighthouse is checking against bigger. So we're catching more than before. And the library detector script is also getting more accurate. For example, when it shipped initially, there was an issue with React being detected, and now that's been fixed. So the percentage of vulnerable React apps has probably gone up a little bit. So if anything, I would expect it to fluctuate a little higher. But yeah, you'll see some dips, I guess, depending on how things go. So Tim, thank you so much for being here. I've learned a lot, and I'm frankly scared by some of the things you've told me. I need to go and fix my own website now. It's always a pleasure to terrify people. So thank you.
So if you have a question for Tim, or any pro tips on how you secure your own website, leave a comment below. And thank you for watching The State of the Web. We'll see you next time.