Hello and welcome to 10,000 Dependencies Under the Sea: Exploring and Securing Open Source Dependencies. My name is Greg Horton. I'm a product security engineer at Slack. And I'm Ryan Slama, an associate software engineer on the Product Security Foundations team. At Slack, AppSec is product security. Our product security organization is split into Classics and Foundations. Classics was the original team, and they focus on most of the traditional AppSec responsibilities, like reviewing new features, penetration testing, and running a healthy bug bounty program. I'm on the Foundations team, which focuses on reducing risk through automated tooling and creating secure-by-default libraries and patterns. Both teams work together to ensure the security of Slack, the product, using a multifaceted approach.

For some context, what is Slack? Slack is a channel-based messaging platform. With Slack, people can work together more effectively, connect all their tools and services, and find the information they need to do their best work, all within a secure, enterprise-grade environment.

Let's begin by talking a bit about the Slack stack. Users expect Slack to work everywhere. To that end, we use and are a partial maintainer of the Electron project for a consistent experience across devices. In practice, this means our front-end web code also runs in all of our desktop clients. Our back-end is primarily in Hacklang, Facebook's fork of PHP with strong typing and other enhancements. We also have some services written in Go, like our caching layer. Finally, our mobile apps are primarily in Swift and Kotlin, but we won't be talking about them much today.

Today, we'll be talking about an OWASP Top Ten issue: using components with known vulnerabilities. Specifically, we'll be focusing on vulnerability management for third-party dependencies. Our story today begins with an intern project.
Matt Juanzic and I, who are now full-time engineers at Slack, were interning on the Product Security Foundations team last summer. When we got there, we were given an open-ended project: understanding and limiting our dependency risk. Today, I'll walk you through our journey building a tool, and Greg will walk you through how we implemented the tool and built a process around it to actually limit the risk at Slack.

Modern code bases often require tons of third-party code. At Slack, we have one main repository that contains our entire front-end and most of our back-end. Our main repository currently requires over 6,500 packages. A year ago, we had half that. This trend would be concerning if we didn't have systems in place to limit risk. All of our first-party code has to be reviewed, but what about random stuff you find on GitHub? That's exempt? Clearly, some process must be in place to manage risk.

It's important to note that we have a mature process for adding packages, and the count still doubled in a year. For PRs that add packages, developers have to explain why the package is needed. Packages must be actively maintained and save meaningful engineering effort over just building the functionality ourselves. Additionally, every package is assigned a team or a directly responsible individual to update and maintain it. This is especially important for security updates and fixes.

So how did our package count double? In a word: npm. We only directly require about 350 packages in our package.json. However, when you resolve the nested dependencies, our dependency tree expands to almost 6,500 unique versions of packages. Running this much third-party code can be a risk. The downside of using common software is that we often find out about vulnerabilities at the same time as everyone else. Let's take a look at some examples of related issues. One of the most notable was Equifax.
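To make the direct-versus-transitive gap concrete, here's a minimal sketch, not Slack's actual tooling, of how you might count unique package versions from an npm `package-lock.json`. In the lockfile v2/v3 format, the top-level `packages` map is keyed by install path, so nested copies resolved to different versions show up as separate entries:

```python
def unique_package_versions(lockfile: dict) -> set:
    """Collect unique (name, version) pairs from an npm lockfile (v2/v3).

    Keys in the "packages" map are install paths, e.g. "node_modules/foo"
    or "node_modules/foo/node_modules/bar" for a nested duplicate.
    """
    seen = set()
    for path, meta in lockfile.get("packages", {}).items():
        if not path:  # the empty key "" is the root project itself
            continue
        # The package name is everything after the last "node_modules/"
        name = path.rsplit("node_modules/", 1)[-1]
        version = meta.get("version")
        if version:
            seen.add((name, version))
    return seen
```

Running this over a real lockfile is how the "350 direct, 6,500 resolved" kind of gap becomes visible.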
Equifax got hacked because they were running an old version of Apache Struts that had known vulnerabilities. This hack leaked 143 million Social Security numbers. Preventing it could have been as simple as upgrading their version of Apache Struts to one without publicly known vulnerabilities.

Vulnerabilities are also discovered in old versions of popular npm packages. For example, both of these advisories were issued this year. Next.js and AngularJS are respected, well-maintained packages, but running them in your production code base without adequately maintaining them is a recipe for disaster, because new vulnerabilities can be discovered at any time, no matter how many years you've been running the same code.

npm has also had issues with malicious code in popular packages. For example, event-stream, which was a popular npm library, required a dependency that had been compromised, and that compromised dependency included a targeted Bitcoin wallet stealer. That malicious code was downloaded over 8 million times. eslint-scope is an even more popular npm package used for linting JavaScript code, and a compromised version was uploaded that exfiltrated .npmrc files, which contain package publishing credentials. Those credentials could allow an attacker to push a new version of a package as a semver patch with malicious code, and because of the way npm semver works, installations of that package would automatically upgrade to the malicious code.

And here are some figures that might be slightly biased: Snyk found an 88% increase in library vulnerabilities over the past two years, and 78% of the vulnerabilities they found were in indirect dependencies. Just take a look at our dependency tree from earlier to understand why. When we only require about 350 packages directly but end up with almost 6,500, there's a lot more code beneath the surface than on top of it. And we're not the only ones who have this problem.
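The auto-upgrade behavior described here comes from npm's default caret ranges: `^1.2.3` in a package.json accepts any later 1.x version, so a malicious 1.2.4 release gets picked up on the next install. Here's a simplified sketch of the caret rule (it ignores prerelease and build tags, and assumes three-part versions):

```python
def satisfies_caret(version: str, base: str) -> bool:
    """Simplified npm caret-range check: ^base accepts any version >= base
    that keeps the leftmost non-zero component of base (and everything to
    its left) unchanged. Prerelease/build tags are ignored for brevity.
    """
    v = tuple(int(x) for x in version.split("."))
    b = tuple(int(x) for x in base.split("."))
    if v < b:
        return False
    for i, part in enumerate(b):
        if part != 0:
            # Components left of i must stay zero; component i must match.
            return v[:i] == b[:i] and v[i] == part
    return v == b
```

So with `"^1.2.3"` in a manifest, `satisfies_caret("1.2.4", "1.2.3")` is true, which is exactly why a compromised patch release spreads so quickly.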
Our customers know about this problem, and they ask us about it. We need a good program in place so we can show them that we take care of their data and limit risk. So what do we do about this problem? When we started, we defined three goals for a good solution. First, we want to detect vulnerabilities as soon as they're publicly disclosed, because we want to be on top of fixes as quickly as possible. Second, we want to track any vulnerabilities or weaknesses in our code base; we want insight into where we have risk so we can better fix it. Finally, we want to alert when we have a problem. No developer should ever have to think, "Oh, I should scan my repo before I deploy to production." We want alerting on vulnerabilities to just happen automatically.

The next question we asked was: can we use an off-the-shelf tool? Our requirements ended up making that surprisingly difficult. First of all, we use Hacklang, which is not a popular language outside Facebook, so it was difficult to find vendors that supported it. Second, we use GitHub Enterprise rather than GitHub Cloud, which rules out Dependabot, because Dependabot still isn't available for GitHub Enterprise, and it was even further from being available a year ago when we began this journey. Next, we needed a tool that scans the entire dependency tree. Knowing about vulnerabilities in packages we require directly isn't enough; as we saw earlier, there are so many more packages beneath the surface, and we need to make sure they're a stable foundation we can build our app on. Also, at Slack, we're heavy users of Slack as a product, and all of our alerts go into Slack, so we need a tool that supports routing alerts for different code bases or packages to different teams. And finally, you might be saying, "Oh, we're a vendor and we have a tool that does all that."
But we're still a relatively small company, and we need something that's not going to cost us millions of dollars over the next few years. So we built Ossify. Our solutions to the three goals mentioned earlier were: first, detection. We run daily scans of our code bases to figure out what packages we're requiring, and we upload those packages to the Sonatype OSS Index to see if any new vulnerabilities have been reported for them. Second, we built a dashboard to track the status of repositories, as well as remediation efforts for individual findings. And finally, we built robust Slack alerting, which will be covered more later.

Ossify supports three package ecosystems. The first, of course, is Hacklang, because that's our back-end and that's where some of the most dangerous possibilities are. We added custom metadata to track Composer packages upstream, because we had to manually fork and vendor some of these packages to add strong types and other Hacklang features. Second, we added npm, because npm accounts for the vast majority of our packages, and we found by far the most findings coming from npm packages. And finally, we added Go support, because a number of our high-value services, like our caching layer, are written in Go.

Here is what the Ossify dashboard looks like. This is an example repository created for this demo, just to show you what it looks like when you scan a repository. We support scanning multiple branches, which could potentially be used for future CI integration. Here we have an example finding page. This is for node-forge, which was being pulled in by our sample repo's package.json. node-forge has a weakness, but we don't actually require node-forge directly. Instead, we require a Google auth library, all the way on the left.
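The Sonatype OSS Index exposes a public REST endpoint that takes Package URL coordinates and returns known-vulnerability reports. Here's a rough sketch of the kind of query a scanner like this might make; this is not Ossify's actual code, and the 128-coordinate batch limit is the one documented for the OSS Index API:

```python
import json
import urllib.request

OSS_INDEX_URL = "https://ossindex.sonatype.org/api/v3/component-report"
BATCH = 128  # OSS Index accepts at most 128 coordinates per request

def npm_coordinates(packages):
    """Turn (name, version) pairs into Package URL coordinates."""
    return [f"pkg:npm/{name}@{version}" for name, version in packages]

def batched(coords, size=BATCH):
    """Split coordinates into request-sized chunks."""
    return [coords[i : i + size] for i in range(0, len(coords), size)]

def fetch_reports(coords):
    """POST each batch to OSS Index and collect the vulnerability reports."""
    reports = []
    for chunk in batched(coords):
        req = urllib.request.Request(
            OSS_INDEX_URL,
            data=json.dumps({"coordinates": chunk}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            reports.extend(json.load(resp))
    return reports
```

A daily cron job that feeds the resolved dependency list into `fetch_reports` and diffs the results against yesterday's is essentially the detection loop described above.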
We built out a dependency graph tool because we want our developers to understand where the vulnerability is coming from and which packages need to be upgraded or removed to fix the issue. Here's a more complicated dependency graph. This one is from a dev dependency. The weakness was on the right, in is-url. However, if you trace back all the way to the left, it was actually some gulp plugins we were using as part of a build process that were pulling in the weak version of is-url. And finally, like any good security tool, it comes with dark mode. Now I'm going to hand things off to Greg to cover everything that happened after the original development of the tool.

Thanks, Ryan, for that great overview of the tool. Now that we had this fully-featured tool that did everything we wanted, it was time to integrate it into our wider processes here at Slack. This would be easy, right? We have an app that scans our repositories, looks at our third-party dependencies, sees if there are vulnerabilities, and then lets us know about them. Our first workflow for this tool was that Ossify would scan daily and then post any currently known vulnerabilities to a Slack channel. Using the power of Slack, we could make notifications that were actionable, meaning they gave the information needed for quick remediation; unobtrusive, so we could snooze or ignore notifications that weren't relevant to us; and configurable, so we could notify individual users and specific channels about these dependencies.

Here's an example of what those looked like. As you can see, this dependency detection is the app. It scanned a repository, found some vulnerabilities in some third-party libraries, and told us about them. And as you can see, you could set them to ignore, snooze them, et cetera. But the problem was that this channel was way too noisy.
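The graph tracing described above boils down to inverting the dependency edges and walking from the vulnerable package back to the roots, the packages the app requires directly. A small illustrative sketch; the package names, including the `gtoken` link in the middle, are hypothetical here:

```python
from collections import deque

def root_requirers(graph, vulnerable):
    """Trace a vulnerable package back through the dependency graph to
    the directly-required packages that ultimately pull it in.

    `graph` maps each package name to the list of packages it requires.
    """
    # Invert the edges so we can ask "who requires this package?"
    required_by = {}
    for pkg, deps in graph.items():
        for dep in deps:
            required_by.setdefault(dep, []).append(pkg)

    roots, seen, queue = set(), {vulnerable}, deque([vulnerable])
    while queue:
        pkg = queue.popleft()
        parents = required_by.get(pkg, [])
        if not parents:
            roots.add(pkg)  # nothing requires it, so the app does directly
        for parent in parents:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return roots
```

For the node-forge example, this answers the developer's real question: "which of *my* dependencies do I upgrade or remove to make this finding go away?"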
Every single day, it was posting all the vulnerabilities that were found. They hadn't been remediated yet, so it would repeat a lot of the ones we'd seen the day before. Maybe we didn't want to ignore them yet because they were still relevant, or we were still trying to find somebody to fix them. But it was a very long list, and we couldn't really work it down in this format. Also, blindly throwing findings into a channel made them everybody's problem, which in practice made them nobody's problem. Nobody took the initiative to see which vulnerabilities were actually a threat to us, and nobody took responsibility for fixing them. So it was pretty ineffective at solving our main problem at first, and we had to go back to the drawing board.

We had to admit that while Ossify was an effective tool, it was not only reporting vulnerabilities but also acting as an internal ticketing system. For something we wanted to maintain long term, we wanted Ossify to do one thing well: find vulnerabilities and report them. To figure out the best way to do that at Slack, we had some prior art we could work from, and that answer was Jira. We have a Jira system with security tickets that can be actioned on by developers. So our solution was to put findings in Jira. At Slack, we already have a method for triaging security tickets that come from multiple sources, like bug bounty reports or internal findings. Our SLAs, or service level agreements, are 180 days for low-severity tickets, 90 days for mediums, 30 days for highs, and seven days for critical findings. Pushing these vulnerabilities into Jira tickets puts them into processes our developers already know. It doesn't add any more friction, and it puts them into an established workflow that also ties to company-wide objectives.
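The SLA scheme described here maps directly to a due date on each ticket. A trivial sketch using the numbers from the talk:

```python
from datetime import date, timedelta

# Remediation SLAs by severity, as described in the talk.
SLA_DAYS = {"low": 180, "medium": 90, "high": 30, "critical": 7}

def due_date(severity: str, filed: date) -> date:
    """Date by which a finding of the given severity must be remediated."""
    return filed + timedelta(days=SLA_DAYS[severity.lower()])
```

Stamping this date onto the Jira ticket at filing time is what turns "please upgrade this library" into a trackable commitment.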
So upgrading these libraries can't just be another thing that developers ignore; they have to do it to meet their OKRs. Now we had a new flow for dealing with and finding these vulnerable libraries: Ossify would find the vulnerability and file a ticket right in Jira. If multiple vulnerabilities were caused by the same package, we would roll those up into the same ticket. Say you have a couple of highs and a few mediums that would all be remediated by upgrading a single package to a certain version: that would be one ticket. We weren't making multiple tickets for multiple vulnerabilities per se; just upgrade this library, and these are all solved.

Now we can talk about our triage process a little bit. The first step is that the ticket gets filed and the product security engineer on call triages it. Our triage process is just enough for us to find out how serious the vulnerability is in our systems. We would read the proof of concept from the Sonatype database and then check our source code to see if we are in fact using the vulnerable functions, because if we weren't, it wouldn't be a critical vulnerability for us; it would be a low, or maybe even a won't-fix. If we are using them, we might try to reproduce the attack, or if they're used in multiple places, test just enough to determine a severity for fixing. If something is used in hundreds of places, we can't test every single one, so we would recommend it be a medium or a high depending on what we found. We also review the fix in the upgraded version, just to see if there's anything more we need to consider when implementing the update. All of this is to either confirm the severity based on the CVSS score or adjust it as needed. From there, we would find the responsible team.
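The roll-up step described above, one ticket per package carrying the worst finding's severity, might look something like this sketch (field names are illustrative, not Ossify's actual schema):

```python
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def roll_up(findings):
    """Group vulnerability findings into one ticket per package, since a
    single upgrade usually fixes all of them; the ticket carries the
    severity of the worst finding.

    Each finding is a dict like {"package": "lodash", "severity": "high"}.
    """
    tickets = {}
    for finding in findings:
        ticket = tickets.setdefault(
            finding["package"],
            {"package": finding["package"], "severity": "low", "findings": []},
        )
        ticket["findings"].append(finding)
        if (SEVERITY_ORDER.index(finding["severity"])
                > SEVERITY_ORDER.index(ticket["severity"])):
            ticket["severity"] = finding["severity"]
    return list(tickets.values())
```

The ticket's severity then feeds the SLA clock, so a package with several mediums and one high gets the 30-day treatment rather than four separate tickets.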
So after determining whether or not we could exploit it, we need to find the responsible team to fix it. That team is then responsible for remediation within the SLA time we've already determined based on the severity. Lastly, once the package is updated, Ossify automatically sees that the vulnerable version is no longer used and removes it from our list. And with that, the vulnerability is remediated.

Now that we have a working process that fits into what we're already doing at Slack, there's still a lot of future work to be done. Right now, we're working on getting to a clean state for vulnerable packages: working down our backlog to determine whether the packages currently being flagged are vulnerable to us and getting them fixed. Next, this process, as you can tell, still involves a lot of manual work. There's a lot of manual work for the product security engineer to find out who owns a vulnerable dependency and to see if it's exploitable. We would like to figure out how to determine that automatically instead of having it be a manual process, and that's going to involve looking into how we can track when new libraries are added. And thirdly, we'd love to integrate this process into our CI/CD pipeline, so that vulnerable packages aren't added to our code base when they're committed. On commit, it would run the scanner, look for any open vulnerabilities, and then either block the change or let it through if there aren't any.

Thank you so much. Special thanks to Matt, who did Ossify development for us, and Nikki and Oliver for a previous version of this talk. If you have any questions, we'll be in the Discord chat to answer them. Thank you.