We can go ahead and get started. Thanks, everyone, for coming to this session. It's going to be on the OpenSSF's SLSA and the NIST SSDF. These are two emerging software supply chain security best-practices frameworks, and we'll be using them to construct a roadmap. Before we get into it, my name's Tony Loehr. I work as a developer advocate and evangelist for Cycode. I previously worked at Intuit as a software engineer on several teams, and worked at a bio lab before that. So, as I briefly mentioned, our agenda today will be introducing certain supply chain attacks, discussing the statistics around those, and explaining why these frameworks were created in the first place. Then we'll get into the specifics of the NIST SSDF and OpenSSF's SLSA. We'll compare them, figure out what gaps still emerge, and then we'll be able to construct a pretty comprehensive security framework. I forgot to take "demo" off of this slide; there is no demo, because this is a vendor-neutral talk. So, getting started, let's answer the question of why we need new application security frameworks. It's because attackers are shifting priority: rather than going for production apps directly, we're seeing a lot of attackers go through software delivery pipelines and even use developers as potential vectors of attack. Developer credentials are highly sought after by attackers for that reason, because developers typically have escalated privileges, since that reduces friction in the development process. Unfortunately, this has resulted in a landscape where software supply chain attacks are on the rise. Everyone in this room has probably heard of SolarWinds, as well as, most likely, Codecov. XcodeSpy has a soft spot in my heart, if you could say that, because I was actually hit by that attack when I worked as an iOS engineer. Regardless, you can see that these types of attacks are trending upward.
And Gartner actually predicts that by 2025, 45% of organizations worldwide will have experienced attacks on their software supply chains, a three-fold increase from only a few years before. So let's get into the NIST SSDF, which is a framework that was created in response to an executive order surrounding supply chain security, issued particularly in the wake of the Colonial Pipeline attack. This framework was directly inspired by OWASP's SAMM, along with guidance from the White House and the Department of Defense's general resources. This presidential executive order, to go over it at a high level, coordinated efforts across multiple federal agencies to create a common language set, and that's really the key to this: creating a common language so that vendors, customers, and consumers of these frameworks can actually communicate effectively. Software supply chain security is a particular focus because commercial software often lacks transparency, and there is insufficient focus on the ability of software to resist attack. When we talk about visibility, we're really talking about the shadow-dev and shadow-IT problems that tend to exist. Very often, and I've been guilty of this myself, developers will use a library to accomplish a particular, typically mundane task, like data calls or perhaps math; attackers realize that this is the case and use it as a vector. Going over the specific tasks outlined by the NIST framework: pretty much every task has been completed as of now, and NIST has released version 1.1 of the SSDF. This particular push also helped organizations define what critical software was within their pipelines. Software supply chain security is another key aspect of this. NIST actually released three total documents related to improving software supply chain security.
This includes the SSDF, which I'll be covering in a bit more detail, but also Special Publication 800-161, which contains quite a bit of information about this; it deals more with enterprise supply chain risk management, so it's not quite our focus today, and I can skip that particular slide. Cybersecurity labeling for consumers, as I described, entails creating basic labeling to describe security to customers in a consistent manner. NIST identifies key elements of this labeling program and has released a complete final version of the criteria, which is available for pretty much anyone to use. There are five key drivers behind this labeling: encourage innovation; be practical and not burdensome, in other words, actually usable; factor in usability as a key condition; build on national and international experiences, in other words, use battle-tested strategies; and allow for diversity of ideas as long as they're deemed useful and effective. So, boiling the SSDF down to five principles, we have protection, confidentiality, integrity, rapid response, and training. I'll let folks take pictures if they want to. Another key deliverable of this particular framework is the risk severity schema. The one I like to point out is level three, because that is a bit more prescriptive for federal agencies: if an agency is deemed to be critical infrastructure, as we said before, then there are mandatory reporting requirements that it has to adhere to with regard to breaches. In addition, the SSDF recommends minimum standards for developer verification of code. This includes static analysis, utilizing threat modeling, and running dynamic analysis, which includes tools such as fuzzers or code-based structural test cases, and something I'm sure everyone here is familiar with: checking included SBOMs. SBOMs are great, but if you're not consuming them, what are they really doing?
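To make the "consuming SBOMs" point concrete, here's a minimal sketch of what actually acting on an SBOM can look like: parsing a CycloneDX-style component list and checking it against a list of known-vulnerable packages. The package names, versions, and the hard-coded vulnerable set are all hypothetical; a real consumer would query a vulnerability database rather than a static set.

```python
import json

# Hypothetical known-vulnerable packages; in practice you'd query a real
# vulnerability database (such as OSV or the NVD) instead of a hard-coded set.
KNOWN_VULNERABLE = {("log4j-core", "2.14.1"), ("minimist", "1.2.5")}

def check_sbom(sbom_json: str) -> list[str]:
    """Flag SBOM components that match the known-vulnerable list.

    Assumes a CycloneDX-style layout:
    {"components": [{"name": ..., "version": ...}, ...]}
    """
    sbom = json.loads(sbom_json)
    findings = []
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in KNOWN_VULNERABLE:
            findings.append(f"{key[0]}@{key[1]} has known vulnerabilities")
    return findings

# A tiny hand-made SBOM standing in for one your build would emit:
sample = json.dumps({"components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.31.0"},
]})
print(check_sbom(sample))  # the log4j-core entry should be flagged
```

Running a check like this on every build, and failing the pipeline on findings, is the difference between generating SBOMs and actually consuming them.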
And the key practices, to boil it down, are: prepare your organization, protect your software, produce well-secured software, and respond to vulnerabilities in a timely, effective manner. "Respond to vulnerabilities" is a bit vague by the SSDF's definition, and that's because this framework is not a rubric per se; it is very much a set of guidelines. Now, let's get into the OpenSSF's SLSA. Sorry if I have "Google SLSA" on the agenda; I was made aware very recently that it no longer goes by that name, so forgive me if I have any typos related to that. Anyways, the OpenSSF's SLSA framework was actually introduced by Google several years ago. They've been consuming it internally; I believe it grew out of Binary Authorization for Borg, or something like that. Regardless, it's designed to help protect the integrity of what flows through build systems, and I have a nice threat model in here that I can show momentarily. One of the most notable attributes of OpenSSF SLSA's levels is the fact that there are four of them, and unlike several other cybersecurity frameworks, namely the SSDF, the eventual goal of every company should be to hit level four, because that is the highest level of security. SLSA level zero is basically just a tongue-in-cheek way of saying there are no guarantees. SLSA level one means that the build process must be fully scripted and generate provenance. Not particularly difficult; most organizations hit SLSA level one without trying to. SLSA level two requires using version control and a hosted build service that generates authenticated provenance. This, though not particularly challenging to implement, does entail some more authorizations. Level three is where things start to get a bit challenging.
It's where the source and build platforms meet specific standards to guarantee the auditability of the source and the integrity of the provenance, respectively. What that basically means is that the build, the logs, and the source have to be retained: ideally indefinitely, but 18 months is the minimum that's often prescribed. SLSA level four is the most difficult to achieve, but it also enforces the highest rigor of standards. It involves having a two-person review of all changes and a hermetic, reproducible build process. Technically, reproducibility isn't a requirement, but there needs to be a good reason for the build not to be reproducible if that's something the organization doesn't hit. Requiring a two-person review, though, is very much industry standard, and there's virtually no reason why that can't be achieved. This is the threat model that I was referring to. I particularly like it because it shows just how many vectors of attack exist, particularly with modern software, and something I would like to point out is that each dependency has its own pipeline. I'm sure anyone who was involved in resolving the Log4j incident is aware that sometimes it's not your dependencies; it can be your dependencies' dependencies that introduce the vulnerabilities. There are five main categories of SLSA requirements: source requirements, build requirements, provenance generation requirements, provenance content requirements, and common requirements. To get into that a little bit deeper: I actually wrote an article on how to use our platform to achieve the source requirements, which I highly recommend. The source requirements just entail having your source version-controlled, which is pretty easy to do if you use GitHub, GitLab, or any other source control system; I don't know many developers who don't. But this history needs to be verified as well, and it needs to be strongly authenticated.
This entails enforcing MFA or two-factor authentication for your developers, as well as retaining this information indefinitely. As I said before, it's not 100% necessary at level three to retain it indefinitely, but to hit SLSA level four, this information does need to be available regardless of the amount of time that's passed. And yes, having changes be two-person reviewed, specifically by two trusted parties, helps prevent any sort of malicious developer. It helps prevent malicious commits; it can even be used to identify suspicious code before it's committed in the first place. Moving on to the build requirements: essentially, it all goes back to the idea that if you want to improve the security of your build, you have to make it less likely to be tampered with. Having a build that takes variables that can change is just another potential vector of attack. OpenSSF's SLSA really aspires to protect the build above most other aspects. Getting into provenance: this is pretty closely related to creating an SBOM, but beyond just listing the software, it also entails having provenance of the build itself, and in order to achieve the highest possible level of SLSA, you have to have all your dependencies included within this provenance. Going a little deeper into the provenance content, there's a laundry list of things that should be included. I don't feel I should go too deeply into all of them for the sake of time, but I'm going to break my own rule: organizations should include the artifacts, the builders, and the source in pretty much all of their provenance content. It's one of the few ways to ensure that tampering hasn't occurred. I would like to call out, though, that including metadata is not a requirement for SLSA either. That being said, the reproducibility information and transitive dependencies should be included, which tells you a bit more than the metadata alone would.
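To make the provenance discussion concrete, here's a rough sketch of the kinds of fields that go into SLSA-style provenance. The field names loosely follow the in-toto/SLSA provenance format, but treat the exact schema, URIs, and digests here as hypothetical placeholders rather than a spec reference.

```python
import json

# A rough sketch of SLSA-style provenance content. Every URI, digest, and
# name below is a made-up placeholder; the shape is the point, not the values.
provenance = {
    "subject": [  # the artifact the provenance describes
        {"name": "myapp.tar.gz", "digest": {"sha256": "abc123..."}}
    ],
    "builder": {"id": "https://ci.example.com/builders/linux"},  # hypothetical builder
    "invocation": {
        "configSource": {  # where the build definition came from
            "uri": "git+https://github.com/example/myapp",
            "digest": {"sha1": "def456..."},
        }
    },
    "materials": [  # dependencies that went into the build, transitively
        {"uri": "pkg:pypi/requests@2.31.0"},
    ],
    "metadata": {"reproducible": True},  # metadata itself is optional in SLSA
}

print(json.dumps(provenance, indent=2))
```

The artifacts (subject), the builder, and the source (invocation's config source) are the pieces called out above; the materials list is what carries the dependency story.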
Here's an example of provenance content as I see it. I wonder where my notes went; oh well. And lastly are SLSA's common requirements. These include security requirements, which entail having some sort of baseline security standard to prevent compromise. Often this is described as a baseline framework or a contingency plan, and contingency plans are something SLSA doesn't cover; it's basically a means of preventing compromise in the first place. Access is pretty intuitive as well: it involves physically protecting your code from actual attackers who could potentially tamper with it in person. Superusers, in addition, really just refers to the individuals who have admin privileges over your code. You don't want to have no superusers, because frankly, there will always be break-glass scenarios that happen. That means you should very carefully select who those individuals are, and they should not be able to make changes alone. So based on this, what are the key learnings from the SSDF and SLSA? The SSDF focuses more on the "what," whereas SLSA focuses a bit more on the "how." What I mean by that is that the NIST SSDF is more focused on defining minimum requirements for software used within critical infrastructure, particularly federally. It doesn't really prescribe any specific contingency strategies you should employ. Conversely, the OpenSSF focuses a lot more on how: SLSA is a specific model for scoring the supply chain, focused on improving security from the build phase through deployment. It's essentially a rubric. You might notice that the tiers and levels within these frameworks sound relatively similar, and you'd be partially right, because higher tiers and levels do relate to higher rigor of cybersecurity. However, higher tiers within the NIST SSDF represent increasing degrees of rigor and sophistication in the risk management processes.
Higher levels within SLSA, by contrast, represent greater maturity, and each level acts as a milestone toward the eventual goal of achieving SLSA level four. This might sound strange, but achieving the highest tier of the SSDF is not for every organization, because frankly, employing these cybersecurity measures does introduce some friction into your business processes, and of course there's a large overhead to install them in the first place. If you're a financial institution, these checks can't be skipped; PCI requirements and plenty of other compliance checks very much necessitate them. But let's say you have, I don't know, a fun app or something like that. There could be arguments that you don't need to spend your time doing that. SLSA is not like that at all, though; with SLSA, you should aspire to higher levels. One particular best practice that NIST calls out that I want to point out is reviewing for hard-coded secrets. And these hard-coded secrets don't just appear in source code. Granted, they're often unintentionally introduced in source code, but I've seen them in builds, in logs, and in several other places. Registries contain secrets as well, so scanning for these throughout the SDLC is important. In addition, enforcing strong governance of security policy is recommended by both SLSA and NIST, and every common requirement of SLSA goes back to governance: you need to enforce strong access control, you need to control superuser access, you need to enforce a security baseline. It's very much prescriptive of governance, and NIST Special Publication 800-53, which covers security and privacy controls for information systems, is actually referenced specifically by SLSA as a great guideline for governance. Detecting and remediating misconfigurations is also recommended by the NIST SSDF. This includes maintaining a record and inventory of the systems used and the configurations that they have.
This can include hardware, software, firmware, documentation, build pipelines: pretty much everything that touches your code. Enforcing security configuration settings for IT products helps prevent a DevOps person's worst nightmare, which is probably configuration drift. Well, maybe their actual worst nightmare is a surprise audit, but configuration drift is pretty close up there. Now, one of the other best practices I really love is that SLSA recommends reducing code tampering risk by enforcing source requirements, e.g., having a two-person review, which could prevent a pissed-off consultant from pushing malevolent code, or could prevent an attacker from inserting a small amount of compromised code. In addition, enforcing build requirements, such as ensuring hermetic builds and utilizing code signing, helps reduce this code tampering risk. These two frameworks are excellent, but they don't cover everything that should be done to mitigate risk, and personally, when I was reading through them, I noticed some glaring gaps that I think can be improved upon, specifically around identifying suspicious behavior and code leaks. Source code is a software company's intellectual property; I think we can all agree on that. Save open source, of course, because that belongs to everyone. Neither the NIST nor the OpenSSF framework addresses the need for identifying suspicious behavior, though. SLSA does suggest some preventative measures, and the SSDF suggests having a contingency plan in place, which I think both allude to the idea of having anomaly detection, but neither says it explicitly. In addition, enforcing the principle of least privilege, despite not being explicitly called out by either organization, helps protect your build pipelines by preventing a breach of one component from spilling over into another. Preventing lateral movement is probably one of the best things an organization can do, and it's also key to enforcing zero trust.
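A moment ago I mentioned scanning for hard-coded secrets throughout the SDLC, in builds and logs and not just source. A toy sketch of that kind of check might look like the following; the regex patterns and the sample log line are hypothetical, and real scanners add entropy analysis and hundreds of provider-specific patterns.

```python
import re

# Hypothetical detection patterns; real secret scanners use far more of
# them, plus entropy-based heuristics for random-looking strings.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_text(text: str, source: str) -> list[str]:
    """Scan any text artifact (source, build logs, configs) for secrets."""
    findings = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append(f"{source}: possible {label}: {match.group(0)[:12]}...")
    return findings

# Secrets show up in logs and configs, not just source code:
build_log = 'export API_KEY="sk-hypothetical-1234567890"'
print(scan_text(build_log, "build.log"))
```

The important design point is that `scan_text` takes any text, so the same check can run over commits, build output, and registry manifests alike.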
Hardening your configurations, such as having strong version control policies, hardening the security of CI/CD pipelines, maintaining an up-to-date inventory of all your configurations, and putting processes in place for password rotation, can also improve this. Additionally, verifying code integrity through each stage of the SDLC can help protect your code from potential tampering, and centrally managing your policies allows you to enforce them consistently across the SDLC as well. To put it in other terms: if you have a suit of armor and it's full of holes, it probably won't help you too much. In the same way, an organization should have oversight into every single asset it has, because if you don't know what you have, you can't really protect it. Here's kind of an interesting statistic; I've probably said it to a few of you just walking around. About 60% of all breaches that have occurred in the last few years actually utilized an unpatched dependency with a known fix within their attack path, which is really just another way of saying you could prevent 60% of breaches by keeping your stuff updated. Simple as it sounds, it's difficult to do, especially if you're in an organization that has, say, 500 repositories and several thousand developers. And as I said before, anomaly detection is the last point that I think these frameworks miss. We need to be able to identify anomalies in access, particularly access grants. If I'm a UK-based company and suddenly I'm seeing logins from western Russia, chances are I'd want to be aware of that, or at least alert my security team about it. In addition, anomalies in configurations, such as a configuration drifting in one of your deployments, can potentially expose your proprietary code.
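The access-anomaly idea from a moment ago can be sketched very simply: flag logins originating outside an expected set of countries. The allowlist and the sample events are hypothetical, and production systems would layer on geo-IP lookups, per-user behavioral baselines, and alert routing.

```python
# Hypothetical allowlist for a UK-based company; a real system would derive
# expected locations from each user's history rather than hard-coding them.
EXPECTED_COUNTRIES = {"GB", "IE"}

def flag_anomalous_logins(events: list[dict]) -> list[dict]:
    """Return login events originating outside the expected countries."""
    return [e for e in events if e.get("country") not in EXPECTED_COUNTRIES]

# Sample login events as your identity provider might report them:
logins = [
    {"user": "alice", "country": "GB"},
    {"user": "alice", "country": "RU"},  # this one would warrant an alert
]
print(flag_anomalous_logins(logins))
```

Even a crude rule like this catches the obvious cases; the point is that neither framework tells you to build it, so it has to come from your own roadmap.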
One particular example: let's say you have a Kubernetes cluster that is publicly viewable and publicly accessible, but for some reason it originates from a repository that's set to private. Chances are those are conflicting configurations that should be called out and investigated, because best case scenario, it's just a minor oversight. Worst case scenario, though, it could be a full-on code leak, and code leaks are something you want to avoid, because you're basically doing an attacker's reconnaissance work for them, in addition to potentially exposing any hard-coded secrets that might exist. I finished way faster than I meant to; super sorry about that. But I have some additional resources on the particular cybersecurity frameworks I discussed today, as well as several others, and we can hop right into Q&A. You're leaving me hanging. I get it. [Audience question, partially inaudible, about using the SSDF and SLSA to show downstream customers that one's software is secure.] So the question is, how can we use the SSDF and SLSA to demonstrate to customers the fact that we are taking security seriously, if I understand it properly? Well, that's kind of the beautiful thing about it. If you can generate provenance and attestations showing that you are following these cybersecurity frameworks, then chances are that's a great sign. A lot of organizations don't specifically enforce a particular framework; ISO 27001, for example, is a framework that is not required by law per se. You're not going to get hit with penalties like you would if you, say, violated PCI DSS, but it does improve customer confidence, and that is ultimately what adhering to these cybersecurity frameworks goes back to. It is showing your customers that you are taking their security seriously, and in addition, it shows that you're not being negligent. Not all attacks and not all breaches are created equal; I feel like we can generally agree on that.
Take an attack that, say, uses some strange, mathematically-based zero-day that has never been seen before: I feel like you can cut a company a certain amount of slack for that type of attack. However, if they are breached because of a vulnerability in one of their dependencies that had a fix come out two years ago, well, that's just negligence. So, I don't know, hopefully that helped explain it. Yes, no worries. If I understand the question correctly, you're asking how you could show your customers your compliance if your customer has no reason to believe your organization, because of, say, a fox-guarding-the-henhouse situation? I don't have a good answer to that one, unfortunately. I suspect we'll see a good solution to it in the future, but also, using certain security tools, like the ones we offer, is pretty good for being able to generate these attestations. And especially if you have policies written into your security as code, policy as code rather than just a written policy, you can enforce them continuously through pretty much every commit. In that way, you can ensure that compliance is upheld at a certain level pretty continuously. Hopefully that answers the question, and hopefully I didn't miss the core of it. And I saw one more hand over here. The question is, is there any guidance for open source maintainers who wish to utilize these frameworks? The answer is yes, absolutely, for SLSA. The SSDF, I'm not going to say it's particularly dense, but there's definitely quite a bit of information to it, and it's a bit more descriptive versus being prescriptive. SLSA is very prescriptive; as I said, it is a rubric. So I personally view it, for that reason, as being a bit easier to achieve for an open source maintainer, particularly one who's doing it on their own time. Whereas the SSDF is a bit more involved, because it also involves a bit of high-level strategy: okay, what's our risk tolerance?
What's the amount of money we're willing to allocate toward improving security? Whereas SLSA is a bit more straightforward. It sure is, yeah. Any more?