Hello, and welcome to all those joining us on the live stream. So this is me, my name's Matt Jarvis. I am director of developer relations at a cyber security company called Snyk. I'm also the vice chair of OpenUK, which is the industry body for open source in the UK, a CNCF ambassador, and I run a whole bunch of other open source related events.

And a very good afternoon. I'm Andy from ControlPlane. I'm a founder and CEO. We're a cloud native security consultancy based out of London. We work in the US and in APAC with regulated organisations. I'm also very proud to be involved with CNCF's TAG Security as a co-chair, where we help projects to progress through the ranks of the CNCF by providing assurance, secure end user usage and threat modelling advisory to the projects. And with my venerable friend, Mr Jarvis, I'm the CISO at OpenUK, where we work to try and avoid foot guns in UK government policy.

So today, unsurprisingly, we're talking about SBOMs. And there's going to be a rather extreme amount of Yogi Bear memes in this talk. So for those of you who may be younger than a certain age and don't know anything about Yogi Bear, I would encourage you to check it out on YouTube. So to get started, and for those of you who maybe have only just started to hear about SBOMs, let's have a quick primer on what one actually is. We'll start with some definitions. It's an acronym for software bill of materials, which doesn't tell us very much. Then there's the NTIA's SBOM FAQ, which I'm not going to read out because it's incredibly wordy. Andy and I have coined an alternative definition, "just a bunch of JSON", which we think is very catchy. But it's not actually true, because SBOMs can also be in XML, and various other formats besides. So basically, an SBOM is just some structured data that tells us about the relationships between the elements that make up our software.
But we already have structured data sources that do exactly that in our package repositories. RPM, deb and a whole bunch of others all basically work like that. So what makes SBOMs inherently better than any of those existing sources? Well, basically absolutely nothing. Nothing makes them inherently more valuable than any of those things. It's all just software composition analysis. But that's not to say that SBOMs aren't valuable. They are a very important step in improving how we manage supply chain security, by giving us standardised, machine-readable ways of sharing that information. So if I'm a developer, I can use the SBOM to understand the dependency tree of a big piece of software. If I'm in a security team, I have a library of the things that my infrastructure contains that I can analyse when the next big CVE comes along. But for SBOMs to be really effective, we need to consider a lot more than just the basic information that they contain. How we create them, how we manage them, and most importantly how we trust them is really the key to extracting that value. SBOMs are a point-in-time capture of a set of dependency data. Unlike your package repository's database, they represent only the moment at which they were created. So it's critical to think about which moment in time we want to capture for the thing that we create. Logically, it makes sense to create our SBOM at the point that we create our software artefact, i.e. when it was built or when it was packaged. This ensures that the moment in time we're encapsulating actually corresponds to when the software was created, and therefore to the dependency tree it was built with. This probably means creating the SBOM in our build system, through our CI/CD, and considering it an output of our build in exactly the same way that test metadata, the packaging and the executables themselves are captured and logged.
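Treating the SBOM as just another build output is easy to sketch. The following is a hedged, minimal illustration, not any particular tool's implementation: it emits a skeleton CycloneDX-style document for whatever Python packages are installed in the build environment, and real build tooling would record far more (hashes, purls, full dependency relationships).

```python
import json
import uuid
from datetime import datetime, timezone
from importlib import metadata


def build_sbom() -> dict:
    """Emit a minimal CycloneDX-style SBOM for the current environment.

    Illustrative sketch only: field coverage here is far below what
    CI/CD-generated SBOMs would actually contain.
    """
    components = [
        {"type": "library", "name": name, "version": dist.version or "0"}
        for dist in metadata.distributions()
        if (name := dist.metadata["Name"])
    ]
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
        "components": sorted(components, key=lambda c: c["name"]),
    }


if __name__ == "__main__":
    # In a pipeline this would be written alongside the build artefacts
    print(json.dumps(build_sbom(), indent=2))
```

The design point is simply that the SBOM is produced by the same process, at the same moment, as the artefact it describes.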
However, this does somewhat depend on what we expect our SBOM to contain. As it's a point in time, some of the information it contains may have been invalidated by the time it's used or consumed. CVEs could have been released since, and zero days are inherently new, so if there's vulnerability data in there, the consumers of the software may receive an incomplete picture of the package and any dependencies packaged with it. We'll look a bit later at the various ways in which SBOMs and friends can encapsulate this, but perhaps it makes sense to instead create an SBOM when we ingest a piece of software into our systems, so that the point in time captures when we acquired it. That could mean regenerating an SBOM in much the same way as scanning and software composition analysis tools do, in order to understand the software we're going to run in production. Or perhaps our point in time should be the first time we deploy that software to our production systems. All of these approaches have their own merits, but from a logical and ease-of-adoption perspective, it probably makes the most sense to create the SBOM at build time and then scan it to update any vulnerability data at the point of use. Whichever way ends up being the best route for you, the most important questions around SBOMs are really about trust. Trust is fundamental to human and technological interactions, deeply rooted in authority and accepted implicitly and explicitly in our daily lives. In technology, the authenticity of the authors of software and hardware is verified through cryptography, and our trust as a consumer is extended to new producers. It's much like having faith in a restaurateur's complex dish: its ingredients and safe preparation ensure no impurities or unknown elements are added.
In contrast to some other industries where responsibility models exist, software lacks a trusted food standards agency or authority to vet the entire digital supply chain for safety, sanctity and security, which means we need to extend our trust to every producer of software individually. Trust in software bills of materials depends on the producer and their adherence to practices that lead to a more robust and trustworthy software supply chain. Our software food standards authority would check the developers, the CI/CD, the packaging infrastructure, the owner of the signing keys we use to asymmetrically verify their claims, the guarantee that no malicious individuals have access to those keys, and the underwriting of our security guarantees by a third party that guarantees compliance, observing the system and the promises made by the standardising authority. Tech is a long way from the levels of trust shown in human food supply chains at this point in time.

So we're going to come back to some of that complexity that Andy's just alluded to in a second, but let's talk about the format of SBOMs to start with. As with almost all formats, there are parts of the standards which are mandatory and loads and loads of parts that are optional. A lot of the tools that create SBOMs currently only really deal with the mandatory elements, which means that not all SBOMs are created equal. And there's an emerging set of tools for testing the quality of an SBOM. We've got some of them here: SBOM Scorecard from eBay, sbomqs from Interlynk, and other tools from organisations like the NTIA. This is clearly a highly subjective field, because the definition of quality is going to differ from organisation to organisation. But the way all of these tools work is to present back scoring based on different parts of the data contained in the SBOM: things like full package URLs, licence information, stuff like that.
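To see how sparse a spec-compliant document can be, here is a hand-written, minimal CycloneDX-style sketch. The package name and purl are invented for illustration, and note the complete absence of any licence data, which is exactly the kind of omission the quality tools penalise:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-lib",
      "version": "2.3.1",
      "purl": "pkg:npm/example-lib@2.3.1"
    }
  ],
  "dependencies": [
    {
      "ref": "pkg:npm/example-lib@2.3.1",
      "dependsOn": []
    }
  ]
}
```

A real document would normally use explicit `bom-ref` identifiers for the dependency graph; check the current CycloneDX specification before relying on this shape.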
And what's really interesting about this emerging quality testing is that we can look at these tools as reflecting what users think will be important to them in terms of the kinds of information contained in SBOMs. Because of that, we think these kinds of tools are likely to influence the development of the SBOM creation tools themselves. So if we take a quick look at an SBOM: this is an entirely correct one. I've snipped it for brevity; it's a CycloneDX one, and it fully meets the CycloneDX specification. We can see it contains information about libraries, and then information about the dependencies for that particular package. But if we test this SBOM with SBOM Scorecard, the overall score it gives is 77%, and that's based mainly on the fact that there is no licence information contained in it. Now, there isn't a hard requirement in the spec for licence information, but it's clearly seen as important, because all of the quality testing tools score for it. Like I said, this is entirely subjective, because it will really be down to your organisation which things are important to see in an SBOM. But once all of our software starts being shipped with SBOMs, we can imagine wanting to test the quality of the SBOM documents as we ingest that particular piece of software. One way to do this might be to use policies in our CI/CD pipelines and potentially gate on poor quality SBOM documents. Here we can leverage existing policy tools like Open Policy Agent to build these tests into our pipelines. Here's an example of a Rego test which will fail if the SBOM quality score from SBOM Scorecard is less than eight. And if we run this test locally using Conftest on the SBOM we just looked at, we can see it fails to pass this policy.
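The Rego policy itself lives on the slide, but the gating logic is simple enough to sketch in Python. The field name `score` is an assumption for illustration, not necessarily the real output schema of any scorecard tool; the point is the shape of a CI gate that fails below 8 out of 10:

```python
def gate_on_sbom_quality(scorecard_result: dict, minimum: float = 8.0) -> None:
    """Abort the pipeline when an SBOM quality score is below the threshold.

    `scorecard_result` is assumed to look like {"score": 7.7, ...};
    adapt the key to whatever your quality tool actually emits.
    """
    score = scorecard_result.get("score", 0.0)
    if score < minimum:
        raise SystemExit(
            f"SBOM quality gate failed: score {score} < required {minimum}"
        )


# A 7.7/10 score, like the 77% example above, fails an 8/10 gate:
try:
    gate_on_sbom_quality({"score": 7.7})
except SystemExit as err:
    print(err)
```

In a real pipeline you would run the scorer, feed its JSON output into a check like this (or the equivalent Rego policy under Conftest), and let the non-zero exit fail the build.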
So, given that we've got a lot of optional fields, and our tools may not deal with all of those fields, we can imagine that users might want to enrich their SBOMs to add additional relevant data that they care about. Perhaps we want to add that licence information. Perhaps we want richer, more detailed information about the components and their authors, where they've come from, or even things like security vulnerabilities. And again, there's emerging tooling for SBOM enrichment that allows us to fold other interesting data from different data sources into our basic SBOMs. Parlay is an open source tool from Snyk that does exactly that: it enriches SBOMs with more detailed package data, security scoring using the Scorecard system from the OpenSSF, and even vulnerability data from Snyk. So if we go back to this original SBOM example, we can see we've got that package data in there, but it's pretty minimal. It doesn't contain anything about licences, which we know scored it down on the quality tests. One of the sources that parlay supports is Ecosyste.ms. This is a very cool site that contains a ton of open data about packages, all available via an API. So if we take a look at enriching our original SBOM with parlay, we can see the CLI command at the top to create a new SBOM (I've actually split this into two columns because it's a bit more readable for a slide). When we look at the new SBOM, the information about the package is tons richer. We not only have the licence information, we have detailed information about the supplier, we have lots of external references to websites and things like that, and we have a bunch of different properties, including the published date and various tags from Ecosyste.ms. And if we retest this enriched SBOM, we can see that it now scores much more highly on this particular quality test. Another avenue for enrichment is adding OpenSSF Scorecard data to our SBOMs.
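Mechanically, this kind of enrichment is just a merge over the component list. This sketch is emphatically not parlay's actual implementation; the lookup table is a hypothetical stand-in for a live API such as Ecosyste.ms, and it only handles licences:

```python
def enrich_components(sbom: dict, package_data: dict) -> dict:
    """Return a copy of `sbom` with licence info merged into each component.

    `package_data` maps package name -> metadata. In practice this would
    come from a package-data API rather than a local dict.
    """
    enriched = dict(sbom)
    enriched["components"] = []
    for component in sbom.get("components", []):
        extra = package_data.get(component.get("name"), {})
        merged = dict(component)
        if "licenses" not in merged and "licence" in extra:
            # CycloneDX expresses licences as a "licenses" array
            merged["licenses"] = [{"license": {"id": extra["licence"]}}]
        enriched["components"].append(merged)
    return enriched


sbom = {"components": [{"type": "library", "name": "example-lib", "version": "2.3.1"}]}
data = {"example-lib": {"licence": "MIT"}}
print(enrich_components(sbom, data)["components"][0]["licenses"])
```

The same merge pattern extends naturally to suppliers, external references and properties, which is essentially what the enrichment tools do across multiple data sources.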
If you're not familiar with Scorecard, it's a major project funded by the OpenSSF which runs a whole bunch of checks on open source repositories, covering not only security data but community health, licences and a whole bunch of other stuff. These are all machine-readable and, again, accessible via an API. So if we look again at enriching our original SBOM with the OpenSSF Scorecard data, here we can see that parlay has added this external references section with the URL for the scorecard. It's not very readable on this particular slide, but if we navigate to that URL, we can view all of the scorecard scores in JSON. And then this final example is enriching with vulnerability data. In this case it's using Snyk, obviously, but other security tools are available. What's happening here is we're embedding the vulnerability data directly in the SBOM. Andy's going to talk a bit more about this later, but this is one of the forms that a thing called a vulnerability disclosure report can take, where we embed it directly in the SBOM itself, in this vulnerabilities section. There are a couple of different formats that a VDR document can take, and Andy's going to talk about that in a minute. Other than that, you're seeing all this security data. Again, this is snipped for brevity, there's a lot more that goes in here, but you've got all that information about each individual CVE directly in the SBOM. So, just to finish off on enrichment: loads of possibilities for customisation. It's massively subjective, and it's really going to depend on what your organisation considers to be important. You can really go wild here, add multiple sources, and add a lot more depth to that basic SBOM. Over to you, sir.

Thank you very much. So we can enrich the core SBOM itself with this data, which gives us something that consumers can trust for that relevant point in time. Why do we want to do this?
Software is never entirely secure; nothing can be entirely secured in the history of humanity, though perhaps Fort Knox might be a current exception. But software that stands still will die. So we must move our dependencies forward. They will acquire vulnerabilities, as is natural, as we look to move software forward and take advantage of new features. New features bring the risk of those vulnerabilities, but companies that don't ship features will be beaten by less risk-averse organisations. So can we use SBOMs to mitigate this risk for our organisation, and if we do, what are we risking? If we trust an SBOM too much, and the SBOM then results in compromise, it can lead to software implants. This is true for software in general; there's nothing unique about SBOMs. But in order to meta-observe the SBOMs that we're building this new trust relationship with, we build a threat library and then model our behaviours in a way that lets us quantifiably build controls into our pipelines, as we'll look at as we progress. Remote code execution and a host of further threats, reasonably standard, but let's look at exactly what that means. An SBOM potentially contains a dependency with a vulnerability, and so in order to indicate this to the consumer of the SBOM, the producer, the vendor, or a trusted third party can provide one of two documents: a VEX or a VDR. Software bills of materials are valuable tools, but fall short when it comes to distinguishing between exploitable and non-exploitable vulnerabilities in the dependency tree. Early adopters have stepped up to the plate here, proposing new standards and updates to mitigate these threats. Amongst them are VDR, CSAF, VEX and OpenVEX, each making its mark in the realm of vulnerability management. We can think of security scanners as over-eager watchdogs, sometimes barking when there's no one at the door, no intruder at all.
So imagine being alerted to a vulnerability that's already been patched, or is not present, or in fact is not exploitable because it's in an unreachable code path or a function in a library that's never called by the consuming software. This is where these vulnerability disclosure and exploitability formats come into play. We teach our watchdog new tricks, cutting through the noise, by providing a VEX document. That document indicates that a trusted third party, or potentially the vendor, has verified that the software is not vulnerable to the specific CVE that the security scanner knows is in that version. So, for example, when OpenSSL has a vulnerability in a certain type of cryptography that it uses, if an application consumes OpenSSL but doesn't use that cipher, you can provide a VEX to say: we know that we're not impacted by this vulnerability. That means teams can knowingly keep running that nominally vulnerable code in production. This reduces the burden on patchers, security teams and the generally squeezed selection of burnt-out cyber security individuals. So, the triumvirate of vulnerability management: VDR, VEX and the newer OpenVEX. The vulnerability disclosure report is the fundamental resource, providing a list of vulnerabilities published by software vendors. However, it lacks a common reporting standard for programmatic consumption. On the other hand, the Vulnerability Exploitability eXchange format (practically unpronounceable, or VEX) bridges this gap by sharing information about the status of vulnerabilities, and OpenVEX takes this one step further, providing an implementation that is interoperable and embeddable in any SBOM specification. It's a little bit of a fork, looking to move the discussion around VEX forward more quickly, because these things have come from US government bodies and have the slightly turgid pace of progress that you might expect from large committee-driven decisions. So, let's attack the SBOM.
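Before we do, a concrete sketch of what such a statement can look like in the OpenVEX flavour, applied to the OpenSSL cipher example. The product identifier, CVE number and author are invented, and the field shapes follow the OpenVEX spec as we understand it, so verify against the current specification before relying on this:

```json
{
  "@context": "https://openvex.dev/ns",
  "@id": "https://example.com/vex/2023-0001",
  "author": "Example Vendor Security Team",
  "timestamp": "2023-01-01T00:00:00Z",
  "statements": [
    {
      "vulnerability": "CVE-2023-00000",
      "products": ["pkg:generic/example-app@1.0.0"],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path",
      "impact_statement": "The affected OpenSSL cipher is never invoked by this application."
    }
  ]
}
```

The `status` and `justification` values are the machine-readable heart of it: a scanner that understands VEX can suppress the alert for this product and CVE on the strength of that justification, provided it trusts the author.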
It hasn't done anything to us yet. There are two classes of threat that we're looking at here: malicious and accidental. Malicious threats are deliberately introduced by an attacker with the intent to cause harm to the consumer of the SBOM. For example, trivially, an attacker could inject malicious code into the package that the SBOM is describing, or create a fictional release with a tampered SBOM. There are lots of predicates required for both of those, but we'll discuss those in more detail. Accidental threats are, of course, introduced unintentionally due to errors or omissions. For example, an SBOM may be incorrect or outdated, or perhaps contain confidential internal network information. Let's look at how and where these can happen. First, of course, we care about where the SBOM was generated and to what depth. Closed source and compiled software may have components that can't be identified by software composition analysis after they're composed or built, so their SBOMs must be generated at build time, but they can't be validated unless the consumer rebuilds them from source. On the other hand, open source software can be scanned when it's ingested into an organisation, but then it's just software composition analysis with the additional hurdle of having to trust a third party not to tamper with or make mistakes in their SBOM. So we're not replacing software composition analysis, but we can make additional verifications to enhance our trust levels in it. Can we make this more complicated? Yes, indeed we can. So, let's talk about the threats that can occur, mapped to this fictional build life cycle. Local compromises are of course problematic. If someone compromises your local machine, your signing keys, your identity, either SSH or Git keys, everything from that point onward is compromised.
A signed and valid SBOM for an application with malicious implants or vulnerabilities provides a false sense of security, so local developer compromise is of course our first issue, as it would be for any other system, whether there's an SBOM there or not. Then we get into the source and data repository: a compromise of the source repo, where an attacker can gain control over the source to make the producer release a vulnerable version unknowingly. It's the same style of attack as a local compromise, just performed remotely. The predicates, or preconditions, probably include access to the developer's SSH key or their GPG key, or perhaps they've left their device unlocked and the attacker has just gone into the GitHub web UI. Whether we should trust GitHub's web-flow signing key is a great unanswered question. Dependency confusion. This is not an SBOM problem; it's about how package managers resolve the names of specific packages. If I'm using an internal registry and I have a namespace on that internal registry, and then someone goes and registers that same name on a public registry, the resolution of the package manifest, your package.json for example, goes first to the public registry. So it was possible to gain remote code execution into the networks of some of the largest organisations in the world as of early last year, when this new supply chain attack came about. Again, dependency confusion relies on the package manager, but if we have an SBOM behind it, an attacker would then need to forge that in a similar way. Build and verification. Probably the most dangerous threat here is incomplete SBOM generation. Software authors are incentivised not to include their full suite of dependencies, transitive dependencies and so on, a huge call graph, because they're then responsible for the management and patching of the CVEs all the way down that tree.
So a lot of SBOMs will list just the top level of dependencies, and then it's down to the maintainer of that package whether a vulnerability in their package's dependency tree is reported as a CVE in the top-level package or not. This is almost an intractable problem. The opposite side of this, which we'll get to, is alert blindness from massive dependency graphs, where we fail to be able to prioritise the criticality of patching these things. Compromised SBOM generation tooling. This is Reflections on Trusting Trust: if your compiler is compromised, or your fundamental operating system components, your certificate authorities or the kernel, it's similar to local device compromise, but again just another concern to think about in our CI/CD. Code injection: malicious but syntactically correct source code injected into the software's CI/CD. This is the SolarWinds style of attack. Everything fully validates in terms of the source code, and then, just at the point of compilation, an implant is dropped in; it's then signed and appears to be legitimate as it's distributed. It's a difficult problem to solve, but there are startups who are looking at how to trace the provenance of the build and ensure that all the build steps are known, in order to detect these things. The Google team have just introduced a capability model for Golang, whereby the behaviour expected from an application is defined in terms of capabilities, and then if the build suddenly starts making network calls to unknown third parties, we can be sure that we've got some unintended behaviour somewhere in the dependency tree. We have confidential data leakage: perhaps the package URLs refer to internal systems, or the external references refer to things that an attacker could use in the event of a potential intrusion. Version regression: as a defender, say I've allow-listed a specific version because we needed to get to production, but then we rolled forward.
If an attacker can regress down to that version and get the SBOM signed in the package building process again, they've bypassed a legitimate process. Weak hash algorithms: attacks like SHAttered will allow us to pad individual files with junk off the end and produce a hash collision, and these attacks are getting much cheaper; SHA-1 is effectively broken for this mechanism. We can mitigate this by recording multiple different hash types, SHA-1 through SHA-512, in our SBOM and then validating one or all of them, or just by not using broken hash algorithms. Provenance manipulation: altering or signing tampered provenance data to make malicious changes appear legitimate. Again, the precondition is that the signing keys and the build infrastructure are compromised, which is significant; the SBOM may not be your biggest problem if you have a hostile party in your build infrastructure. Outdated SBOM content: you need to ship the correct SBOM, of course, for the relevant software. Provenance and release metadata tampering: again, this needs the signing keys, and we should be able to revalidate these things, but if we receive an SBOM and don't validate it, what was the point of doing it in the first place? And finally at this point, signing an SBOM with compromised keys. Those keys must be held under lock and key; it's unlikely that we'd see Faraday cages for this kind of thing, so again, the build infrastructure must be sacrosanct. There's a concept of SLSA minus one (we'll get to SLSA in a moment) whereby the assumption of secure build infrastructure that SLSA makes must itself be verified. If we're using outdated Jenkins installs, we've probably got a vulnerable plugin, and the plugin model brings everything into the same trust domain. Okay, on to the package repository and distribution. We're almost there.
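Backing up to the weak-hash mitigation for a moment, it's straightforward to sketch with the standard library. The idea is that a forged artefact would need to collide simultaneously on every algorithm recorded, which is not currently feasible even though SHA-1 alone is broken:

```python
import hashlib

# Record several digest types, as an SBOM's hashes section might
ALGORITHMS = ("sha1", "sha256", "sha512")


def multi_hash(data: bytes) -> dict:
    """Compute a digest of `data` under each recorded algorithm."""
    return {alg: hashlib.new(alg, data).hexdigest() for alg in ALGORITHMS}


def verify(data: bytes, recorded: dict) -> bool:
    """Accept the artefact only if every recorded digest matches.

    A SHAttered-style SHA-1 collision alone is no longer enough: the
    forged file would also have to collide on SHA-256 and SHA-512.
    """
    actual = multi_hash(data)
    return all(actual[alg] == digest for alg, digest in recorded.items())


recorded = multi_hash(b"release-artefact-v1")
print(verify(b"release-artefact-v1", recorded))  # True
print(verify(b"tampered-artefact", recorded))    # False
```

The cheaper alternative, as noted above, is simply to drop broken algorithms; recording several is a belt-and-braces approach for SBOMs that must carry SHA-1 for legacy consumers.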
Fictional dependency release: creating a fictional version of the software and SBOM, or tricking users into downloading that malicious version. Again, the precondition is compromise, but this is a way that the SBOM fails to provide the safety we expect of it. Social engineering attacks might be a way to get access to these pieces of infrastructure in the first place. Man in the middle: here we're assuming that there's no privacy at this point, or that the signing keys are compromised, so cryptography is not reliable. For some large organisations, unfortunately, this is de rigueur. Package assessment: a lack of contextual verification. If we're given this information, we have to use it; it makes no sense to distribute it without verifying it. It's a zero trust-esque metaphor. Compromised or outdated threat intelligence: for large organisations with air gaps, it's often a daily sneakernet routine to download the latest CVE database, walk it across the air gap and plug it into another system. If that process can be disrupted, we won't be able to see what vulnerabilities exist inside the system. Again, there are mitigations, especially when scanning things offline and outside of those environments. And transitive dependency proliferation: we talked about that first level of depth of SBOM dependency listing; the opposite is if we include everything, the entire graph of dependencies all the way back, and it becomes incredibly difficult to manage. This is the reality that vulnerability assessors have to deal with, but they're two sides of a not very pleasant coin. And finally, deploy and operations: if we are providing an effective software firewall at the extremity of our organisation to ingest untrusted open source software and third-party code, and somebody gets into that validation infrastructure, again, all bets are off. These are some of the obvious threats, but they're the things that contextualise the way we need to be careful with the SBOM.
So of course this all smells a lot like SLSA, the supply chain security framework, which is just another view on the threats that we just modelled. We've extended this with the classified threats, so this is everything that we've just run through, but now re-based on SLSA. This work is going to go into the OpenSSF and be used as a basis for building a full software supply chain security model. If anyone would like to contribute to that work, we'd very happily take collaborators. Now we'll look briefly at the same thing as a graph. Can you imagine how much could go wrong? So what we're looking at here... sorry, mind if I have a sip of water? Thank you very much. A lot of words there, Andy, a lot of words. Word salad. So what we're looking at here are the threats to a portion of the build system, and we'll expand this picture. These are some controls that we can apply to the build system. As you can see, there is a non-exhaustive but reasonably long list of things that we have to consider at all stages. This is the pre-build stage: before we've even started to build our application, we can use private build workers and protected repos. Then we have the build itself, that sacrosanct build-test-publish phase. Exploding that out, here are more risks that we have to deal with and some controls that we can use. Again, there is a lot of work to go on here, a lot of moving pieces; we're talking full automation, infrastructure as code, rebuildability, fundamental to securing these things. And then finally the post-build. Can you imagine how much can go wrong? We build attack trees for these kinds of things so that we can quantifiably understand where our controls should be, and the countermeasures' efficacy in the face of all of these potential threats.
So there's lots of good stuff going on here, in the CNCF TAG Security group and the OpenSSF, and a lot of work in the space. If you would like to know more, please do come and reach out afterwards; we're both available on Twitter, and you can find us in the community in various places. And I will go and have throat sweets before lying. I'll pass over to you, good sir.

So I think what's pretty clear from the threat model (and obviously no one's expected to take all of that in in one go, right?) is that SBOMs are only really useful to us as part of a much larger system that's based on trust and attestation. There are so many failure points in the use and management of them where, if we don't deal with trust properly, we can basically make that SBOM completely worthless, right? We're back to that thing about it being just a bunch of JSON. The magic bullet is not really about the JSON itself; it's about how we work these things into our software development life cycle. But that is not to say at all that SBOMs aren't a very good thing. They help us to understand the dependency tree, and used properly they can allow us to analyse our infrastructure for known vulnerabilities, as well as to manage supply chain security as part of that larger system. But we've got to treat them as software. We've got to think about their life cycle, how we create them, how we manage them. We've got to be able to verify them at all points where we're going to use them. And we need to give them some love: we've got to put the things into them that we really care about. So that's all, folks. Thank you very much for listening, and we are happy to take any questions. Are we? Come on, you must have some questions. Yes sir, red button on the bottom I think. Andy or I can just repeat the question. So: you need SBOMs from your upstream suppliers and you want to give SBOMs to your downstream customers, and how does that transfer happen, generally speaking? Do you want to answer that?
so the question was how does that transfer happen from taking an S-bonds from upstream to passing it on to your downstream consumers of your software right? especially if you're delivering embedded software I mean there's different approaches I think there's probably people in the audience that would have a view on this as well it can be packaged in the artifact itself it can be packaged to a separate OCI artifact it can be treated as if it was a checksum so it's delivered via a different channel so sort of back in the day of where we verify a shaw of downloaded artifacts but I wonder if Puerco might have an answer to get into more trouble with that question there are cases where the upstream vendor if I have bought chips or anything else will refuse me the right to deliver any S-bonds to my customers for their devices that's getting the whole situation even more complex because it has a new layer called legal which is painful I mean I think that it's pretty valid though right because I think that's one of the things that is still kind of emerging on you know I mean I know for containers we store them in in OCI registers right but I don't think that's a solved problem for every space in which you would want to distribute your S-bonds and it could be the case as well that like Vex because they're temporarily bound they probably want to be from behind an API somewhere because if your artifact is hashed and content addressable which is the point of an S-bond then it may be that you can provide them via API as well but yeah it's still it's done in different mechanisms for different places at the moment I mean we've definitely heard stories about people delivering them on paper right which you know I think that's the kind of if you're delivering it on paper and putting it in a file that's not really solving your supply chain securing problems yeah I guess just a compliment an answer to you if your supplier does not provide you an S-bond just don't buy anything from the 
supplier. The problem is they will provide me an S-bom but won't give me the right to distribute it.

But you said something that triggered me, coming from the IETF and the world of protocols: you said an API for Vex. I haven't seen anyone focusing on that. I mean, an S-bom is tied to a particular release, but a Vex is a constantly changing document, right? And the discovery of where to download it, the discovery that there's something new, that's really a protocol question that we need to start looking into. I don't want every single vendor doing different APIs, and to have to implement thousands of different APIs and crazy stuff. I want a standard here.

So this time last year I did a talk at Open Source Summit in Dublin about Vex and the emergence of the standard. I was tempted for that talk to build the APIs, and we started to ruminate on what this looks like. It's all down to trust. It's all down to trust again. Who is providing that Vex document? Is it the vendor? Great, okay, but what if they've made a mistake? What about a security consultancy that wants to assure that piece of software? Do I get to publish one then? Who do we trust? Who ultimately trusts the signing keys? Where's that root of trust? Because it's not the same as certificate roots of trust, where we've got those global CAs, or DNS, where we've got the signers. So we're into a federated API for Vex, and I gave up at that point; I thought this is too much for one talk. So yeah, it's currently unsolved. I think really the closest anchoring for that should be the vendor. If there is a vulnerability in the dependency graph, the vendor can say: we've run symbolic execution and we're 100% sure that you can't get here; or it's an open source package and we've removed the function that has the vulnerability when we package up the software; or we tree-shook it, from a JavaScript perspective, which is the same thing. But it all comes down to trust, and if you're a big bank you have a financial relationship on paper with the vendor
where they have responsibility; if it's open source, it is an individual trust relationship. So I didn't really want to get more involved than that, but thanks for asking.

So, two things. One is that the live stream suggests the OCI referrers API as potentially a mechanism to distribute; that was from Brendan Mitchell. But I think, from the open source perspective, one of the biggest concerns has been the cost of distribution, because maintainers are being slammed enough with all sorts of stuff, and now we come in and say, hey, we need a whole new mechanism, potentially a whole new set of APIs, that they now need to support and pay for. My question, though, is actually around something else. Most S-bom generators today strip out the actual transitive dependency information. They just say: here is the list of dependencies I found in your build, in your container, but not the relationships between those things. And often it's the relationships between those things where the security comes into play. I'm curious if you have any thoughts around how to fix that. I know it's the hard question.

Yes, I mean, yes, it's a hard question. It is a hard question. It's misaligned incentives again, to some extent. As a maintainer, you don't want to have to be issuing point releases for a dependency graph of a thousand packages, and you end up with an unreasonably sized S-bom that takes a long time to verify; if you're going to go and run a hash function over every single file system node, it's going to take you a long time. So for a certain size of application, it's almost intractable. And then on the other hand, if you are capturing all of that dependency information as a consumer, then you have a lot of work to do, and you're back into over-exposure and alert fatigue. So I'm not quite sure where the solution is.

No, I mean, I have nothing to add other than I don't know either, but thanks for the question. I love these loaded questions from supply
chain security experts. Anyone else? It's a set-up. Okay, yep, one more.

So I was curious when I heard about S-bom enrichment and trusting S-boms in the same talk, because to me, whenever you enrich an S-bom, you're essentially creating a new document. So in a world where we can get trusted S-boms, which is not quite here yet, how do you envision transitioning that trust from the original S-bom to the enriched S-bom?

That's a very good point. Yeah, I mean, I think enrichment is clearly something that we see people doing once they have that trusted S-bom already, right? So it's something that you would do internally as an organisation, and everybody ultimately has to make their own decisions about what they want to put into that document. I think Parlay is a sort of proof of concept, in a way, of what's possible with that, and we may actually see a lot of S-bom creators adding that information. The whole thing is kind of still in flux, which is really the point. But there's clearly stuff that end users care about that most S-bom creation tools are not currently doing to that level. So the use case is more of a last mile, because it's so subjective: what does your organisation care about, and what are the things within your build pipeline that you need to see within that document? So yeah, I definitely don't see it as being something which a vendor of software would enrich and then pass on, because it's so subjective; I think it's down to what you want as an organisation.

I mean, I think the vulnerability data is an interesting one, right? Because where do you want that VDR-type stuff? One place that you can put it, in the spec, is in the S-bom itself. I wonder, would you potentially use an external reference to the previous version, and just say this is a totally different S-bom, and it's now signed at my level? I mean, obviously tools will not then go and
verify the thing, so you can kind of put whatever you want in it, and you need another manual step to do it. But yeah, some link between them would be the only way, I think. That use case applies to so many different documents in the supply chain security space, so maybe solving it once for all of them would be a good idea: inheritance.
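The linking idea discussed in that last answer can be sketched concretely. This is only an illustration, not part of either S-bom spec: the `derivedFrom` field and the document shapes below are hypothetical stand-ins for whatever mechanism (e.g. an external reference) a real spec would use. The point is that enrichment produces a new document, so the new document records a content hash of the original, and a consumer walks that link back to the signed original before trusting the enrichment.

```python
import hashlib
import json


def sbom_digest(sbom: dict) -> str:
    """Content-address an S-bom by hashing a canonical JSON form of it."""
    canonical = json.dumps(sbom, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def enrich(sbom: dict, extra: dict) -> dict:
    """Produce a NEW document: add data, and record the digest of the
    original so trust can be traced back to it. The original is untouched."""
    enriched = json.loads(json.dumps(sbom))  # deep copy
    enriched.update(extra)
    # Hypothetical back-reference field, not a real CycloneDX/SPDX field.
    enriched["derivedFrom"] = {"alg": "sha256", "digest": sbom_digest(sbom)}
    return enriched


def verify_lineage(enriched: dict, original: dict) -> bool:
    """Check the enriched S-bom really was derived from this original."""
    ref = enriched.get("derivedFrom", {})
    return ref.get("digest") == sbom_digest(original)


if __name__ == "__main__":
    original = {"bomFormat": "CycloneDX", "components": [{"name": "left-pad"}]}
    enriched = enrich(original, {"vulnerabilities": []})
    print(verify_lineage(enriched, original))  # True
```

Note the manual step the speakers flag: nothing here verifies a signature on the original. In practice you would verify the original's signature first, then trust its digest, then (and only then) trust the enrichment as your organisation's own, internally signed layer on top.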