and author data and the timestamp, because timestamps are important. So there are a couple of formats out there today already. SPDX is the one I'm most familiar with, but there's also CycloneDX and SWID. SPDX became an international standard as of last year, so that's obviously where the Linux Foundation is putting its focus. There are other practices that have to be associated with these SBOMs, and this one little table pretty much summarizes what the minimum elements are: you need to be able to understand your dependency trees, how these components are hooked together, and what you depend on. A component can be anything from a package to a file; it's a unit, which is why they use the term component. You also have to know what your known unknowns are, and that's the part about relationships. Do you know you have closure? Do you know all the pieces? Or, hey, I'm a company that has proprietary technology, and I'm going to tell you I might have something but I'm not going to give you details. That becomes a known unknown. But you have ways of signaling that, so that you know where your risk areas are. Pluggable modules and firmware are very common in the embedded space. Say someone has an accelerator technology; making visible what gets loaded onto your FPGA, and how much people want or don't want to expose that, is something we need to be able to handle and model too. And this model will basically let us do that at a conceptual level. Getting the tooling there to make it easy is where we have some challenges, so that it's put into the builds and, like I say, it's a one-line change somewhere. Just taking those minimum elements, supplier name, component name and so forth: the SPDX 2.2 spec, which has been around now for a year and a half, almost two years, meets them all. Okay.
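To make those minimum elements concrete, here is a hedged sketch of what they might look like in SPDX 2.2 tag-value form; the document name, namespace, tool, and package names are all invented for illustration:

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-firmware-sbom
DocumentNamespace: https://example.com/spdxdocs/example-firmware-1.0
Creator: Tool: example-build-tool-1.0
Created: 2022-06-20T12:00:00Z

PackageName: example-firmware
SPDXID: SPDXRef-Package-firmware
PackageVersion: 1.0.0
PackageSupplier: Organization: Example Corp
PackageDownloadLocation: NOASSERTION

Relationship: SPDXRef-DOCUMENT DESCRIBES SPDXRef-Package-firmware
```

That covers supplier, component name, version, unique identifier, dependency relationship, author, and timestamp, the minimum-element list discussed above.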
We're about to see the SPDX community coming out with another rev of the specification that will help make interchange work a little better with some of the other formats, but all the concepts are there today, and they're in the other formats too. So the minimum is there, but it's very much recognized that you have various different types of relationships, you have licensing information; these things are all important at the heart of an SBOM, and there are use cases for them, but they're not mandated. It's good if you can put them in, and generating these with the best information available will save problems down the road, but initially only those minimum fields are mandated, plus the signaling of the known unknowns. Having these types of relationships, and being able to express whether something is statically or dynamically linked, has implications for licensing, but it also has very strong implications for security. If you're running something and something changes out from underneath you, there may suddenly be a vulnerability in your runtime libraries. Being able to understand how this all relates from a system perspective is a good challenge. So, from that survey I was talking about, 40% are using SBOMs today. And the regulatory agencies are waking up, which hits a lot of our spaces on the embedded side. Specifically, the FDA has already issued preliminary guidance about what they are looking for in an SBOM, and they're adding a few more fields that they want to see, like end-of-life support, our favorite: how long is this being supported for? So we're extending things a little, at least on the SPDX side, so that if you know that information when you're building a system, you can just put it in.
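To illustrate the static versus dynamic linking point made above: SPDX 2.x defines relationship types including STATIC_LINK and DYNAMIC_LINK, so an SBOM can record exactly how a library got into a binary. The element names below are invented for illustration:

```text
Relationship: SPDXRef-App STATIC_LINK SPDXRef-libcrypto
Relationship: SPDXRef-App DYNAMIC_LINK SPDXRef-libc
```

A statically linked library is frozen into the image, while a dynamically linked one can change underneath you at runtime, which is why the distinction matters for both licensing and vulnerability tracking.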
And with the latest kernel, we know what the support is from the community; corporations may add different support terms, and they may want to put that in when they ship it out. So we have ways of working with these things, and the CIP project may want to actually advertise that for some of the kernels they care about, you can have an even longer life cycle. These are all areas where I think we've got some interesting innovation and some discussions to have as a community to figure out how we can make this better. The national energy people are very interested in it, as are health care and the automotive sector; I know they've all got working proofs of concept. Show of hands, I love being able to do this again: how many were in the keynote yesterday? Okay. So the DaggerBoard project is one that consumes SBOMs, one of the first nice ones to actually consume SBOMs, stick up a dashboard, make visible what's there, and manage monitoring the vulnerabilities against the components you have. And that's what SBOMs will eventually be useful for. Seeing these open source projects come out that consume SBOMs and help people navigate whether they have risk or not, this is where we're going with it. It's one thing for us to produce them, and it's another thing for people who are using our software to be using SBOMs to figure out if they've got a risk or not. And that's pretty much what the heart of this whole initiative is: letting people who are using the software do better risk management. So, embedded projects that are generating SBOMs today: Zephyr, of which we just had our first developer summit. I hope we can affiliate more with the Embedded Linux one next year.
We actually had an in-person session where Steve Winslow, who's here in the audience and can talk more about this topic, was presenting on where the state of the art of SBOM generation is and some of the ideas we need to be heading towards. To do it, you just run west spdx, then you do your west build, and you get three SBOMs coming out of it. And then there's the Yocto Project: in the last release, Joshua Watt has done some awesome work, and Saul has done some work with the kernel, so right now if you're using Yocto you can automatically generate SBOMs. You basically change one of the configuration files and you just keep doing your thing. It all happens in the background, and you get this whole host of small SBOMs, with relationships stitching them all together, so you understand the system that's there. So let me go into a little bit of detail here. Steve Winslow gave a talk I could borrow a slide from, and there are videos from him available, so you can watch more details of what's happening behind the scenes. What it does is produce two, potentially three, source SBOMs: one for the Zephyr sources, one for the SDK sources, and one for the app. So there are these little SBOMs for those, just the sources, the source SBOMs. And then there are build SBOMs, where the .o files are linked back to the sources, so you have that traceability, and the .o files are linked into intermediate .a archives and so forth up to your final ELF image. So you have a full SBOM of that. So, yeah, hands up, go for it. Since these files are changing all the time and you're going down to the file level, is every file going to have a rev? Well, every file has a unique hash, which can actually serve as a rev. Okay, so right: in SPDX, every file is hashed.
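Since every file in an SPDX document carries a content hash, a file's identity is just a digest over its bytes. A minimal Python sketch of computing the checksums SPDX commonly records per file (the function name is mine; SHA-1 is the one checksum SPDX 2.x requires for files, SHA-256 is frequently added):

```python
import hashlib

def spdx_file_checksums(path: str) -> dict[str, str]:
    """Compute the per-file checksums an SPDX document typically records."""
    with open(path, "rb") as f:
        data = f.read()
    return {
        "SHA1": hashlib.sha1(data).hexdigest(),
        "SHA256": hashlib.sha256(data).hexdigest(),
    }
```

These digests are what make the "serves as a rev" answer work: any change to the file's bytes changes the hash, no version counter needed.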
Okay, and so you can, quite frankly, memoize your system, such that if you're rebuilding things you don't have to redo your scans to find information. If you've done it once and you've got it in your repositories, potentially being able to reuse it in these systems is something I think we'll be heading towards, the same way we try to find ways of optimizing our builds today. But the vision is: every time you build a binary image, there's an SBOM that comes out with it at the same time. That's where we need to be, literally, and it just happens behind the scenes. It's not a big frantic thing at the end of the day; it's sitting there as a sidecar file. And if people want to process it further before they share it, they can do that, but it's there with whatever information the build infrastructure knew. And I think that is how the plan for embedded can be effective. Similarly, to show where this is actually being used now, and that it just happens by default without anyone thinking about it too much: if you go onto the Zephyr website, there's a blog post on this, but there's a dashboard that the Renode people, who make a simulator, put together. It's got a whole suite of the boards that are already working with Zephyr; there are almost 400 boards in there right now. And any time that dashboard says passed or built, if you go and download the image, you get the executable image and you also get the SBOMs. You get those three SBOMs I was talking about: the Zephyr sources, the app sources, and the built image. And you can see them; this is just what it looks like. This is tag-value, but tag-value isn't the only format we can use. It can also be JSON, YAML, XML, even spreadsheets. SPDX is a language.
It's just a language for expressing this, and how you want to work with your tooling and which format you want to use is pretty much up to the people doing the tooling; we are not overly prescriptive here. So the other relevant case study is what was done in Yocto, and this is thanks to Josh, who gave me permission to use his slides; I'm all for getting permission for my slides. Does anyone in this room not know about Yocto? I didn't think so, so I can go over that slide fast. Basically, Yocto produces a whole bunch of different types of images, be it containers, be it .debs and so forth, and any of these can effectively go forward with it. The build flow is: you've got your host tools, you create your toolchain, everything's hashed, and then you're executing your recipes to make your application and your images. And literally every step along there gets an SBOM. You can get an SBOM for your sources, an SBOM for the recipe, and for the cross-compiler; once the cross-compiler is there, you have your whole build toolchain SBOMed, which, gee, is useful for safety and for supply chain attacks. From there, when those tools execute and generate sources, you get that coming out as an SBOM, and you get SBOMs for the recipes and the built images. Then you create your target image by pulling all that together, and you have an SBOM for that. All of those SBOMs you don't want to put in one big file; you want them broken up, with logical relationships between them, so you can reuse components over time. And that's where we are today. Now, what we do for a final index of everything is something we're having discussions about, or at least I want to be having some discussions. Josh is here; I can't quite see. Okay, there's Josh's hand up there.
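The "change one configuration file" enablement mentioned above is, in the kirkstone-era Yocto releases, roughly one line in conf/local.conf; treat the exact class name as something to check against your release's documentation rather than a guarantee:

```conf
# Enable automatic SPDX SBOM generation for every build
INHERIT += "create-spdx"
```

After that, normal bitbake builds emit the per-recipe and per-image SPDX files alongside the usual build artifacts.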
And feel free to come up closer so you can answer some of the questions people have, Josh, it's okay. But yeah, that's what we've got going today. So if we're talking about the state of SBOMs in embedded, this is sort of where I think the state of the art is. There are other projects working towards getting some of this in place; I chatted with Richard Barry a couple of months ago, and he said he would be doing it for FreeRTOS as well. So if we've got the Zephyr ecosystem, the FreeRTOS ecosystem, the Linux ecosystems, and the Yocto build system, we've got a good swath of embedded. But my question now is: what's missing, and where do we want to go? One thing we're trying to do is security; that's a missing area. Things like DaggerBoard, which you just saw, will let people take these SBOMs, feed them into their ecosystems, and then do the monitoring. SPDX lets you link to CPEs, the common platform enumerations; you can have purls in there; we'll probably link to vulnerability disclosures. These things happen on their own time scales, and how we get it all to fit together, these are all good questions. But with that, I think that's pretty much what I had to talk about. If people are curious, go for it, Tim, go for our questions. So, hypothetically, if a company was using Debian, does Debian have some SBOM stuff going on? So, SPDX actually started from Debian, way back when, from Debian's DEP-5 format. The trouble is the lawyers wouldn't accept the DEP-5 format because they couldn't have verification. So the tag-value you see here is literally just DEP-5 with hashes added and full enumeration enforced. Yocto is able to generate Debian packages, so you can be generating these SBOMs for Debian packages through the Yocto side today.
It's an area for outreach with the Debian ecosystem, but since the sources are out there, I'd love to get it such that every time they build their distros, they're just building out the SBOMs for the built packages. Specifically, Sebastian Crane from the SPDX community is already doing outreach to them and is becoming a Debian maintainer so that they will interact with him. So fingers crossed, at some point later this year or next year, we might have that just happening out of the Debian builds too, which would be a good story. There's a fair amount of discussion going on with Red Hat; they're also looking at how to start doing this for their distros. So a lot of these companies are working the company side of it, and I think the economic pressure I was talking about earlier will move things forward. On the open source community side, making it as easy as possible is the piece we have to work on, in the sense of outreach into these communities. I've been trying to reach out to the Debian community about SPDX for about as long as SPDX has existed, but I never became a Debian developer, so we need people who are Debian developers to help move it along in that community. Go for it. Oh, well, there's someone back there next. It's okay, you've got the mic. Okay, so if a company has an internal fork of Zephyr and you've modified some files, does that defeat the CVE matching, or is the matching based on source? So right now, if you've taken an internal fork, what you need to do is figure out whether one of your changed files is in the part that participates in a vulnerability, so you still need to get down to the source level. How much of that can be, and has been, automated is a good question.
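One way that source-level matching could eventually be automated is to intersect the file hashes recorded in your SBOM with the hashes of files known to participate in a vulnerability. The sketch below is hypothetical, not an existing tool; the data shapes and names are mine:

```python
def affected_files(sbom_files: dict[str, str],
                   vulnerable_hashes: set[str]) -> list[str]:
    """Return paths from our SBOM whose content hash matches a file
    known to participate in the vulnerability being investigated."""
    return sorted(path for path, digest in sbom_files.items()
                  if digest in vulnerable_hashes)
```

Because hashes are content-based, this kind of check survives files being copied between projects, which is exactly the fork scenario the question raises; files you have modified would hash differently and need a closer look.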
Right now the vulnerability databases just report on things like CPEs, the version number at the package level, which is not sufficient. The example that taught me that was Amnesia:33, where Zephyr had FNET in the LTS as a component, so to speak. If you looked at just the component versions, you'd think, oh, my Zephyr LTS is in trouble. Well, actually, the vulnerable files and functionality were never included in Zephyr, and we had no way to signal that. Being able to signal whether something is vulnerable or not is something there's work going on around: there's an initiative called VEX, Vulnerability Exploitability eXchange, so that we can automate the signaling of whether or not code is vulnerable. But you have to have that SBOM first to do it. And from my perspective, anyhow, we need to be able to track from the version that was built into an image, and what actually got included through the configs, back to the sources, to know whether or not we're really vulnerable. If we can get that traceability, which we've got in these cases, then we should eventually be able to start to automate. And we all know that open source is built on copying a file from one project to another, right? So being able to track back, hey, I'm using this file here, at the source level, that's the granularity you're going to need: I know this project over here has a vulnerability, I'm wondering if that file is participating, and can I figure out if it's in mine? There are some interesting initiatives, like the GitBOM initiative, that might give us a way of fast-matching all this stuff together, and I'm really excited about having those linkages happening between things. Okay, next question? So now we have the SBOM, ideally.
Has there been any thought about what we do with that for device verification? Say we get this device from our supplier: how do we ensure that the software they said they provided on the SBOM actually conforms to what's on the device we get? Well, hopefully they've been using an SBOM that has hashes. You'll notice hashes were not part of the minimum elements; there was a lot of long, hard debate on that. Some of us felt very strongly that they needed to be included, because we'd seen the problem, but there was a lot of pushback, so they didn't make it in. But if you actually read through the document, they expect hashes. And hey, if you give me a binary and you've given me an SBOM, first check: do the top-level hashes match? They don't? Oh, why? And then get down into the levels. So by using hashes we can operate in the trust-but-verify mode, which I think we all need to be in. Go for it. Hi, I do supply chain security at Anaconda. My last interaction with embedded stuff was from a prior project, and I don't actually know how common Anaconda code is in embedded at the moment, but we generate SBOMs with all of our builds in the SPDX format, and we're trying to figure out how to connect up the vulnerability assessments we do into that. VEX is what we're currently thinking about, at least what I'm currently thinking about. But yeah, I'm curious: what's going on there? So first thing: if you're already using SPDX, and you're an LF member, it's free to join the SPDX project, and let us use your logo; I'll put that plug in to everyone. In terms of how we all hook this together: the keynote yesterday by Jennings from New York-Presbyterian, they've open sourced the code. They're consuming SBOMs from others, keeping an internal database, and matching it to vulnerabilities. That, I think, is the first case.
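The trust-but-verify check described earlier, comparing the hash declared in the SBOM with a hash computed from the artifact you actually received, can be sketched as below. The tag-value parsing is deliberately simplified (it assumes a single SHA256 PackageChecksum line; real documents need a proper SPDX parser):

```python
import hashlib

def sbom_declared_sha256(sbom_text: str):
    """Pull a SHA256 PackageChecksum out of simplified tag-value text."""
    for line in sbom_text.splitlines():
        if line.startswith("PackageChecksum: SHA256:"):
            return line.split("SHA256:", 1)[1].strip()
    return None

def matches_sbom(artifact: bytes, sbom_text: str) -> bool:
    """First check: does the top-level hash in the SBOM match the binary?"""
    declared = sbom_declared_sha256(sbom_text)
    return (declared is not None and
            hashlib.sha256(artifact).hexdigest() == declared)
```

If the top-level hashes do not match, you then descend into the per-file hashes to find out why, exactly as described above.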
So if you've got the SBOM for your product, getting it to the people who are consuming it and letting them monitor it is, I think, the first case we'll see active in the field. Beyond that, if you've got people including your products in other places, providing the SBOM to them proactively might be a step too. Is that what you were looking for, or did I miss the question? Yeah, I was also wondering about any kind of consolidation of, here's what we're looking for, across a series of industries; that would be really interesting. So there are actually working groups doing proofs of concept today in medical devices, which James talked about, but there's also one going on in automotive, and one in the energy sector. A lot of the sectors with regulatory oversight are running active proofs of concept right now to prove to themselves that yes, they can do it; yes, they can figure out how to operationalize this; and yes, they can take the operational information and pull it into their policies. So there's work going on right now on how we share SBOMs: one organization shares it, broadcasts it one-to-one, and how do you trust that it hasn't been corrupted along the way, the normal problems, right? But then the other thing is the policy engines getting created at companies. The first question, whether you have an SBOM or not, is obviously a risk factor. But then once you start digging: has it been verified, do we have active monitoring of the components that are key, things like that. I think we'll see more and more of that type of operationalization. Quite frankly, from my perspective, if I've got a car with all that software in it, I kind of want to know someone's keeping an eye on that stuff. Next question. Not so much a question, but a comment.
So I run a personal server in my home, a little Minecraft server for my kids, and it got hit by Log4Shell. It would have been nice to have SBOMs, because it took me a couple of hours to figure out whether or not I was vulnerable, and if I'd had a manifest it would have been so much easier. Yeah. Are you looking at working with things like autoconf, automake, CMake and so on? I would love to talk to anyone in that space who wants to work with us to do this automatically there. I don't work in that space myself, but for CMake, basically: Steve, behind you. Go ahead, Steve; he did the work in CMake, so he can talk about it directly. Yeah, I was just going to say, for Zephyr, because they use CMake as their build system, we're really just taking the metadata that you can get out of CMake through their file-based API and then essentially processing it and turning it into what comes out in the SBOM as SPDX. So there are some things that started a little higher level with CMake and then got more specific about how it's used by Zephyr, but there's some stuff there I'd be happy to chat about. So how hard is it? Is it just adding one line to your CMake file, or is it a lot more difficult than that? You can very easily get CMake to give you JSON files with all the metadata. Actually, a lot of the code for getting it to work in Zephyr was to parse that JSON, figure out what to do with it, and figure out how to match it up against the files that are actually being used in your build. So it's not one line yet. I would love to see it become something that's actually in CMake, so that it could be that simple; I think that would be a goal. Yeah. Okay.
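The CMake file-based API Steve mentions works by placing a query file under .cmake/api/v1/query/ (for example, codemodel-v2) before configuring; CMake then writes JSON reply files under .cmake/api/v1/reply/. A hedged sketch of walking those target reply files to collect source paths follows; the reply layout here is simplified (real replies also include an index file and more nesting), and the function name is mine:

```python
import json
from pathlib import Path

def collect_sources(reply_dir: str) -> set[str]:
    """Gather the 'sources' paths from CMake codemodel target replies."""
    sources: set[str] = set()
    for reply in Path(reply_dir).glob("target-*.json"):
        target = json.loads(reply.read_text())
        for src in target.get("sources", []):
            sources.add(src["path"])
    return sources
```

That set of per-target source files is the raw material the Zephyr tooling matches against what the build actually used before emitting SPDX.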
So, open question then: where do we have gaps, other than the tooling infrastructure for embedded, that we need to work on fixing, and are there any we think are the highest priority to focus on first? Any votes for the GCC compiler and the linker? Yeah. Well, the GNU toolchain is still used for building most of the embedded space, and the debug information these tools produce is a byproduct we can basically pull into SBOMs. But what information out of the compiler? You've got the sources coming into the compiler, you've got the compiler version; what other information besides that would you want? So: what files made it into the final binary? Okay, so the actual sources that went into this .o file. Yeah, what sources have been preprocessed, how they've been preprocessed, what the .a intermediates are. So all of that is in the DWARF; you can pull it out. Exactly, it's all in the debug information right now, and we need things that process that debug information. Again, hopefully it's just a one-line option when you're doing your build with the GNU toolchain to export that as an SBOM with the binary. Whenever you generate an ELF, there should just be an SBOM there. The linker steps and the DWARF debug information have a lot of it; we just need people to do the sort of processing that we've done with CMake there as well. It shouldn't be too hard. Please, someone, anyone who can put that in, let me know and I'll happily spread the word that it's done and let people come and help us. And like I say, anyone who wants to work on that and wants to understand more about the SPDX format, reach out to myself or Steve and we'll help you; is it okay for me to speak for you, Steve? He's nodding his head, okay. We'll help you make sure the SPDX coming out is correct.
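Pulling source names out of DWARF, as discussed above, can be prototyped by scraping `readelf --debug-dump=info` output for compile-unit DW_AT_name entries. The sketch below parses text in that general shape; the exact readelf output format varies by binutils version, so this is illustrative rather than robust:

```python
import re

def compile_unit_sources(readelf_info: str) -> list[str]:
    """Extract DW_AT_name values (source file names) from
    `readelf --debug-dump=info`-style text."""
    return re.findall(
        r"DW_AT_name\s*:\s*(?:\(indirect string.*?\):\s*)?(\S+)",
        readelf_info)
```

A real tool would walk the DWARF structurally (via a DWARF library rather than regexes) and also pick up DW_AT_comp_dir, include files, and the linker's view of which objects made it into the final ELF.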
There are online validator tools for SPDX, so if you think you've got it close, you can throw it through the online validator and check. And you can put it out as JSON, or as tag-value, or any of the formats, because the validator takes all of them in and can then convert into another format. So if you've got one tool doing something in one format and you need another tool that only accepts JSON or something like that, you can convert it and pull things together to build these composable pieces. Next? Yeah, go for it. So don't you need to also look at the command line that you used to execute the compiler? That can make a difference to the binary. Exactly, and one of the working groups happening in SPDX right now is a build working group, with Brandon Lum from Google sort of leading it, along with a bunch of other people interested in tracking what's important in builds. You'll see that we're trying to collaborate with the reproducible-builds people, to make sure we reference what they've already done; we don't want to reinvent the wheel here, we just want to leverage it effectively. But the command line, how the build was invoked, is a large part of it, as is sometimes the environment it runs in. So, another question I just heard about today: GitBOM, do you know about GitBOM? Yeah, so the 2.3 version of the SPDX spec is about to be release-candidate tagged; we've been putting the last set of pull requests in. And one of the things in there is a link to the gitoid. We worked with the GitBOM community such that if a GitBOM has been created with the hashes for some component, we can do an external reference to link an artifact to it. So this way we can have that...
But the thing I'm excited about there is that some of it will not have the file names and things like that; yet having the SBOM data and having the links to the gitoid will let us do these fast searches we want, to figure out whether a file has moved from one product over to another, eventually. And I think that will be very powerful once we actually build up the ecosystem around it. Sure. Any other questions? Any other last comments? Oops, one more last comment, go for it. Yeah, a comment. You mentioned that SBOMs should be an automatic part of every build, so you should just get them at the end. Yeah. The point I haven't heard much is: if you have that, and say you have a bug in your firmware image that wasn't there before, you could diff the SBOMs, if you had them, and figure out, hey, the static library changed. Exactly. That would be hard to do otherwise. Exactly. That's why, like I say, in that Amnesia:33 case, the only way we could signal that we weren't affected, because you had to go down to the source file, was literally to put out a blog post. And we need to be able to do that programmatically to get to scale here, especially when we're trying to figure out: I've got a nice little constrained IoT device sitting there, it's going to be really, really expensive to update it, but there is a vulnerability in code that's sort of related to it; am I affected or not? Being able to authoritatively say, hey, I've got to throw all these devices away, or I've got to go through a very expensive update procedure, versus no, I can authoritatively say you're good, no changes needed, you can't be exploited, at least from that vulnerability. We'll see what emerges; they're always coming down the road. Okay, well, with that then, thank you very much. Thank you for attending, and great to see people in person again.