Well, thank you very much, everyone, for coming. It is so lovely to see people in person again and be able to do things like ask for a show of hands. So, since I can finally do it: how many people have heard of SBOMs before this talk? Pretty much everyone. How many people are able to generate SBOMs now? Fewer. OK, it's a work in progress; that's about 50%. And how many are able to consume other people's SBOMs? No hands up. That is our problem right now. So we're moving our way through the ecosystem.

One of the pieces of FUD I've gotten while we've been working towards this (fear, uncertainty, and doubt, sorry, slang acronym, OK, just checking) is, "Oh, we can't generate SBOMs for firmware." What? Of course you can. And they're essential for embedded. In fact, they're probably more essential for embedded than in a lot of other places, because in the embedded space you really need to know what's in your code to know whether you have a vulnerability and need to remediate.

So I'm going to go through why there's interest right now in SBOMs and what makes up an SBOM, but since pretty much everyone here has heard of SBOMs, I'll go through those first parts quickly. Then I'll talk about a couple of example projects that are generating them today, pretty much with a one-line change, either in a config file or on a command line. And that's where we need to get for this ecosystem to really adopt SBOMs.

So, the challenge: vulnerabilities. They cost money, and they keep happening, every year, continuously. As various surveys showed last year, supply chain attacks are growing. It's not just the packages; it's the toolchains that build the packages, and the dependencies on other packages, and the way all of that migrates into executables. Attackers go after whatever is easiest to reach, and there are problems in all of these spaces.

Just to clarify what the software supply chain is: you have your final product, which has some dependencies; those may have dependencies, and so on, and so on. Things get rather complicated from time to time. And while it's easy for a developer to fix one package, having that fix percolate through all the dependents and make its way out into deployed systems becomes very expensive. We're starting to see fines being imposed, and remediation costs running into the billions of dollars for some of these vulnerabilities out in the field. Amnesia:33, for instance, was very much an embedded-space issue, and it rippled fairly widely through many places.

So last year, the Linux Foundation surveyed some of our friends-and-family organizations, and we also ran a panel, so we tried to make sure we were not overly biasing things; the results came out similar either way. 95% of the organizations surveyed care about software security, which is good, and 98% use open source. So there's wide recognition that open source is in the ecosystem, and we need to make sure we can address it properly. This also bears out what Synopsys found in its survey last year.
98% of the code bases they audited had open source components, and open source made up 75% of the application code. So all of those open source components that have migrated into applications and products, and the relationships between statically and dynamically linked pieces, need to be transparent and accessible so you can quickly determine: am I vulnerable or not? There's been a really interesting trend that all these lovely vulnerabilities come out just before Christmas, and it has ruined a lot of Christmases in the last few years, because people end up grepping through their files and their logs, trying to work out what's installed, what's not installed, and whether they're vulnerable.

From the survey, the number two action everyone recognized was to use SBOMs to better secure the supply chain. The first action, of course, is to actually have a vulnerability reporting system and know what you've got in your system; if you don't have an inventory, that comes first.

We're also seeing guidelines coming in here in Europe for securing the Internet of Things, the devices and everything around them, and they're expecting to see an SBOM. This started showing up in 2020, even though there wasn't a really good definition of an SBOM yet. We're also seeing the IoT Cybersecurity Improvement Act in the US, and again, it didn't quite have a good definition. And various regulatory agencies are starting to expect an SBOM; some guidance has just come out, actually. The FDA in the medical device space is being fairly visible about expecting one, and the energy sector, with the CIP standards, is looking at what has to come in.

Then in May 2021 (the US government buys a lot of things) the executive order came out from the Biden administration. As of last month, if you want to be selling to the US government, you're supposed to be providing an SBOM with your product and articulating exactly what you've got. This caused a major inflection last year. The order set out a pretty precise timeline for who would provide the definition of an SBOM; it relied on the Department of Commerce and the NTIA, and they came up with a definition.

So what actually is an SBOM? In the embedded space, we've been using the bill-of-materials idea forever: what's on the board, what software ships with it, and what's in that software. We've been doing that for BSPs forever. However, everyone's been doing it in slightly different formats; manifest files are SBOMs, in some ways. The concept is all there today. What's been lacking is standardization of the format, so that tooling can work at scale.

NTIA had been running multi-stakeholder working groups for about two to three years prior, meeting every week and debating and debating these terms and concepts. There was a group that did the framing, and they tried to figure out what minimum viable was, because that's what they wanted to define first. What they came down to is: an SBOM is a record containing components and the relationships between them.
The components were defined deliberately ambiguously: they could be source code, files, libraries, modules, open source, proprietary, whatever. You have chunks of information or executable components, and you have relationships between them, what depends on what. That is the minimum, and that's what lets you represent the aspects of a supply chain. They did specifically add the concept of known unknowns: being able to be explicit that "I don't know all of my dependencies at this point in time," or, alternatively, "everything here is known." That subtlety does make it harder for some formats, because the full dependency graph becomes very big very quickly when you start looking at modern software. The example we looked at was from SPDX itself, the components being used in some of the SPDX tools, and that's a small system compared to a lot of commercial offerings.

There's no single way to identify a package out there today. There is a desire to move towards things like package URL (purl), a community-driven way of identifying a package, which we want to hook into other things. But the naming problem is part of this issue; there are different ways people refer to the same thing. So we've got problems, but we can make progress.

Here's your basic supply chain: you have an upstream or supplier, the open source or third-party code you bring into your flow. You build your product, your offering, your executables, and then you provide it downstream. By having the SBOM and the transparency, you really do understand your dependencies and what you're bringing in, and you also have better support for compliance and reporting, which your legal teams and risk management teams like. However, the open source ecosystem, as you see, is huge. So our challenge is: how do we get to the stage where you know where a component is being used throughout this whole ecosystem? When I looked at the log4j vulnerability on deps.dev, around 36,000 packages were affected. With open source, you don't know who has downloaded and used your code, or who has included your packages. Sometimes things report home, but mostly they don't, and this is why remediation costs so much once code is out in the field.

We also asked in the survey how aware people were, my equivalent of the show of hands at the start. Most people had heard of the term: very familiar, rather familiar, or somewhat familiar. One of the challenges, though, is that people have assumptions about what the term means, because they think they know, and there are subtleties. As I said, the executive order provided a lot of motivation, because companies want to make money: 78% said they'd be using SBOMs this year. I don't think we're quite there, but that's what was being said at the time. And to do this, we need standards, and various tools interoperating through those standards.

As for the minimum SBOM: NTIA had a timeline, and late last summer it produced the document that talks through the rationale and defines the minimum elements.
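To make those minimum elements concrete, here is a rough sketch of what they can look like in SPDX tag-value form. The organization, package, and identifiers are invented for illustration:

    SPDXVersion: SPDX-2.2
    DataLicense: CC0-1.0
    SPDXID: SPDXRef-DOCUMENT
    DocumentName: example-app-sbom
    # Who produced the SBOM, and when
    Creator: Organization: Example Corp
    Created: 2022-09-15T10:00:00Z

    # One component: name, version, supplier, and a unique identifier (purl)
    PackageName: zlib
    SPDXID: SPDXRef-Package-zlib
    PackageVersion: 1.2.12
    PackageSupplier: Organization: zlib project
    ExternalRef: PACKAGE-MANAGER purl pkg:generic/zlib@1.2.12

    # The dependency relationship between two components
    Relationship: SPDXRef-Package-app DEPENDS_ON SPDXRef-Package-zlib

That covers the minimum fields: supplier, component name, version, unique identifiers, dependency relationships, the SBOM author, and a timestamp.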
The minimum has to support automation, automatic generation and scaling, and the document recognized a few formats capable of doing the job: SPDX, CycloneDX, and SWID. Then there were practices and processes, including understanding the depth of your dependencies, your known unknowns, how delivery works, access control, and so forth.

Looking at the data fields and the relationships, the minimum SBOM comes down to fields that are pretty standardly available. One thing that the multi-stakeholder group I was telling you about wanted included, and that got dropped, was a hash. Best practice in my mind is always to have a hash of your item, so you know you're talking about the same thing and it hasn't changed out from under you. All the formats can supply one; there was just uncertainty about how to require it. From the survey, 47% are doing this today, and 78% plan to do it this year.

The challenge in getting there comes out in a diagram that showed up initially in those NTIA studies: the software lifecycle. A lot of information flows into the software as you're doing a build and bringing things in from outside. The stuff from third parties and open source comes in as you procure and plan; then you do your build, test, and release; you have information after the build; and then you install, and you have configs. Whether a certain config is turned on or off might determine whether or not you're vulnerable to something: if you never go through that code path because of the config you set, you're not vulnerable. So you want to understand that part too.

So there are what I consider four types of SBOMs here. There's a source SBOM, where you're very authoritative: you know exactly your source files, at the package level. Then there's the case where you've been given a binary and want to figure out what's in it: that's an analyzed SBOM. Then: I'm combining these sources and binaries together to create another executable, I know exactly how my build process works, and I can be authoritative that this is exactly what's in here. That's a build SBOM. And then: I've configured this to go into the system. That's a deployed SBOM. Everyone talks about all of these as SBOMs, each with their own mindset, and they talk past each other constantly. So be aware of which part of the software lifecycle an SBOM is generated in and used for. There's valid information and there are valid use cases in all these places, and they all fit the minimum definition, but there is a significant semantic difference between them in how much you can trust the data. I would trust a build SBOM that links back to the sources a lot more than I would trust a random binary someone gave me that I analyzed with third-party heuristics.

So, how to produce them; that's the most important part. I know two embedded projects that are generating them today, each with a one-line change. Zephyr has a build system that works on CMake: you run west spdx with a couple of options, and then do your builds as normal on the command line. In Yocto, there's a config file entry, and thanks to Joshua Watt and Saul Wold, you basically just change that one config line.
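As a sketch of what those one-line changes look like in practice (the board and sample app are just placeholders, and flags and output names may differ between releases):

    # Zephyr: set up SPDX capture in the build directory, then build as usual
    west spdx --init -d build
    west build -b reel_board -d build samples/hello_world
    # Writes the SBOM documents (e.g. app.spdx, zephyr.spdx, build.spdx) under build/spdx/
    west spdx -d build

    # Yocto: a single line in conf/local.conf (available since release 3.4)
    INHERIT += "create-spdx"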
Then you go ahead, build your recipes, and at the end it all gets spit out, hooked together. That's where we need to be in embedded, and it's possible; both projects show it. I'll go into more of what Yocto does; Zephyr is based on C for the most part.

So, hands up (I love doing that): who's familiar with Zephyr? Pretty good, about half. It's a simple real-time operating system, meant for when Linux is too big: for the sensors, the actuators, the things that have to fit in 10 to 100K. It has neutral governance, and it's a very active project, with a lot of products now showing up in the field. We're getting about two commits an hour into the project; the Linux kernel gets about nine to ten commits an hour. So it's moving pretty fast, especially for the embedded space, and has pretty good traction. We're finding that hearing aids are using it, because you want your battery life to last a long time, but it's also showing up in wind turbines for various sensors and actuators. So it's scaling all the way through the ecosystem, and it has tags for tracking things, and so forth.

That one-line change creates three SBOMs, generally: one for the Zephyr sources, one for the app sources, and then a build SBOM, which takes the .o files and links up which ones make it into which libraries, and which libraries make it into the final ELF image. So you have traceability for things like Amnesia:33, where certain files from FNET were significant. We could say: that FNET version was in our LTS, but the actual files where the vulnerabilities lived never made it into Zephyr's repo, let alone into a product. We wanted to be able to express that as a project, and the only way we could do it was to write a blog post. If it's a vulnerability that affects you, you can publish a CVE; but if you're not affected, you have to figure out a way of expressing that too. By tracing back like this, you can quickly know whether you're vulnerable or not.

If you want to see this in action, I'd encourage you to go look at the Renode dashboard. They've got a Zephyr dashboard; if you search for "Zephyr dashboard Renode," you'll probably find it pretty quickly. There are over 350 boards on that dashboard, with five apps each, marked as not built, built, or built and passed testing. For anything that says built and passed, you click on it and you get the downloads: the ELF image, and also those three SBOMs I was telling you about. So it's working at scale, as part of the flow for a plain dashboard, today. If you've got a favorite board, you can go look at its image and then run it on the simulator or work with it elsewhere. There's a little blog post if you want to read more about it.

The other use case is Yocto and OpenEmbedded. Hands up, those familiar with the Yocto Project? OK, all the hands; awesome. So I will not spend too much time explaining what Yocto is. Now, Josh let me use his slides. He gave a talk about this earlier, at the Embedded Linux Conference in Austin, and I highly recommend you watch him explain the details if you get a chance; it should be available. But the net is: Yocto can produce targeted images for everything from containers to simulators to boards, plus SDKs and build tools. It produces all of those right now.
The build flow is basically this: the recipes let you create your cross-compiler toolchain; then you use that cross-compiler toolchain to build your packages; and then you have a recipe for how you're going to assemble those packages to make your target image. That's a lot of information flowing through a supply chain, a very small supply chain, right then and there, because the build tools, all the packages, and the final image are all elements that could potentially have been attacked at some point in time.

The nice thing that surfaced in discussions with Richard and Josh and the others is that the recipe metadata already has a lot of the information we need for generating an SBOM today, and the debug information has the other pieces. A lot of the Yocto recipes have already put in SPDX license identifiers, so you can pull that information from them as well. Versions, source code control URLs, licenses: all there. Build-time dependencies and runtime dependencies are explicit and available. Oh, and the CVEs that have already been patched? Explicit. Source files, package files: it's all there, it's part of the recipe, and it's authoritative. You're not going to be guessing. That's where we need to be. We need to take where that knowledge is and export it, so people aren't having to dig around to get things out; it's just there at their fingertips, without thinking too hard about it.

So what happens right now, if you turn on that config line change, is that you get SPDX documents for your recipes: from the toolchain, from the packages you're building, and from the recipe metadata. You get a whole slew of SPDX docs, and then you want to understand the relationships between them. There's an archive right now, and a couple of months ago we were talking with Joshua about having, effectively, an SBOM of SBOMs, using the format itself to summarize a whole system for you, not just the individual components. That's work in progress.

But with SPDX, we've got the ability to say: what's a runtime dependency of what? What was this generated from? Is that a build dependency? These dependencies are all significant, because you need to understand how things relate to each other. If something is a build dependency and it's been compromised, you may want to go back and rebuild everything. If something's a runtime dependency: hey, I want to update my runtime underneath me, thank you very much. Being able to be explicit like this lets people do things a lot faster, and this is why the transparency becomes key.

The next stage, obviously, is getting tools that consume this effectively, doing things like dashboards and monitoring after deployment. Tooling is emerging there. I don't have slides about it, but there's a project called Daggerboard; come catch me afterwards or bounce me an email for links. It was open sourced in June, and what it does is consume SBOMs and match them up against the National Vulnerability Database. These SBOMs from Yocto capture what's known at the point of build, generally. However, vulnerabilities are found every day.
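That kind of matching leans on SPDX's relationship and external-reference fields. As an illustrative sketch (the SPDX identifiers and the CPE string here are invented, so treat the exact spellings as assumptions to check against the spec):

    # Build vs. runtime dependencies, and binary-to-source traceability
    Relationship: SPDXRef-gcc BUILD_DEPENDENCY_OF SPDXRef-busybox
    Relationship: SPDXRef-glibc RUNTIME_DEPENDENCY_OF SPDXRef-busybox
    Relationship: SPDXRef-busybox-bin GENERATED_FROM SPDXRef-busybox-src

    # A CPE external reference lets tools match the package against NVD entries
    ExternalRef: SECURITY cpe23Type cpe:2.3:a:busybox:busybox:1.35.0:*:*:*:*:*:*:*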
So you might want to monitor things over time, to see whether something has become vulnerable at some point. What they can do right now in Yocto and OpenEmbedded is: pretty much anything they can build, they can generate an SBOM for. I think that's where we need to be everywhere in the industry, and the embedded space can certainly do it; you have a proof point here. C, C++, Fortran: language doesn't matter. As long as you can build it, you can generate it. Host build tools, kernel images. In fact, Yocto is now the first place where you can actually build an SBOM for the Linux kernel that ties you to exactly the source files that made it into that kernel image. When you think about the size of the Linux kernel, it's good to know exactly which files made it in, from which configs, and be authoritative; it's not just the kernel version and all the sources. You could have an SBOM of all the sources of a certain kernel version, but knowing exactly which files made it into your image lets you know whether or not you have to deal with an issue. SDKs, container images, VM images. And there's work going on for Rust and Go right now. If people are interested in working on this stuff, please reach out to Josh, to Saul, to Richard. There is active interest in taking this forward and making it available for everyone who does work based on Yocto.

To summarize the features they're using from SPDX: the declared license, homepage, download location. They generate the CVEs fixed; SPDX is adding that into the standard, but there's a way of doing it right now. So you can link to the CVEs, and you can link to the CPE information explicitly: if there's a CPE associated with the package or product in the National Vulnerability Database, you can be explicit about matching it, so that you can monitor over time. There's summary information and description information. All the checksums of all the sources are there, so you know nothing has changed out from under you: I'm authoritative, and if I look at the source, does it match? The other thing the Yocto Project is doing is fully reproducible builds, which is another security mechanism that can now be engaged. And then the sources of all the licenses are there, the packages with checksums, which ones generated what, the build-time dependencies, the runtime dependencies, and the source code archives. Those are the features they're pretty much using out of SPDX at this point.

One of the things I'm personally very interested in is understanding: are there use cases we can't represent with SPDX? Are there relationships missing? We found a few. Some of the specification people wanted a way of saying "this is a specification for something" and "this is a requirement for something," and that's starting to head us towards safety cases. That made it into 2.3, and there will be more along those lines coming down the road.

So, SPDX had an embedded start. It came out of people at Freescale, Wind River, MontaVista, and Motorola wanting to exchange information from their scans for licensing, because I spent way, way, way too many weekends as a manager reviewing packages to understand the licenses for my SDKs, so that my lawyers would let me ship them for the new silicon revs.
And I knew my colleagues at MontaVista and my colleagues at Wind River were doing the exact same exercise, and we wanted to share the information. So we needed a format in which we could start to share it. But when you're sharing that information, you need to understand the relationships between things, the dependencies. We may have been doing this for licensing initially, but it gave us a wonderful foundation for all the stuff we're looking at for the SBOM definitions today. It's been over ten years now, evolving use case by use case, hence my desire to hear about your use cases if we don't cover them.

And it became an ISO standard last year. We spent about a year moving the specification into the right format, took it through ISO, went through formal international balloting, and it has been accepted as an ISO standard. So it is available, it is working, and your procurement people can specify a standard in their procurement documents if you want.

There are a lot of companies, and more have been added since this slide, who've let us use their logo because they're using SPDX. Microsoft is very active in the generation of the next rev of the spec. Google's using it as well. The Eclipse Foundation is working with it, as is, obviously, the Linux Foundation. We have a variety of scanning companies, and Scania is publicly saying they're using it, similarly with Siemens, and VMware, and Wind River of course. So we've got a pretty good swath of this part of the ecosystem already using it. And any company that's already a Linux Foundation member: if you want to see your logo up there, you just join in, and it's free. There's no cost to do it; it's just our way of being able to track a logo. So if you want to help and support the project and let us use your logo, please click on the join button and we'll bring you in.

We changed our governance in the project last year so that we could do this type of model. We've got two seats on our steering committee for our members, and every year the members elect whoever's going to sit on the steering committee with the rest of the team. Otherwise, there are leads for the various teams: the tech team, the legal team, the outreach team. That's how the project is structured.

And as you can see, there are a lot of relationships defined, and there were two more just added. This is the type of information you need in order to know: do I have to pay attention to this or not? These relationships all came from use cases, from requests from people saying, "I want to say this file has been added, or deleted," or "this is a patch for that," et cetera. We're going to be revising and reworking some of our relationships. They were initially one-directional, and in the next rev of the spec we'll have the relationship plus a way of signaling which direction it goes. So this list will probably shrink down a bit and crisp up a bit; that's what I'm expecting to happen. Anyone who is interested in this type of work, or who's not seeing what they want: please reach out and write up your use case.
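For a flavor of that relationship vocabulary, here's a small invented example (the SPDX identifiers are made up, and the direction semantics of each relationship should be checked against the spec):

    # A file added to, and a file deleted from, a later package revision
    Relationship: SPDXRef-File-foo.c FILE_ADDED SPDXRef-Package-v2
    Relationship: SPDXRef-File-bar.c FILE_DELETED SPDXRef-Package-v2
    # A patch that applies to a package
    Relationship: SPDXRef-File-fix.patch PATCH_FOR SPDXRef-Package-v1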
And if we can't figure out how to do it with what we've already got in there, we'll start discussions about what we need to extend. The whole concept of having that source SBOM and then showing what's generated from it is becoming pretty popular for representing things like vulnerability systems and so forth. And we're writing tools out there: Vanna, I think, had a talk the other day about the merge tool she's working on, where she takes SBOMs and merges them into one big SBOM. I think we need to flex: some people need that, and other people want a relationship between things every time, because things will change organically as tools run and builds happen. But the fact is, we can represent this. We can say: does it contain the source file? What generates what? What's generated from what? Is it a dependency? And so forth.

So this is the ISO specification, and one of the things we spent a fair amount of time on was making sure it's available for free. There's an electronic version: you can click on that link, and both OpenChain and SPDX are freely available. We want people to be able to access them and use them easily; they are not behind a paywall. Certainly, you can support ISO and get it through them too, as ISO/IEC, but it is available openly, because this is meant for open source communities to work with. I'm running into all these standards that people refer me to these days in the safety space, and it is a bit of a barrier that I have to spend multiple thousands of dollars to access some of the safety standards. We need to get to the stage where open source is effective for safety-critical applications, because a lot of these applications are happening in safety-critical domains. Making it easy for open source developers to work with these standards is something I'm very passionate about. The version that went through ISO is 2.2 (strictly, 2.2.2).

We just finished putting out, last month, the 2.3 version of the specification. We added a few more fields: specifically a build date, a release date, and a valid-until date, meaning end of support or end of life, because people wanted to track that. That came in from use cases in the automotive industry in Japan, and things like data sets also have a valid-until element, after which you have to refresh. You want to put this type of information into your monitoring system, so you know: hey, this is going end-of-life, I want to look at it, I want to understand more. These are all useful things; there's a small sketch of these fields below. We also extended the set of hashing algorithms. And, due to popular request about the size of these files, a field with no assertion no longer needs to appear in the file; it's assumed now, and the lawyers signed off on that. That shrinks the file sizes down considerably if there's no licensing information you're trying to carry. Those two new relationships were added, for requirements and specifications. And then we clarified a lot of things; some of the profiles were extended to add the NTIA minimum fields, the mandatory ones.
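In tag-value form, those new 2.3 package fields look roughly like this. The package and the dates are invented, and the exact tag spellings are my reading of the 2.3 spec, so verify them there:

    PackageName: sensor-firmware
    PackageVersion: 4.1.0
    # When it was built, when it was released, and when support ends
    BuiltDate: 2022-08-01T00:00:00Z
    ReleaseDate: 2022-08-15T00:00:00Z
    ValidUntilDate: 2025-08-15T00:00:00Z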
And we documented things better. There's also a whole new appendix on how to work with SPDX for security: how to link things together, how to refer to vulnerabilities, how to refer to advisories, et cetera. So if you're working in the security space, there's a whole appendix on doing it for that use case. That's roughly the state of the land. The tooling is starting to move over to using some of this, and there's a specific version recorded in each document, so you know exactly which version of the spec your tooling is using, and we can go forward from there.

So, to wrap up: if you want to learn more about the SBOM guidance and the minimum fields, I'd go over and look at the NTIA.gov site. Most of the energy is now moving over to the CISA site; that's where discussion is happening about extending SBOMs, linking consumption tools, things like that. There are people motivated and interested in talking there.

For Zephyr, there's a variety of links; I'll be uploading these slides after this talk. We just had a Zephyr Developer Summit in June this year, and Steve Winslow, who did the work of putting the instrumentation into the CMake and west build environment, gave a talk on it and on some of the directions, if you're interested. If you're interested in adding this into your own build environments, please talk to me; if you have any questions about how to use things, I'm happy to answer, as are other people in the SPDX community. Similarly, on the Yocto side, create-spdx has been there since release 3.4, and there are a couple of presentations well worth watching from the people who did the work. And for SPDX, our site is there, and from that site you can figure out how to participate in the project. A lot of the material is up on GitHub as well; we have a GitHub site, and there's orientation information about where the meetings are and so forth. We've got about five or six meetings a week now between the tech teams and the various profiles, and there are usually about 15 to 20 people showing up on the tech calls. So there's a large range of participants, with a large range of expertise, having a lot of discussions. The documentation for the spec is there too. And with that, I will say thank you very much; hopefully you found this interesting. Go for it, Tim.

So, Sebastian Crane, who is on the outreach committee, has been working with the Debian community. The Debian community has recognized the SPDX license identifiers for a long time, but they've got their own DEP-5 format, and there are a lot of tools out there that generate DEP-5 by doing SPDX first. Getting this into the whole build flows is probably a piece of work, but Sebastian's looking at it, and anyone else from the Debian community who'd like to work on this and wants more information: I'm happy to talk and connect people.

Any other questions? OK, go for it. Yeah, I really would love to get that; I just need maintainers who want to work on the problem. If we can get this into GCC and binutils and ld, we'll catch way more of the ecosystem. And not even just that; GCC can produce bad code too. Oh yeah, no. So there's actually a build profile working group that meets on Mondays.
What we're doing there is articulating what the evidence is and what relationships are missing for us to express this type of information. So if you're interested in that build problem, I'm happy to loop you in; just come tag me or send me an email saying, hey, how do I get involved with the builds, or how do I get involved with the AI profiles? Because training data sets are another area. If you're interested in AI, come to my talk a little later today; I'll be presenting with Karen Bennett, and there's a group of us meeting to talk about what's missing for representing AI effectively with this type of information. And there's a Go piece there too, I see. Some of those people are here in the audience as well. If people are interested in the AI problem, like the autonomous driving issue, we're going to need crisp clarity: these are the data sets we trained on, and here is where an issue could come from the data sets as well. We need to be able to express that, and we're working on that problem right now.

OK, and I've got to wrap it up; I'm getting that signal. Thank you again.