So I kind of missed the morning roundtable introductions, so let me introduce myself; some of you know me, and I've seen a lot of you here. By the way, it's great to be back in person with a lot of people we've only been on the phone with. I'm Gary O'Neill. I've been involved with SPDX since the beginning with Kate, and I'm usually known as the tool guy. I wrote some of the initial tools; one was supposed to just be a pretty printer, but now it's thousands of lines of code and does all kinds of stuff. One role that I play in SPDX 3.0 is to represent the community that is using SPDX 2.3 and think about compatibility. So if there's something that just absolutely won't work in 3.0, I'll raise a huge red flag; if there's something that makes migration hard, I'll raise a yellow flag. With that as the perspective, I'm going to go through a comparison of 2.3 to 3.0 from a migration standpoint. Those of you that are new to SPDX, I'll apologize in advance: this is probably going to sound a bit tedious. For those of you that are using SPDX 2.3 today, I think you'll find it a bit more interesting, but I'll also try to point out some of the more fundamental, conceptual changes in the model, especially at the beginning, to give you a little more context for what's being changed. So to start out, a lot of this was covered already, but why the changes? You've seen some of the additional use cases already presented. Clearly, they need additional fields, and they also drive some of the structural changes. The next major reason for change is to simplify. The number one piece of feedback we've gotten from the very beginning, from the very first survey we ever sent out on SPDX, when we asked people what their biggest concern about SPDX was: it's just too complicated. And I think a lot of that relates to the large number of use cases we support.
And so if you want to do licensing, you've got these fields. If you want to do security, you've got these fields. If you want detailed source information, you've got these fields. So when you look at it in aggregate, it looks really, really complicated. We made some structural changes that we believe will simplify this, so that was definitely one of our objectives. Now, I will admit that at the same time we've been simplifying, we've been making it more complicated, because we've been adding more use cases as well. I'm hoping the net effect of all this is that it's simpler and it covers more use cases. And the last reason is flexibility, and this one actually has a fairly substantial impact on the structure of SPDX. Let me give you an example about flexibility. Today in SPDX 2.3, if I want to send SPDX information to you, I have to include it in an entire document. I've got to wrap everything up in one document and send it to you. We had one person in the SPDX community saying, hey, I'm going to produce an SBOM on every Git commit, and it's only going to be these little teeny changes. So how in the world can I do that if I have to reproduce every single element, every single file, and then have these amend relationships to tie it all up? That's just crazy. So we've made some structural changes to make it a lot more flexible, and I think in the end a lot more scalable too. One thing you're going to see in SPDX 3.0 is much larger implementations of SPDX, with much larger numbers of elements, which I think is pretty exciting. So let me go over some of the structural changes. The first one is profiles, and there was a good question raised earlier on profiles. Profiles were introduced really for simplicity.
Now profiles can be a little confusing, because we're really talking about three different things when we talk about profiles. The first is conformance requirements. If I'm giving you an SBOM and I'm telling you this SBOM can be used for security, that it supports the security profile, it had better have the minimum requirements for security: I'd better be able to correlate it back to a database of vulnerabilities, and I'd better have the version information for the packages. These things may or may not be interesting to people who want to use the licensing profile. But if I'm using the licensing profile, I'd better have a concluded license and a declared license. You can think of these as conformance points; one of our contributors from MITRE introduced that term to us. So these conformance points are part of a profile. A profile is also a namespace. What I mean by a namespace is that if you look in our Git repository, there's a directory for each profile, and each is going to be a different namespace. It helps us organize the information, so that if you're interested in something you can search on that namespace and ignore the stuff you don't want to see. And the last thing is actually how we organize ourselves: you'll see different teams, different work groups, around the different profiles. So that's the way profiles work. We are introducing a mandatory new field which tells you which profiles you're supporting. At a minimum you would be supporting the core profile, and most likely the software profile. Beyond that, all the other profiles are really the choice of you as the producer, or of you as the consumer requiring them of your producers. Another structural change is the external document references. In SPDX 2.3, and I know this is going to be the most tedious slide, I apologize, especially to those of you who are not in 2.3.
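As a rough illustration of that mandatory profile-conformance field, here is a hedged sketch in Python; the property names and profile identifiers are assumptions for illustration, not the normative SPDX 3.0 vocabulary.

```python
# Hypothetical sketch of the new mandatory profile-conformance field on a
# document's creation information; field names here are assumptions.
creation_info = {
    "specVersion": "3.0.0",
    # Core is always declared; Software almost always; the rest are the
    # producer's choice (or what the consumer requires of producers).
    "profile": ["core", "software", "licensing"],
}

def conforms_to(info: dict, profile: str) -> bool:
    """True if the document declares support for the given profile."""
    return profile in info.get("profile", [])
```

A consumer could then gate its processing on `conforms_to(creation_info, "security")` before expecting security-profile fields to be present.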
In 2.3, if you want to reference an element that's not in the SPDX document, you basically have an external document ref that points outside of it. In SPDX 3.0, we've moved this down into each individual element, so it can actually support references externally. And we've split it into two different classes; there's not one class anymore, but two. One describes the imports. Basically the imports will say: this is where I got it from, this is the package, this is a checksum of the SPDX document, enough information to know where it came from. And the namespace map gives you the information about the shorthand references within the document itself. So we've separated it out into two different classes: the functionality that makes it more readable, and the functionality that makes it verifiable. All the functionality is still there, but it is a structural change, and it will have an impact: if you have tooling, you'll have to adjust it to the new format. Relationships. I think this is one of the most impactful changes, and there was a question on relationships as well. Relationships are important to us in the SPDX community. In SPDX 2.3, a relationship was a property of the element. So when I produce an element and deliver it to you, I tell you what all the related items are. The problem is that if I want to change that relationship, I have to create a new element. Because once you create an element... okay, let me go off on a little tangent, because this is a very important point for SPDX. Once you create an element, it's done. Once you deliver that element, once you ship it, we call it minted. It's like you've minted it, and you cannot change it again, because people are relying on that data. So that element is minted. Now I want to add a new relationship. Oh, I've got to go create a whole new element with all the same information. So let me pause there; I see a couple of looks. Does that make sense? Yeah, that's a very important concept.
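A rough sketch of that readable/verifiable split, with illustrative (not normative) field names: the namespace map handles compact shorthand, while the import entry carries enough provenance to verify where an external element came from.

```python
# Illustrative sketch of the two classes replacing SPDX 2.3's single
# external document ref; all field names here are assumptions.

# Namespace map: readability. Binds a shorthand prefix to a namespace IRI
# so references inside the document stay compact.
namespace_map = {"acme": "https://sbom.acme.example/spdx/"}

# Imports: verifiability. Says where the external element came from and
# carries a checksum of the defining document.
imports = [{
    "externalId": "acme:pkg-openssl",
    "locationHint": "https://sbom.acme.example/openssl.spdx.json",
    "verifiedUsing": {"algorithm": "sha256", "hashValue": "ab12cd34"},
}]

def expand(short_ref: str, ns_map: dict) -> str:
    """Expand a prefixed shorthand reference into a full IRI."""
    prefix, local = short_ref.split(":", 1)
    return ns_map[prefix] + local
```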
Okay, so in SPDX 3.0, we've moved the relationship out to be its own element. Instead of being a property of an element, it's its own thing. You can now create a new SPDX document using the external refs I was talking about a slide ago and point from one element to another. So if I want to introduce a new build relationship, or a new security relationship like VEX information, I can just drop in an element, and I don't have to change a thing on the from and the to. Everything's updated. It's very important for scalability. Make sense? All right. There are a few other miscellaneous changes; those were the big ones, by the way. Entities, which I think we're going to rename to agents; this may get renamed, but we've restructured it. Right now in SPDX 2.3, we have a string field for a person, tool, or whatever. It's just a string, and you can parse it or deal with it as you wish. We're making it much more structured in 3.0, where we've separated out identities that have some kind of naming authority, so you can go to an authority and know that's who the person is, from cases where it's just the name of something or an email that you have. It's much more structured there. The file type enumeration has been replaced by two fields, so it's a little bit structural. We have a media type, which basically uses the standard media type strings; we've moved completely to that. And the purpose part of the old file type enumeration has moved to the software purpose field, for the purpose of the file. So we've separated the purpose from the media type and moved to complete standardization. We're not inventing our own media types anymore, which is kind of nice. Package file name and package checksum have been replaced by a relationship from a package to a file; again, this improves the flexibility. External identifiers.
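To make the drop-in idea concrete, here is a hedged sketch of a relationship as a standalone element; the type names, relationship type, and property names are assumptions for illustration, not the normative SPDX 3.0 vocabulary.

```python
# An already-minted element from an earlier, immutable document.
package = {"spdxId": "urn:ex:pkg-libfoo", "type": "Package", "name": "libfoo"}

# A later, tiny SPDX document can mint just this one new element; the
# minted package on the "to" end is referenced, not modified.
vex_relationship = {
    "spdxId": "urn:ex:rel-1",
    "type": "Relationship",
    "relationshipType": "hasAssessmentFor",   # e.g. new VEX information
    "from": "urn:ex:vex-statement-1",         # hypothetical VEX element
    "to": [package["spdxId"]],
}
```

The point is that adding `vex_relationship` required no change to `package` at all, which is what makes incremental, per-commit SBOMs feasible.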
We've split that into two different things. We now have an external identifier, which has this kind of naming authority associated with it, and an external reference, which doesn't. Some fields require an identifier for provenance or security reasons, and some fields are okay just having a reference, so some of the use cases require that distinction. Package URLs: any fans of package URLs here? Okay, one, two, okay, good, good. It's now a top-level property. We've elevated it up from being just one of many external identifiers or references to being a property of every element. So it's easy to find, and if you want a primary identifier, you can almost use it for that in SPDX 3.0. Annotations and relationships are independent elements. I already talked about relationships; annotations work the same way, so you can add annotations after the fact. And snippets have been simplified. I won't go into the details on snippets, because quite honestly not that many people use them. Maybe after Jeff's talk more people will, I hope so, but we've simplified them. Now I'm going to go through some more details of the migration. Here are some classes and properties that are removed. Files analyzed: those of you familiar with 2.3 know this is the most hated property in SPDX, and everybody is happy to get rid of it. So that's gone. The functionality is actually replaced by profiles; it's still there, it's just covered by profiles rather than through this property. The license info in files property is removed right now, but there is an issue to bring it back in; the proposal is that it's redundant and no longer needed. And I think there's one that I'm missing on the properties removed; yeah, I think those are the main ones. A number of properties got renamed. Some of these, I'll be honest with you, I pushed back on and wasn't successful.
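Here is a hedged sketch of the identifier/reference split and the promoted package URL property; field names and enum values are illustrative assumptions, not the normative spec.

```python
# External identifier: backed by a naming authority, usable where
# provenance or security demands it.
external_identifier = {
    "type": "ExternalIdentifier",
    "externalIdentifierType": "cpe23",
    "identifier": "cpe:2.3:a:acme:libfoo:1.2.0:*:*:*:*:*:*:*",
}

# External reference: just a pointer, with no authority behind it.
external_reference = {
    "type": "ExternalReference",
    "locator": "https://acme.example/libfoo",
}

# Package URL, elevated to a top-level property of the element itself,
# handy as a de facto primary identifier.
element = {
    "spdxId": "urn:ex:pkg-libfoo",
    "packageUrl": "pkg:pypi/libfoo@1.2.0",
}
```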
So if there's anything up here that gives you heartburn, feel free to raise issues. The first one I definitely agree with, which is external document ref. That was really confusing, because we have external reference and external document ref, and they mean two totally different things; that's caused a lot of confusion, and we've gotten feedback on it. So that's fixed. We've changed package file name and file name to just name as part of the model. We've changed version to package version. And then there are a few licensing name changes that go with the with-addition operator that was presented earlier: extracted license info becomes custom license. And actually I think we're changing package purpose back to primary package purpose; at least I put in a request for that. That one may be a structural change; there's an open issue right now. In SPDX 2.3, we have a package purpose. One of the tools I support is a translation tool between CycloneDX and SPDX, and you need a package purpose to be able to translate. CycloneDX has this thing called a type, which we call a purpose, and there's only one in CycloneDX. So if I have four of them in SPDX and one in CycloneDX, which one do I pick? So we need a primary package purpose. There are some in the community that would like additional purposes to be expressible, so we may end up with two properties, one primary package purpose and one package purpose. That's yet to be decided. And the last renaming is specific to the JSON serialization. Right now in JSON, all of the arrays are pluralized: in the model we'll have a property X, and then in the JSON we'll say property Xs, to make it plural. That's gone. We're just going to use the singular. It may read a little odd to some people, but we decided it's more important to be consistent within the SPDX serializations than to attempt a complex algorithm for pluralizing names.
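The singular-naming rule is easiest to see in a tiny example; the property names here are illustrative, not from the spec.

```python
# JSON property names stay singular, matching the model exactly, even
# when the value is an array.
element_json = {
    "spdxId": "urn:ex:pkg-libfoo",
    # A 2.3-style serializer would have pluralized this to "externalRefs";
    # 3.0 keeps the model's singular name.
    "externalRef": [
        {"locator": "https://example.com/a"},
        {"locator": "https://example.com/b"},
    ],
}
```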
There are a whole bunch of classes and properties added. I'm not going to go over these, because quite honestly they're very related to the prior presentation, so I think you got a much better overview there of the things being added. But these are all to support some of the new use cases and profiles. So that's my detail of it. Backing up a little to the big picture: these changes should provide more flexibility, especially with the new relationship structure, and provide better support for scalability. I'm hoping the net result is a simpler SPDX implementation. And as always, we're adding new use cases with the profiles. By the way, I'm very excited about the AI work that's going on; I think it's so cool that we're supporting that in the next release. And then just write this down in your notes, and we'll upload these slides so you'll be able to copy and paste it: there's a living document where we're recording all of the migrations, with very specific recommendations on how to upgrade from 2.3 to 3.0. The structural changes are in there, all the renames, and as we make changes between release candidate one and release candidate two, we'll be updating that document as well. So that's what I have. Am I the last presenter? Good. Thanks for staying awake for the last presentation. Very simple question: why is that final document not Markdown? Why is it still Google Docs? I don't know, that's just what I picked. Good question. We should move it to Markdown though; I think you're right. And tie it to the release. Yeah. But it's a good point, point well taken. Well, Gedi, any thoughts about creating assurance in the SBOMs during the development process? Have you seen any best practices for how people collect the information that goes in the SBOM fields? So when you say assurance, can you describe a little more what is being assured?
So the elements you're presenting, right? It's the last mile; whoever generated it, you just trust them, right? Ah, gotcha. So yeah, there are a few things, and this isn't really unique to SPDX 3.0. There's one part of that problem we've tackled reasonably well. Actually, I'd say there are two out of three parts we've tackled, and one we've decided not to, when it comes to assurance. One is we have creation information in the 2.3 and 3.0 specs that fully identifies who created, who's responsible for, that SPDX document. In 3.0, the creation information is now tied all the way down to the element. So you can have a document and actually have different creation information for the different elements, but you know who created each and every one, when they created it, and all of that. The second thing we did tackle is that if you're referencing an element that's not in your document but external to it, there are verification mechanisms to make sure that document wasn't tampered with. So we have verification checksums, and we also have verification checksums and hashes for the artifacts that are pointed to, or represented, by the metadata in the SPDX document. So if you can actually get hold of that artifact, you can checksum it and compare against the checksum value in the SPDX document; that's all there. The one thing we did not tackle: there's no digital signature on the SBOM itself. That was a conscious decision, because once you get into that, you need a certification authority, you need a whole infrastructure to support it, and we thought it would be best left to other standards. So we'll produce this chunk of bytes, and then we encourage other standards communities, Sigstore and others, to actually wrap that in some kind of digital signature standard. I think there's a lot of interest in including that; we just don't want to duplicate what other standards have done.
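That artifact-verification step is simple to sketch: hash the artifact you obtained and compare it to the hash recorded in the SPDX metadata. The `verifiedUsing` field name is an assumption for illustration.

```python
import hashlib

def verify_artifact(content: bytes, verified_using: dict) -> bool:
    """Recompute the artifact's hash and compare to the recorded value."""
    digest = hashlib.new(verified_using["algorithm"], content).hexdigest()
    return digest == verified_using["hashValue"]

# What the SPDX metadata might record for the artifact (sha256 shown).
recorded = {
    "algorithm": "sha256",
    "hashValue": hashlib.sha256(b"artifact bytes").hexdigest(),
}
```

Any change to the artifact bytes makes the comparison fail, which is what catches tampering.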
So I have a question about the build profile. Is there support for self-attestation for a secure build? There's a new NIST requirement that people who are selling software to the government must self-attest that they have a secure build. Is there a way to do that in the build profile? Yeah, so the NIST SSDF attestation, I think that covers a little bit more as well, but I'll talk at least to the part isolated to the build. I think we don't explicitly say it, because the attestation format wasn't even published at the time of writing. But I think we could see that as a use case where you could include it as an external reference as part of the build element for that particular component. In your first slide about structural changes, you mentioned conformance, namespace, and organization. What is the purpose of organization? I didn't catch that. That's just internal to the SPDX community; it's how we organize ourselves. We have a weekly meeting for the core team, and then we have bi-weekly meetings, every other week, for the profiles. I just mentioned it because some people get confused, like, why are you calling this meeting the profile? It's part of how we organize. The build, yeah, the noun. Yeah, exactly. So I was curious generally, and maybe this is a very general question: for the consumers of these SPDX documents, have you ever considered them to be end users? I'm thinking of people who receive, say, a consumer electronics device. Is that something they can review to see, oh, what's in here, what kind of code can I get, that sort of thing? Is that an intended use case? Yeah, so yes is the short answer. We do intend for it to be delivered to end users.
There is an ongoing debate in the community about the serialization formats and the trade-offs between human readability and machine readability. But there is one format in 2.3, and it'll be interesting to see if it's feasible to support it in 3.0, I think it is, which is the spreadsheet format. We talk about tag-value, which is a text format, we talk about JSON, we talk about all these others, but I'll tell you, in my job I deliver SPDX documents to end users, and it's in the spreadsheet format, because lawyers love spreadsheets. So that's how I send it out, so yes. Going back to the first presentation from Steve: I noticed that one of the operators mentioned in expressions was a unary or-later operator, the plus symbol. My understanding was that earlier this was deprecated, because license IDs are just strings that represent licenses. So does this mean there's now a recognition that several licenses in the list are successive revisions of the same license? Yeah. Gary, I can speak to that if... Oh, go ahead. All right, I don't know if I'm still on the audio. Hey. You are? Yes, I'm just looking around for where the voice is coming from. Okay. Go ahead, Steve. Yeah, so just to respond to that: that plus operator, the or-later operator, just to be clear, has always been present, or at least has been present for years and years. What it's meant to signify is that the applicable license is the corresponding license that's specified, or any later version of it. As far as SPDX is concerned, if you look at the SPDX license list, there isn't any sort of syntactic meaning we're defining to say this license is a later version of that other license. SPDX isn't in the business of saying this version 2.1 is later than that version 1.3, or other ways of specifying it.
It's more meant to signal to the consumer of the SBOM that the concluded license or the declared license is this one or any later version. There are nuances around it, particularly when it comes to some of the GNU licenses, due to requests that came from them and the FSF about how to communicate only-or-later in the context of the GPL, LGPL, and so on. So short version: it's complicated, but that plus operator has always been there, and we're signaling it will continue to be. So Kate had mentioned that with 3.0 you might go to ISO to get it certified. What are the criteria for making that decision? Boy, do you want me to try answering that? Okay. Yeah, I think it's stability of the spec. It's a lot of effort to go through ISO, and we want to make sure we don't have to go through it too frequently. So once it's fully stable and we feel like the rate of change has come down, I think that's when we'll go through ISO. We go through the PAS process, which means we're in the field and in use; that's the path we take into ISO. That's how we did it with the 2.2 spec, and we'll probably do the same thing here. Our challenge is going to be moving from the model version we've got right now to updating our docs; that will be a nice little challenge for us, so it won't be quick. I want to go back to one of the questions the gentleman asked about human-readable formats, and you mentioned you love spreadsheets. Is there another way to present it? Because I see there will be a lot of consumer electronics, especially in the healthcare sector, that may require some visibility into what's in there, especially vulnerabilities, before somebody can start using it for critical healthcare needs. Yeah, yeah.
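Mechanically, a consumer can detect the unary "+" operator with a trivial string check. Note that the GNU family instead uses dedicated "-or-later" identifiers, and, as Steve says, SPDX never defines which license counts as "later"; the operator only signals intent.

```python
def is_or_later(license_id: str) -> bool:
    """True if the expression carries the unary or-later operator."""
    return license_id.endswith("+")

def base_license(license_id: str) -> str:
    """Strip the or-later operator, leaving the base license ID."""
    return license_id[:-1] if is_or_later(license_id) else license_id
```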
And, you know, we're definitely considering continuing to support tag-value, and that's ongoing; I don't think we've come to a resolution yet, but it's definitely an ongoing discussion. That's kind of the next step for SPDX, by the way. Now that we have the model kind of solid, still being reviewed, you still have an opportunity to influence it, we're going to start shifting our focus to the serialization formats. I'll give you my personal opinion on this, and I know it's going to invoke a reaction from some of the SPDX community, but my personal opinion is that having a tag-value type of format makes a lot of sense for certain profiles, but for some of the other profiles, which are very complicated, trying to force that into a flat two-dimensional structure is going to be extremely difficult. So my opinion is we support it, but we limit which profiles we support it for. We'll see, I don't know. Go ahead, Brad, we've got a comment. Yeah, actually I would say that is a great opportunity, if there are any startup companies out here, to actually implement UIs for these. What's nice about SPDX 3.0 is that it's easily encoded into a database format, so there's a schema right there for you to implement, and we're working on serializations so you can convert from your database schema, make SQL queries, get JSON, and then display that JSON in something written in React.js. So these are all things that can be pretty easily implemented, and my humble opinion is that you will no longer want to look at spreadsheets; there'll be some pretty UI that you would rather look at. My question is more on the defect side. One of the use cases I'm thinking through is that I'm scanning for vulnerabilities, but I may usually throw a couple of scanners at it.
And in the decision making, it's helpful to look at the evidence of which scanner a particular finding came from. Would that be encodable today, or how would you encode that? Yes, it is. So you want a way to communicate which scan picked up the CVE? Yeah, I think so. As the evidence? It's in creation info. Yeah. Thomas, go ahead. Yeah, so there are optional fields. In SPDX 2.3, you had the creation info field where you can basically say which tool created the whole SBOM. In SPDX 3.0, that's also on every element. So optionally, for every element where you have vulnerability information, you can basically say this was created by this tool. And there's also a relationship, so there are multiple ways to do this, but it is possible to say exactly which scanner found what. I'd need to look at the licensing profile, because no two are exactly alike, but again, only now for the first release have these two come together, and we can make sure that both mechanisms work the same. It's possible to show multiple, because I've personally worked on a lot of SBOMs where we have multiple security scanners and multiple license scanners feeding information into the SBOM, and it was always tricky to tell which one is which. And now, especially for security, we're introducing a lot of different agents, as we call them now; I always keep using the word actor. So basically you can say exactly that this tool, organization, or person did a particular thing. That's all possible, but it's all optional. If you want to, you can include it; if you don't want to, you don't have to. I think you could also use something like a published-by relationship besides the creation info. But then that would be...
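To illustrate the per-element creation info Thomas describes, here is a hedged sketch; the field names and IDs are assumptions, not the normative vocabulary.

```python
# With creation info on every element in 3.0, each vulnerability record
# can carry the identity of the scanner that produced it.
findings = [
    {"spdxId": "urn:ex:vuln-1", "name": "CVE-2024-0001",
     "creationInfo": {"createdBy": ["urn:ex:tool-scanner-a"]}},
    {"spdxId": "urn:ex:vuln-2", "name": "CVE-2024-0002",
     "creationInfo": {"createdBy": ["urn:ex:tool-scanner-b"]}},
]

def findings_from(elements: list, tool_id: str) -> list:
    """Filter findings down to those produced by one scanner."""
    return [e["name"] for e in elements
            if tool_id in e["creationInfo"]["createdBy"]]
```

This is exactly the "which scanner said what" evidence the questioner asked about, recovered with a simple filter.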
So that would be on the relationship though, because the relationship of vulnerabilities, a package affected by a vulnerability, is anchored in the relationship and not the individual element. Yes, but you can... This is the weird thing people really have to get their head around: relationships are elements. So you can link to relationships; you can point with relationships to relationships. I know graph databases don't like that. So I guess that's okay for the data model, right? That's like... This is relationships on relationships. We moved the relationship to be an element, and by doing that, relationships are allowed to have relationships. I had a debate with one of the other community members, William; it was like, nobody's ever gonna use relationships on a relationship. And then these things come up, and it's like, oh man, maybe I'm wrong. Let's make it more complicated though. So we had this discussion in the implementers group, and I was tasked to communicate that we're not very happy about that. Having relationships pointing to other relationships is going to break a lot of internal representations in existing tools. So if we can't constrain them not to... I mean, I know this opens a whole can of worms, but I'm just voicing the opinion of that group. Is it having an undirected graph? Is the issue cycles in the graph? Yeah. By the way, cycles in the graph are possible in 2.3 right now too. So. Yeah, but you can de-dupe them, right? You can de-dupe them. So, yeah. So before opening up for other questions like this, I would like to know: how many people here are working on SPDX tools or SBOM tools in general? All right, around half or a little bit more. How many people are new to SPDX or to SBOMs? Okay, all right. So if at any point it gets boring or you need more clarification, please ask. We've got another question here. Thanks, I've got a question about tooling.
Now the previous talk mentioned a couple of examples of builds on GitHub Actions and some Yocto builds, and how the metadata you need to build, I guess, a build element was already there for GitHub Actions. But what's the state of tooling for actually getting that data out into a usable form? Is it still all custom work that has to be done in all cases, or for some of the cases, like Yocto and GitHub Actions, are there tools out there that do the heavy lifting for you already? Yeah, I think that in terms of tooling implementing some of this, at the high level the inputs and such are all captured, but there definitely will be some fields that need a little bit more instrumentation, or a little bit more intention in the tooling. Another path forward is via other standards like SLSA that are also being pursued. The idea we looked at in this group is looking at SLSA and reproducible builds; there's a lot of information already out there. If any build tools do produce those kinds of documents, we would then be able to do a direct translation to the build profile as well. So in general, there's a good amount of information captured by build tools which meets some use cases, but not others. I think most of the use cases can be met with information available today. If you're looking at something like safety, then it may require a little more instrumentation, like using ptrace to make sure you're capturing every single command that's being run. But yeah, I think that's the approach. And Kate too. So one of the things we are doing is we have reference libraries for producing and consuming SPDX documents, for Python, Go, and Java. If you hook into these libraries with 2.3 today, we'll be making sure they're available for 3.0.
So if you want to abstract away and minimize your burden going back and forth, and be able to consume, I'd say hook into these libraries that we're working on putting out as part of the community, and that should give you an accelerator. In addition to what's happening in Yocto, I'll also mention that in Zephyr, we've been doing the generation directly out of the build. The information is there: every time you build an image, you can have, in effect, your SBOMs, multiple SBOMs that relate to each other, created. And I can point you to a dashboard afterwards that already has over 400 boards across six apps; anything that says built or passed, you can download the SBOMs as part of the images. So it's eminently doable. Yocto is also showing that you can do it with your toolchains, because in Yocto you basically build your toolchain, which then goes and builds the images, the libraries and so forth, and then assembles it. So for what we're going to need for safety, you can see elements of it happening already today, and I think with 3.0 we'll be able to take it even further. Yeah, so just to add to what Kate said: in 2.3 today, GitHub is producing SPDX, Zephyr, Yocto, and we have a Maven plugin. And we've just introduced, what is it, help me out, Brandon... Maven and Gradle; we've got a Gradle plugin that produces it. So those particular build environments are covered with tools that are available today, and many more. And if any of you are in a particular tooling ecosystem and want to contribute, we'd love for you to contribute open source in this area; just talk to me and we can look at bringing it in. By the way, Kate mentioned libraries. Anybody use JavaScript or TypeScript? Okay, any interest in producing a library for us? That's next on my list, so feel free to contribute. Gary, maybe to add, we're missing one more thing. Can you hear me, Gary? Yep, we can hear you, go ahead.
Okay, so also what I did for 3.0: for me, we're now in a time where lots of tools produce SBOMs, but a lot of the information is incorrect or missing. And that's partially due to the build tools themselves not giving you all the information, or the information is incorrect and we couldn't always translate it. So in 3.0, I added several additional fields that should allow build tools to put the raw data that they have directly as external references at the package level, and then other tools can process that. Because a lot of the metadata, people familiar with it are surprised how raw the metadata for the packages is. And the burden was always, the idea with SPDX has always been that we go upstream to the package managers and have the package managers directly produce SBOMs, but sometimes even the package managers cannot correct all the data. For instance, a misspelled GitHub URL, right? So in the old 2.x, if you wanted to translate that to a download location, the tool would get stuck and then basically wouldn't translate properly. Now in 3.0, the tool can still output it as a reference link, even if that link is broken, and then a further tool in the tool chain can process it. Because, working on ClearlyDefined, I can tell you that about 40% of all metadata for all the major ecosystems is broken in one form or another. Sometimes slightly broken, sometimes really broken. A lot broken, yeah. And so that's why in ClearlyDefined we have this whole curation mechanism where you can fix it. And if all goes well, for SPDX 3.0 we'll implement it in ORT, and there will be a GitHub Action and a GitHub pipeline, and we support 20-plus different package managers out of the box, so we'll generate SPDX 3.0 for 20 different package managers in one go. Hopefully, that will launch shortly after 3.0. It depends how long we take.
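A sketch of the idea being described: even when a package manager's URL is broken, keep the raw value as an external reference instead of dropping it, so a later tool in the chain can repair it. The field names below follow the SPDX 2.3 JSON shape; `rawDownloadUrl` is a hypothetical reference type for illustration, and the exact 3.0 fields may differ:

```python
# Sketch: preserve raw upstream metadata as external references even when it
# cannot be validated, so downstream tools can try to repair it later.
# Field names follow the SPDX 2.3 JSON serialization; 3.0 may differ.

def package_with_raw_metadata(name, raw_url, purl):
    """Build a package entry that keeps broken raw data instead of dropping it."""
    return {
        "SPDXID": f"SPDXRef-Package-{name}",
        "name": name,
        # The URL from the package manager is misspelled, so we cannot
        # assert it as the download location...
        "downloadLocation": "NOASSERTION",
        # ...but we still carry the raw data for later tools to process.
        "externalRefs": [
            {"referenceCategory": "PACKAGE-MANAGER",
             "referenceType": "purl",
             "referenceLocator": purl},
            {"referenceCategory": "OTHER",
             "referenceType": "rawDownloadUrl",   # hypothetical type name
             "referenceLocator": raw_url},
        ],
    }

# Note the deliberately broken URL ("gihub") surviving into the output.
pkg = package_with_raw_metadata(
    "widget", "https://gihub.com/example/widget", "pkg:npm/widget@1.0.0")
```

A curation or translation step further down the chain can then inspect `externalRefs`, fix the typo, and promote the result to a proper `downloadLocation`.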
Yeah, and on the broader topic of going beyond just the build, there's an OpenChain tooling work group, and if you're not familiar with that, that's a good forum to join and discuss tooling in general, including ORT, which Thomas is working on, plus a few other scanning tools that are open source in nature. And of course, there are a lot of commercial tools out there too that I'm sure you're familiar with that actually will do the analysis and generate SBOMs. So lots of tooling out there, more needed. Gary, the group has been renamed; it's the OpenChain Automation Working Group. Everybody's renaming things on me. Okay, OpenChain Automation Working Group. That's the current name of the group, yes. Thanks, Thomas. I may go back to my previous comments about defect exposure in SBOMs. From a user perspective, that's very time consuming, especially with CI/CD deployment, where you will not have time to look at all the defects to decide whether or not to accept the software and deploy. So having defects somewhere in the SBOM that's machine readable, and, if AI is supporting it, being able to do the impact analysis, you could have a machine making the decision for greenfield deployments. Good point. I don't know if that, Karen, did you want to? I'm just curious, I mean, I am new to some of these tools. Do any of the tool makers use AI-powered tools to take some of the data? No? Because I can see with vulnerabilities that this would be a huge opportunity to build a tool. Yes and no. So in the OpenChain Automation Group, we had discussions on this. It's two parts. If you look at licensing, using AI is not really useful, because in licensing we're actually looking for the opposite. We're not looking for, say, the Apache license, where we can find a new instance of the Apache license and detect it properly. You're looking for the anomalies. And there have been several experiments looking for those anomalies, but with AI, it didn't really work.
Coming to security, it's interesting. Yes, you can parse CVEs and all the broken information to get better metadata. But the problem is still, if you're familiar with security and you look at the security data providers, the information that you get is generally terrible. To the point that I was looking today at whether I could figure it out for a project, and even though a lot of this information was mentioned for the project, it's not there; it's all not properly structured. So yes, there is still a lot of work to be done. But that's why you now also see things like OSV and all the others coming in, basically doing an open source version of the security feeds to work on getting that data better. But, I'm sorry to say, a lot of the data that we get from the commercial tools is useful, but not perfect. Running AI on it would really be tricky. And we have done experiments with it; it's terrible. Yeah, we ran into two issues with automating security. One is correlation. We get information about an SBOM, and correlating that to a database isn't perfect. We don't have a good ID structure. In SPDX, external refs are the secret to that. Put in as many external refs as you can find. If you've got the software ID (SWID) tag, put that in. If you've got what used to be called GitBOM, which is now called OmniBOR, put that in. Certainly purls, package URLs, are the favorite ones. Download locations can be used. Put them all in, because, as Thomas mentioned, the metadata is not perfect; it doesn't correlate. Maybe the purl is wrong but the SWID is right, so you can still correlate and find the information there. The other problem we run into is stale data. I ran into this all the time. As Thomas just mentioned, you go to the GitHub homepage and somebody moved it or renamed it or something, so the URL is all messed up.
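The correlation fallback described here, trying each identifier type in turn until one matches, can be sketched as follows. The vulnerability index and identifier values are invented for illustration; a real tool would query databases such as OSV rather than an in-memory dict:

```python
# Sketch of identifier-fallback correlation: try each identifier type in a
# preferred order until one of the package's external refs matches the index.
# The index and identifier values below are invented for illustration.

VULN_INDEX = {
    ("purl", "pkg:npm/widget@1.0.0"): ["CVE-2024-0001"],
    ("swid", "example.com-widget-1.0.0"): ["CVE-2024-0001"],
}

def lookup_vulns(external_refs):
    """Return known vulnerabilities for the first identifier that matches."""
    for ref_type in ("purl", "swid", "gitoid", "downloadLocation"):
        for ref in external_refs:
            if ref["referenceType"] == ref_type:
                hits = VULN_INDEX.get((ref_type, ref["referenceLocator"]))
                if hits:
                    return hits
    return []

# The purl here is stale (wrong version), but the SWID tag still matches,
# so the lookup falls through to it and finds the advisory.
refs = [
    {"referenceType": "purl", "referenceLocator": "pkg:npm/widget@9.9.9"},
    {"referenceType": "swid", "referenceLocator": "example.com-widget-1.0.0"},
]
print(lookup_vulns(refs))  # → ['CVE-2024-0001']
```

This is exactly why "put them all in" pays off: each additional identifier is another chance for the lookup to succeed when one of them has gone stale.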
So stale data is also an issue, because the SBOMs are produced at one point in time, but you may be looking for the information based on that SBOM at a much later point in time, and lots of things can change on the internet. So we were talking a bit about tooling, and I think it was mentioned that, for example, in Zephyr there's some tooling that creates these SPDX documents automatically. I was just wondering, thinking in terms of license compliance, I know we've got the license profile and we also have the build profile, and a lot of licenses, especially the copyleft ones, require build and installation information. So I'm wondering if there's a way, or if it's been considered, to have it either required or at least highly suggested, when using certain licenses where these requirements are known to exist, to also have a build profile to go along with it, to assist people in complying with those terms of the license. Because I think that's a common area where, at least in my research, there's often something lacking. I can speak to that. Sorry, I can speak to that, and then feel free to jump in too. Yeah, I think definitely, one of the ways that SPDX, from the early days, has talked about license compliance has been the idea that an SPDX document or an SPDX SBOM is helpful and useful for compliance, but it itself is not meant to represent or constitute compliance. Part of the idea being that SPDX as a project is studiously trying not to make legal decisions or offer legal opinions, or say, this is what you should or should not do to comply with this or that license. We are instead focusing on representing the metadata, trying to be the upstream for representing the metadata, but letting other organizations, other folks, provide recommendations about how to use SPDX data to comply with licenses.
So I think some of what's there, some of how we think about structuring licenses and the other metadata, is definitely informed by what the general requirements are for the licenses. Particularly in things like some of the relationships that have been previously defined for SPDX 2.3 and earlier, there are different relationship types having to do with the different ways that files or packages can be linked to one another, the different ways that one can be a distribution artifact of another, or metadata about another, or various other things. So from SPDX's perspective, the concept is: this is information that is helpful for, and can inform, your decisions on compliance, while leaving it to the downstream users or other organizations to provide their own recommendations about how to do compliance based on that. Kate, sorry, I didn't mean to jump in there, but I'll let you carry it on. It's all good, it's all good, Steve. So the other side of it is, as long as people use the SPDX identifiers, a lot of the tools are able to pull the licensing information out and surface it in the metadata. If not, it takes scanners and other tools and things like that to guess. But if we've got the IDs, both the Zephyr project and the Yocto project are pulling them out and making them visible and available. Now, to Steve's point, we're trying to make it available and visible; we're not trying to be a policy engine. There are other organizations, I'm thinking in particular of OSADL and a few others in Europe, that want to consume the SPDX and issue recommendations for policy. And so that's probably where, again, it's a separation of responsibilities, and we're focusing on trying to get the data as accurate as we can and move it forward. And then other people will say, okay, hey, you're looking at this GPL, you should be doing this, this, and this. Oh, you've got Apache, do this, this, and this.
And so, you know, I think Jilayne put some stuff out about that, and others have put out recommendations, but automatically applying it and so forth, we are saying that's not our space; that's for someone else to do. Okay. We just want to make sure that the information's available so it's easier to automate this type of thing. For sure. Yeah, I appreciate that. It's good to know where the recommendations should come in versus just the data. Frank. Oh, one more. Thank you. I have one request to our serialization group. I would very much appreciate it if you could share the working draft for the JSON schema or JSON context, because it's difficult for those of us in Japan to attend the regular meeting because of the time zone. So we catch up by emails or meeting minutes. But it's sometimes located on GitHub, sometimes located somewhere else, and so on. Sometimes in an email. Yes. So it's very difficult to catch up with. So I'd appreciate it if you shared it in one certain place, such as GitHub. Yeah, yeah. So yeah, serialization, as I mentioned, is going to be our next focus, and I think that's a very reasonable request. The serialization group actually meets at a time that I can't attend either, so I'm in the same position as you. But I will pass along that request, and we are going to have a special meeting just on serialization shortly after this meeting. And between Kate and I, we will bring in that request that we document the schema. Yeah. The other thing that's going to happen with the SPDX GitHub model repo is that it'll actually generate a schema. So when we check in new changes to the markdown files, it'll regenerate the JSON schema, so it'll be current. That tooling isn't working quite yet, but it should be working within a month or so. Thank you very much. Thank you.