Welcome everyone. We're talking to you a bit about how you actually know you're done with your software, especially after a security fix. Pete Brink and myself have been collaborating off and on for the last couple of years. Yeah. And I'd like to talk about, do you want to go to the next slide? My name is Kate Stewart. I'm working at the Linux Foundation, and I'm focusing on how we make open source dependable, especially on the embedded side. I've also been involved with the SPDX project from the start, and you'll see how we can probably pull some of this together. And then Pete. So my name's Pete Brink. I work for a division of UL called kVA, which focuses on automotive functional safety, automotive cybersecurity, autonomy safety, and things like that. I've been doing embedded and safety-critical software development for 36 years. Because of all of those things, I got involved with the organization called ABET, the Accreditation Board for Engineering and Technology, and I'm a program evaluator in software engineering: not computer science, not computer engineering, software engineering. I was also the co-author of a Linux Foundation class, LFD116, Software Engineering for Embedded Systems. So if people want to go look that up, you can see some of that work. And it's a free course. Yes, it's free, by all means. Okay, so why is UL here? It doesn't really seem like UL has that much to do with open source. But we as an organization have been getting into and working with that kind of environment, specifically because of the preponderance of open source software in many of these now-critical systems. So it isn't just automotive; it's avionics, it's our power infrastructure, the grid, and so on.
So UL has active participation in all of those things, not just from a product certification standpoint, but also from a functional safety or operability standpoint. So when we create a system or a piece of software, we want to be able to make a determination around both what it is supposed to do and, as a consequence, how we can prove that we've actually made that update or completed that change. One of the main things we're concerned with is requirements. Requirements can be a variety of different things, but what we're really trying to capture as part of the system description is the expectations of quality, safety, or security, which are all variances on the same thing. One of my favorite anecdotes: when we talk about a car, the safety person and the quality person probably have different requirements, or different perspectives, because to the safety person, a car that doesn't start or drive around is very safe. That's great, but the quality person will say, no, it doesn't actually satisfy what the users want. And specifically, requirements capture the what. We see people writing requirements that say, thou shalt write the code in such and such a way. That's expressly what we don't want. We want to capture what it's supposed to do, but not how. So the question then becomes: how do we measure the quality of a requirement? Because we can talk about quality requirements, but the requirements themselves have properties and characteristics that we look at to ask, is this comprehensible?
So we can go through a process of talking to the users and finding out what they want. The users know, or they think they know, what they want, but then we have to go through this process of specification where we reinterpret that, so that we can define the system they need, as opposed to what they thought they wanted or told us about. Once we have those requirements specified, we analyze the quality of the requirements themselves, and specifically how well they capture what the users told us they wanted, or at least requested. And I think, is there more information here? Yes, good. There are different frameworks we can use to describe the quality of those requirements. Certainly, if we write a bunch of requirements that are indeterminate, nobody has any idea what we're requesting; they're not comprehensible. And atomicity in this instance means the same thing it does in software: each requirement needs to describe one thing. Because if we write a requirement that says thou shalt do this and this, we've now created something that isn't testable, since you can't test that in combination, and we really should be able to create a test that demonstrates that each requirement has been fulfilled. There is an example, and most of these you can go look up and download: SMART requirements. Actually, it's SMART-ER. SMART stands for specific, measurable, attainable, realizable, and then the T is overloaded: time-bound, testable, and traceable. Each of these is something we can check as we look at a requirement. Does this trace back to what the user requested? Is it measurable? In other words, how long does it take?
Do we have a specification for the metric by which we're going to measure this? On top of that, if we're concerned about writing requirements that are comprehensible, there are specific syntaxes available to us. There's one that was created by Alistair Mavin at Rolls-Royce in 2009 called EARS, the Easy Approach to Requirements Syntax. I really like EARS. I haven't seen it done yet, but EARS is a specific enough syntax that we could actually write a parser for it. So we could write a parser to check the syntax of those requirements as part of the process of writing them, which I think would be tremendous. That might be a great open source project to get started. I'm not going to do it. The last set of things we go through is requirements verification, and I use the term verification deliberately here. Verification in a safety context means two different things: we can do verification by analysis, and we can do verification by test. So in this instance, we're going to do a review and analyze these requirements to see if they adhere to the syntax and whether they satisfy all the criteria we specified right here. Are they unambiguous? Are they comprehensible and atomic, and so on? And I'm just reiterating the point that the users request stuff from you, and then you have to reinterpret that and deliver what they need as opposed to what they've asked for. Requirements analysis is the process of figuring out and determining what it was the users actually asked you for. So the most important questions we can ask about those requirements: are they verifiable, and can they be demonstrated via testing? Because if we write a requirement that we can't write a test for, it's not a good requirement. Because that's how we demonstrate.
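That parser idea can be sketched quickly. Here's a minimal, hypothetical checker in Python that classifies a requirement against the main EARS sentence templates; the pattern names follow EARS, but the regexes are simplified assumptions, not a complete grammar:

```python
import re

# Simplified EARS (Easy Approach to Requirements Syntax) templates.
# These regexes are an illustrative approximation of the real patterns.
EARS_PATTERNS = {
    "ubiquitous":   re.compile(r"^The \w[\w ]* shall .+", re.I),
    "event-driven": re.compile(r"^When .+, the \w[\w ]* shall .+", re.I),
    "state-driven": re.compile(r"^While .+, the \w[\w ]* shall .+", re.I),
    "unwanted":     re.compile(r"^If .+, then the \w[\w ]* shall .+", re.I),
    "optional":     re.compile(r"^Where .+, the \w[\w ]* shall .+", re.I),
}

def classify_requirement(text: str) -> str:
    """Return the matching EARS pattern name, or 'non-conforming'."""
    for name, pattern in EARS_PATTERNS.items():
        if pattern.match(text.strip()):
            return name
    return "non-conforming"
```

Run over a requirements file, this flags anything that doesn't fit a recognized template, which is the kind of syntactic gate the speaker is describing.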
It's one of the methods we can use to demonstrate that we've actually completed the work. And when we get to this point, it's really analogous to any engineering. If we did hardware engineering, if we built something mechanical, a house or a car, in each of those instances we would have something that proves these things work together correctly. That's the engineering part of software engineering. The last stage is requirements validation. In safety, and actually in quality too, there's a thing called a requirements traceability matrix, which is basically where we have a test for every requirement and a requirement for every test. It does not mean a one-to-one relationship, but it does mean you need a sufficient number of tests that all the requirements have actually been covered. We treat the system as a black box and use only those requirements to determine whether the system we've now created in fact satisfies all the requirements we specified up front. So after we've gone through all of these things, we've specified all the requirements. It can be a laborious process, but once you're at that point, you have a pretty good idea of what the system is expected to do. And for me, this is one of the key things that actually makes agile work: if we've done that kind of system-level analysis and determined the set of requirements for the system we're going to build, it allows you to do a system architecture. Now you can create the set of components that are going to fulfill the requirements you specified. But we also have the logic behind all of it, an overall process that allows us to demonstrate completion at the end of it.
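The traceability matrix property just described, a test for every requirement and a requirement for every test, is easy to check automatically. A minimal sketch, assuming requirements and tests are tracked by ID; the IDs and data structure are illustrative, not from any particular tool:

```python
def check_traceability(req_ids, test_map):
    """
    req_ids:  iterable of requirement IDs.
    test_map: dict mapping test ID -> list of requirement IDs it covers.
    Returns (uncovered_requirements, orphan_tests); both lists should be
    empty for a complete traceability matrix.
    """
    reqs = set(req_ids)
    covered = set()
    orphan_tests = set()
    for test_id, linked in test_map.items():
        links = set(linked)
        if not links & reqs:
            # A test that traces to no known requirement is an orphan.
            orphan_tests.add(test_id)
        covered |= links & reqs
    return sorted(reqs - covered), sorted(orphan_tests)
```

Note the check allows many-to-many links; it only insists that neither side is left dangling, which matches the "not one-to-one, but sufficient coverage" point above.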
In terms of system architecture, I'm not actually going to talk that much about it, because the important stuff is in the next section. But what I will point out is that this is required by the safety and security standards out there. I work with two specific automotive standards: the safety standard, ISO 26262, and the cybersecurity standard, ISO/SAE 21434. There's also an autonomy standard, ISO 21448. In each one of those, this is necessary. We must have the requirements, and we must have an architecture, so that we can actually do an analysis on it to determine whether we've discovered any gaps: a safety gap, a security gap, or even a quality gap because we left something out, comparing the architecture we specified against the requirements we captured from the users originally. And this all fits within the software development life cycle, or SDLC, if people haven't heard that term before. So the systems analysis portion is actually what I'd like to talk about, because it's a mandatory step in any of the safety or security standards out there, and that includes the autonomy standard as well, though not many companies are doing it yet. There are two different types of analysis we can do: an architectural verification or an impact analysis, and which of the two you do depends on the scenario you're in. And in each instance, when we talk about system analysis, there is a lot of information we're looking for. We want documentation, we want test and training data, we want build configurations, all of the details that you as an organization need to capture in order to do that analysis.
And part of the reason we want this is the nature of the safety-critical or security audience: if you have something on an automobile and it's going to be in use for 10 years, and there is an incident, a safety or security incident, we need this requisite set of documentation about what just happened. We could use the 737 MAX 8 as an example, or any number of things in avionics or industrial, where we need to find out what happened and why it happened. This documentation is what we can point back to in order to find out what the state of the system was, as part of that analysis process. So why do we need this? Well, it's software. In the automotive safety standard there are two different sources of error. The first is random hardware failure; we call these random hardware errors because hardware is the only thing in the system design that's actually subject to those sorts of random errors. But software, hardware, and systems are all subject to systematic error. And what that means is that as part of building or designing the system, we, the software people, put the bugs in there, if there are bugs. So the goal of these standards and processes is actually to get rid of bugs before we write the code. Writing those requirements, doing the analyses, doing the architecture: all of those are mechanisms by which we can determine whether we're doing the right thing when we're writing the code. And these analyses, testing, and so on are some of the best mechanisms for determining whether we have minimized the amount of systematic error we've included in the system. When we go through and do the architecture verification, there are different phases we can go through.
The requirements should be verified at this point. So we should be able to ascertain, as part of the analysis, whether the architecture fulfills the requirements we specified up front. And if there aren't any safety or security concerns, in other words if this is strictly from a quality perspective, then functional sufficiency is probably enough. We can take the set of requirements that define the functions the system we're building is supposed to have, and basically just prove that those things work. We don't necessarily have to test for error conditions and whether they've been handled, although certainly if those are requirements you have, then they should be fulfilled as well at this point. However, when we start talking about safety-critical or security systems, we now have specific analyses we can perform. For a safety-critical system, the typical one performed in automotive is a software FMEA. I won't belabor it in too much detail, but we define a set of architectural blocks, define the functionality and the interfaces between those blocks, and the software FMEA looks at the information being exchanged between them and asks: what would happen if this went wrong? Do we have handling in place to detect that? And if there isn't a requirement already to cover it, then let's add a requirement or update the architecture so that it's no longer a problem. In other words, we've removed bugs before we write the code. When we talk about security systems in ISO/SAE 21434, there is a different set of analyses we can do: attack surface, threat, and vulnerability analyses on the systems we're looking at.
And these don't only look at the external interfaces, because of course if somebody plants something, say a Trojan horse, in your system, then the internal interfaces could be subjected to the same kind of attack we might have seen on the external interfaces defined by the system you're building. So if you have a connected car, a safety-critical software system that is connected to the internet, you would be expected to do all of these: functional sufficiency for the quality requirements; for safety, the FMEA or a system-theoretic process analysis (I think somebody may have mentioned STPA earlier), or a fault tree analysis; and for the security side, the attack surface and threat and vulnerability analyses. The other type of analysis you can do applies when you already have a system in place. Now, this assumes that an analysis was actually done on that system previously. So even though you have an existing system, it may not be the case that you can do only an impact analysis; it may still be subject to the full analysis, because that analysis was never done previously. The impact analysis is really all about change management. And it doesn't just look at the set of code changes; it looks at the scope of the change you're working with as well. How can you determine what that scope is, and what impact it's going to have on the system itself? Because ultimately what we want to do is determine what impact that scope of change is going to have on the basic functionality of the system we're working with, or potentially on its safety or security.
Because really, we don't want to make a change and have it compromise the safety or the security because we didn't go through and do this analysis in the first place. And obviously, there's a lot of information we need about the system in order to do an impact analysis. That's part of the reason why we said up front that we have to have all that documentation: that's the information we're going to use in doing this analysis. So, yes, and this is intended to be a joke: for safety-critical systems, we're going to make modifications that do not impact the safety. I think I've talked about most of these already. But the key piece at the end of the whole thing is that we're going to retest the system. We're going to look at the original set of safety and security requirements and run through the same set of tests we did previously, maybe extended if we ended up having to modify the requirements, so that we can demonstrate that the safety, security, and quality of that system were not compromised. It's really important to have a modular architecture, because a modular architecture will limit the scope of any change to one or two or a limited number of modules. Is that my timer? Okay, so I guess I'm out of time, but that's good because I think this is my last slide. Configuration management is also very key, because that is the thing that will simplify your product quality, safety, and security reviews. And then on to Kate. So, as you can see, there's a lot of information we're going to need to capture here to do this type of analysis. And so the question is: how do we organize it? Right now it's being done manually, with spreadsheets, et cetera. How can we automate it? That is the challenge, so keep it in mind.
And we also know that open source is being used in a lot of critical systems today. In pretty much any one of these critical sectors, you can go in and find open source. Next slide. In fact, from surveys that have been done by industry, pretty much 70% of closed code bases have open source in them in some fashion, and there's a whole bunch of components. So there's a lot of attack surface, a lot of interacting pieces that we potentially need to keep in mind if we have to actually make a fix. Next. In fact, last summer the Japanese NISC group put out a cybersecurity policy for critical infrastructure, and the thing they highlighted right there is maintenance and promotion of the safety profile. So for cybersecurity, you're going to have to pay attention to safety if you're working in critical infrastructure. So the question now is: how are we going to track all this? How can we make sure these things stay safe, and make things happen faster? Next. So one of the things we were looking at (there's a blog post linked in the slides) is that the safety standards are looking for things like: what's the unique ID? How does this stuff come together? What are the dependencies between components? What's the component build configuration? What are any existing bugs and the workarounds? That's what they're looking for, along with documentation and the requirements. So, gee, that first column looks a lot like an SBOM. And so the question becomes: how can we extend some of that infrastructure, build upon what's emerging right now from the security space, so that we can also get to the stage where we can do the safety analysis automatically as well?
And realistically, at the end of the day, as Pete was saying, you know you're done after applying a patch when you've actually updated your system, you've retested it, and you're satisfying the requirements. That's the heart of this talk, okay? How can we do this? But we have to do it at scale. A security vulnerability shows up and there are workarounds; a vulnerability shows up in another component. All of these things are there. How do you actually make sure you're staying in a safe state? Well, SBOMs at least give us a way of starting to get inspection into what's in there and how we can build upon it. Next. So, go for it. A minimum SBOM is literally just the components and the relationships between the components: the list of ingredients, so to speak, and how those ingredients interact with each other. That's sort of the minimum. It isn't sufficient for what we need, but we can build forward from this starting point. This definition comes from the publication from NTIA, about two years ago now. And a lot of this information is collected, and accurately available, at different points of the life cycle, so we're tying it back into the life cycle here as well. Things are available at source, when you bring the source into an organization. Things are different when you're doing your build: your configuration information is there when you do the build, so squirrel it off to the side and record it. Or after you've done an analysis, the information is there. It's in our flows today; we're just not capturing it in a structured way so we can use it later, and we're relying too much on manual effort. Next slide. So both safety and security expect that configuration management information to be there. And it is there; however, it's not being recorded in a structured fashion, and it happens at all these points in the software life cycle.
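That minimum SBOM, components plus the relationships between them, can be represented and queried with very little machinery. A sketch with made-up component names, using an SPDX-style DEPENDS_ON relationship; real SBOMs carry much more metadata, but the shape is the same:

```python
# Illustrative minimum SBOM: a list of components plus the
# relationships between them. All names and versions are invented.
components = {
    "app-1.0":       {"supplier": "ExampleCo", "version": "1.0"},
    "libcrypto-3.2": {"supplier": "upstream",  "version": "3.2"},
    "zlib-1.3":      {"supplier": "upstream",  "version": "1.3"},
}

# (parent, relationship, child) triples, SPDX-style.
relationships = [
    ("app-1.0", "DEPENDS_ON", "libcrypto-3.2"),
    ("libcrypto-3.2", "DEPENDS_ON", "zlib-1.3"),
]

def dependencies_of(name):
    """Everything a component depends on, transitively."""
    deps, stack = set(), [name]
    while stack:
        current = stack.pop()
        for parent, rel, child in relationships:
            if parent == current and rel == "DEPENDS_ON" and child not in deps:
                deps.add(child)
                stack.append(child)
    return sorted(deps)
```

The transitive query is the point: once the relationships are structured data instead of a spreadsheet, "is this ingredient anywhere in my product" is one function call.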
And so the question becomes: can we use these SBOMs to track the key artifacts and dependencies, as well as the configuration and other information? Over the last six months, there's been a working group happening under CISA in the US, though it's actually international, trying to come up with definitions for the types of SBOMs. Because the tooling you use for a source SBOM is quite different from the tooling you're going to use for a build SBOM. Put your build SBOM generation in your build tools, like Yocto, like Zephyr. Your source SBOMs may come from someone else, but when you're bringing the source into your OSPO, you might be doing a scan to understand what your sources are. Having the relationships from your build to your sources, and from what you deploy to your sources, is pretty much the heart of what you're going to need for doing this type of safety analysis. So one of the things about those types is that you can pretty much map them onto the life cycle. When you're doing your planning, that's a design SBOM; your requirements are there, or you'll be thinking about what your requirements need to be. Your source SBOM comes when you're bringing source in, or you're creating your source, and you record it there. As you do your build, test, and release, a build SBOM captures the information for putting those out. Deploy: hey, I've gotten this product in from someone, I'm putting it on my system, I know how I'm configuring it, so record that then. And these are the elements we pretty much need for all of this. So, traceability: understanding what's running on an actual system. You look at all the deployed SBOMs, what images have been deployed, and they may point back to other build SBOMs. And so you potentially have that traceability down the supply chain.
And then, going to the next one: when we're actually working in safety-critical, we may need to go down to the source level. For a lot of the security cases, we may be fine with just the build level. But when you get to safety, you have to know how it was built and what toolchains were used to build it, because these are all points where things could interact. And so the source becomes key here. Next. So suppose you were managing a security fix. Let's say your customer security team is here, up there. And you get told by an integrator you've been working with, who's watching the NVD, that there's a vulnerability. Well, you go in and check your deployed SBOMs: can you figure out whether it's actually there or not? If you go down to the source, you can be very authoritative about whether this file is compiled in or not. And if you think it is there, you do the impact analysis and potentially look at what the mitigations are. Oh, and if there is an indication of compromise, okay, let's talk to our integrator. Going back a step, we have the information: they go and update the product, build a new one, create a new SBOM for the build, publish it, and they have to confirm the safety profile for their piece against any requirements that are there. They hand that off to the customer procurement team, who now has the pointer to the new SBOM, and who then goes to operations, which deploys it. And that deployed version, again, has to be checked against the safety profile: is it conformant or not? And it should point back to that new build SBOM, not to the original one, because you want to stay current with exactly what the image is doing; the SBOM should reflect it. Next. Next piece, yep.
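The walk just described, from a deployed image back through its build SBOM to the compiled sources, can be sketched as a small query. The data layout, image names, and file names here are assumptions for illustration, not any real SBOM format:

```python
# Deployed SBOMs point back to the build SBOM that produced the image;
# build SBOMs record which source files were actually compiled in.
# All identifiers below are invented for the example.
deployed_sboms = {
    "vehicle-image-2.1": {"built_from": "build-sbom-447"},
}
build_sboms = {
    "build-sbom-447": {
        "compiled_sources": {"net/tls.c", "crypto/aes.c", "main.c"},
    },
}

def is_affected(image, vulnerable_file):
    """True if the reported vulnerable source file was compiled
    into the deployed image; this is the 'authoritative' answer
    the source-level traceability buys you."""
    build_id = deployed_sboms[image]["built_from"]
    return vulnerable_file in build_sboms[build_id]["compiled_sources"]
```

A "not affected" answer here is authoritative precisely because the build SBOM records what was compiled, not just what was present in the source tree.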
So if you start looking at these types, you'll see that the design documentation and requirements, a lot of that stuff, happens in the planning stage. You have test frameworks, you have build options; source SBOMs may catch that. Your build tool has its SBOM too, and you're specifying that. When you actually build your application, you're using the build tool and the sources, you create these build configurations, and you've got that information accurate and there. Then when you're deploying, you're maybe referencing back to that build SBOM. And as you're doing the testing all the way along, you may have log files and evidence being generated. These are all artifacts we can capture. At FOSDEM in February, Nicole Pappler gave a talk about using the SPDX relationships to capture the safety plan. We can extend beyond just the safety plan to the actual artifacts, tie into our build flows, and have all this information available in tied-together systems, so that we can query it and understand: okay, something's changed. I don't have to go through a whole bunch of spreadsheets; all I have to do is run a query to understand whether I'm affected or not. If I'm not affected, I can be authoritative about it. And if I am affected, okay, we have to change; we can follow it and look at regenerating. So, going to the next slide. If you have your design concept here, you've got your safety concept; it's a specification for a bunch of requirements. Those requirements will help you generate your source packages, and tests to prove that you're covering your requirements. Okay, the source package generates an executable with the build tool. The tests are part of a test framework that uses the executable and maybe generates evidence reports, as well as logs when things are running. These are all pieces of evidence we can catch and stitch together in an SBOM, and the SPDX project has relationships for all of this today.
Next, or close to it; I think there's one relationship we still have to add. So when we drill down and look at a specific requirement, a requirement is just going to be a pointer to the files that make up the implementation, and then some tests. So every requirement potentially has a traceability matrix associated with it that's automated: it generates an executable with the test framework, you run all the tests and generate the logs, and those logs are evidence for that requirement. You have your cycle. So you can put these relationships in place to make this automated. Okay, and we're starting to do this already for SBOMs and security. So if we do that extra step, we should be able to get this type of information out of it too, which, when we have to adhere to a safety profile, replaces a lot of manual effort, and it's okay. Next. So what happens when we have a bug fix? Well, okay, we know there's a bug in one of the components; we have to put a patch onto it. Okay. We regenerate the executable. We run it in the test frameworks. We regenerate the log, and that's basically satisfying it. So you know you're done when you're able to satisfy the requirements after you've applied the fix. And that's something that's sort of missing right now in some of these areas. But I think we can get there, even with open source components being part of this, by looking at how we structure this so that automation can come into play. And then the other case Pete was talking about was an impact analysis. Sometimes your requirements change: you've got a bug in the field, and you may need to change your requirements. So you may be adding a new requirement, changing your code, adding a new test, regenerating and comparing back. But this way you actually have your loop, you have the circle. So I think that sort of summarizes most of what we've been wanting to talk about today.
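That loop, requirement points to implementation files and tests, test runs produce evidence logs, and "done" means every requirement's tests have fresh passing evidence after the fix, can be sketched as follows. Everything here (IDs, file names, the evidence format) is illustrative:

```python
# Each requirement points at the files that implement it and the
# tests that verify it; a test run yields evidence per test.
# All names below are invented for illustration.
requirements = {
    "REQ-10": {"files": ["src/brake.c"], "tests": ["TC-10a", "TC-10b"]},
    "REQ-11": {"files": ["src/tls.c"],   "tests": ["TC-11"]},
}

def unmet_after_fix(evidence):
    """evidence: dict of test ID -> 'pass'/'fail' from the latest run
    after the patch.  Returns the requirements that still lack passing
    evidence; an empty list is how you know you're done."""
    unmet = []
    for req_id, spec in requirements.items():
        if not all(evidence.get(t) == "pass" for t in spec["tests"]):
            unmet.append(req_id)
    return sorted(unmet)
```

With the SPDX-style relationships in place, the `evidence` dict would be assembled by querying the logs linked to each requirement, rather than maintained by hand.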
And basically, I think automation of the safety profile traceability is possible without a commercial tool. Commercial tools are doing some of this stuff already today; on the other hand, I think we can do a lot of it with open source now, if you use SPDX 3.0 and the support for the safety analysis profile. There's a group working on this right now; anyone who's interested, come up and talk to me, and I'll pull you into the Friday meetings. We're looking at working through use cases for Zephyr and for some of the systems we have at ELISA, to show that we can do this. And at this point we need to work on the tooling creation. We're working on the standard, so SPDX aligns to this, but then we need to start putting tooling into this infrastructure to make it all come together. That's pretty much what we had to talk about today. That's it? Any questions? Don't be scared. Hang on, they can't hear you online if you don't use the microphone. Right. Hi, I thought that was really interesting with EARS. I was wondering if there's any tooling around machine-readable traceability from configuration back to requirements. You mean open source, right now? There are definitely ALM tools that do this, tools like Codebeamer; I don't remember the others right now. I'm mic'd, so I don't need the microphone. But there are tools that do this, and they cost; obviously there aren't open source versions of it yet that make it feasible. But yes. And in fact, there are some requirements tools, for capturing and maintaining requirements, that actually do syntactic analysis of the requirements as you enter them, so that you're sort of automatically increasing the quality of the requirements as you enter them, which makes a huge difference. Are they ecosystem specific? Configuration specific? No. Jama Connect has this, I believe, and they do it generically; you can use them for any system.
Question over here? I'll be blinded again. There you go. I'm curious what factors affect the recommended retention period for this SBOM data. I work with application performance and infrastructure monitoring, and you're lucky to get 13 months of retention, but because this stuff is so safety critical, I wonder if there's a government regulation, or how do we decide? I think it's going to depend on the product lifecycle and what the company is willing to support. If the product is supporting something in the field, they need to retain the data for that whole lifecycle. If you're looking at airplanes or automobiles, you can expect 10 to 15 years of data retention. Yeah. Follow-up: does that affect how much you want to compress the data, or how you think about the cost of storing it over time? Or is that a little future-forward? It is a lot of data, but hopefully you're not doing a lot of updates — one update every five or six years or something like that — so it shouldn't be that big a deal. I'll also stress it's not just capturing the requirements and documentation and so on. My wife actually had to pull a Windows 98 computer out of mothballs in order to redo some testing in one instance, because that was the officially qualified environment; that was for avionics. I will also add, though, that as open source components show up in these systems more and more, that update frequency may change. And that's part of the motivation here: because of the change to open source and its new capabilities, we still have to adhere to the safety profile — so how do we make it so we can do this at scale? Right. More questions? Yeah. So what was the question again? Will the slides be available? Yes, the slides will be available, absolutely.
And like I say, feel free to come up and chat with us afterwards, and if anyone's interested in helping to pilot examples and things like that, we'd certainly welcome participation in some of the working groups in the ELISA project as well as in SPDX. Yeah, I'm actually holding an ask-the-expert session both today and tomorrow on software quality and software safety, if anybody wants to come by and ask questions. What is the difference between safety and security, or where is the overlap between the two? That's a very interesting question, because if you look at the European languages — Spanish, French, German — it's literally the same word for safety and security. The distinction is that with safety there really isn't a compromise. With security, the company has the ability to decide to accept the risk before shipping something, whereas with safety the risk is actually defined by actuarial tables — how likely somebody is to die in any given year, extrapolated to how likely somebody is to die in a given hour. We safety engineers are really fun at parties. So there's a definitive set of criteria about how many people live and die, and how likely they are to die in a given year or a given hour. Another question? I was just going to say, essentially it's about life and death, because with security you could argue there's real harm done by choosing to accept risk — it's just that it's not going to kill them? Which is also a great question, but that is literally the distinction: companies have the ability to accept that risk and say we're going to ship this anyway, whereas with safety that sort of choice doesn't exist. Well, some companies do ship anyway. Okay, yes, they do that too.
But also, Kate did point out that the security principles say you should be adherent to the safety principles as well. For cybersecurity, yeah. Japan's thinking far enough forward that way. Say the question again? Instead of risk, is the word liability better? Yes. If we look at the safety standards, or even the security standards, fundamentally they are risk evaluation frameworks, and that risk evaluation exists specifically because of the liability that the company — whether it be industrial, medical, avionics or automotive — is going to be under should something go wrong. Which is also why most of the material that's out there right now is under NDA, and we have a hard time analyzing and automating it. So the one thing ELISA is trying to do is find examples that we can work on in the open, so we can show how to do these techniques in a way that can be scaled into settings where the work is under NDA because of the liability issues. Other questions? You've got another one? I'm not sure that's accurate. Can you say it again? Then he'll respond. I heard a definition I like, which is that safety is about the risk of the system onto its environment — the risk the system can pose to its environment, including killing people — and security is about the risk of the environment onto the system. It's a different set of criteria, because with cybersecurity we're specifically talking about attacks, whereas with safety it's about something in the system malfunctioning and putting the people either in the vehicle or around the vehicle at risk. Which matches that definition? Fair enough — it is the system, something in the system, malfunctioning. So if your brake controller failed, what's the consequence of that? Is it going to be safe then? Does everybody agree the brakes should be safe? Yes. More questions? Okay. More questions?
So I work on the security side of SBOMs, and one of the takeaways I've had here is that there are multiple shades, or multiple types, of SBOMs. I was wondering if there are any tools out there to corral all of these types of SBOMs into something — a single pane of glass? So one of the things we're trying to do is start to classify the tools according to those types. We're working on making a landscape visible, so we can say: this is a tool to use in a build flow, or this is a tool for source analysis. There aren't a lot of tools out there right now recording the configuration information, and I think we'll see more of that emerging as people are deploying. As part of CI/CD continuous deployment, as the deploy is happening, we're already seeing recording of a lot of the build information — for example, Yocto will record the toolchain, record what that toolchain builds, and then record what the build gets assembled into in the final image, which is what the safety standards need. Similarly with SPDX we can do the same thing in that space; at the low-level embedded end we've got some of this already in play, but the deploy piece is missing. There are tools out there for instrumenting a system and catching what's running over time, but they're not spitting it out in an SBOM format — everyone has their own way of expressing it, and everyone thinks they know best — so we basically need to standardize on a few ways so that we can have the interchange and mine the knowledge to get to this type of standard. I'm still at the giant-spreadsheet stage, and it's a place to have fun.
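This classification of SBOMs by lifecycle stage can be sketched in a few lines. The type names below follow the commonly cited CISA "Types of SBOM" categories; the tool names and their capabilities in the landscape dictionary are purely illustrative assumptions, not a real registry:

```python
from enum import Enum

class SBOMType(Enum):
    # Lifecycle stages at which an SBOM can be produced,
    # per the CISA "Types of SBOM Documents" categories.
    DESIGN = "design"
    SOURCE = "source"
    BUILD = "build"
    ANALYZED = "analyzed"
    DEPLOYED = "deployed"
    RUNTIME = "runtime"

# One tool can emit several SBOM types; a landscape entry records which.
# Entries here are hypothetical examples only.
landscape = {
    "yocto-spdx-generator": {SBOMType.BUILD},
    "source-scanner": {SBOMType.SOURCE, SBOMType.ANALYZED},
}

def tools_for(sbom_type):
    """Return the tools in the landscape that can emit the given SBOM type."""
    return sorted(name for name, kinds in landscape.items() if sbom_type in kinds)

print(tools_for(SBOMType.BUILD))  # → ['yocto-spdx-generator']
```

Tagging tools this way makes the "two SBOMs for the same package look completely different" situation explainable: a source SBOM and a build SBOM are simply reporting different stages.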
But like I say, as you find tools, let us know and we'll tag the type of tool — a tool can do multiple types — but at least we're starting to have this language so we can explain things to people. I spent a lot of last year with someone telling me, "I've got a facility generating this SBOM, and I've got a tool coming out of the build flow generating this one for the same package, and they look really different — what's happening?" One was a source SBOM and one was a build SBOM, so there's different information available and being reported on. I think that's part of it. We're pretty much out of time right now. Okay, but we're going to stay up here and we're happy to answer questions; I just don't want to take time from the next talk. Okay. Thank you very much. Thank you very much.