Well, thank you very much for attending this last session. We're having a bit of a BoF today, and having a large room makes it a little hard to be interactive, but there is a microphone there, so feel free to come up and ask questions. We want to make this interactive. We're going to start by giving a bit of background on where we're looking, and we're mostly going to be focusing on the ELISA project. So let's start off with this. We've all been seeing this in the news recently. These applications are a reality today. SpaceX is sending things up into orbit and is fairly active today. We also have dashboards in our cars that are effectively computer screens, and more and more functionality in our cars and systems is being digitalized. And then, quite frankly, some of our cars are actually starting to learn to self-drive, with driver assist as well as various AI applications starting to come in. And then there's a project that we're looking at in the medical devices working group in ELISA, the OpenAPS project, which has been working since 2013 and enables type 1 diabetics to better control their glucose through a combination of a Raspberry Pi and some FDA-approved devices like a glucose monitor and insulin pumps. What these all have in common is that they're all running Linux in some way, shape, or form. It's been documented that the SpaceX platforms have Linux in several places. In automotive, in the dashboards, we're seeing more and more of the AGL systems being deployed into the telltale use cases and so forth. So Linux is there today. We're also seeing that Tesla cars are running Linux, and that work is going up to mainline now. And in OpenAPS, everything in the algorithm is running on a Raspberry Pi, which is running Linux. So Linux is underlying a lot of safety-critical applications that are deployed today, and what we need to do is figure out how we can analyze these systems so that we can be effective. Now, to put this into perspective, about 69% of the embedded market is running Linux today, based on various surveys that people have done in the embedded space. And Linux is very pervasive through a wide range of other market segments as well. But we have a challenge. Open source software is going into safety-critical systems; however, there are different mindsets and different cultures in the two places. And this is where we're coming from: we're trying to figure out how we can get this interaction to happen better. We need to build bridges between these communities. As you can sort of see from these early pictures of the Golden Gate Bridge being built, you set up structures and then you build the pieces so that the communication can happen across. Figuring out how to build these bridges is one of the things that Shuah and I feel pretty strongly about, and we have started working on this project. Is open source compatible with safety standards? Well, yes, it is. However, it can be very expensive and difficult to find the evidence we need to make the right argumentation. So part of what we're doing is looking at use cases, trying to understand them, and working with a lot of our colleagues who care about these problems as well, to figure out how we can get this improved. And one of the insights that has become very visible over the last few years is that you have to think in terms of a system.
It's being used in a system: that OpenAPS setup, for instance, or a car. And there are different mitigations that you can make at the system level as part of the analysis. But you need to get the right data about the software that's running. You need to define and limit your scope. You have requirements you have to satisfy. And you have to automate as much as you possibly can, because there's so much data out there that it's not going to work without automation. And then there's evidence that you're going to have to collect as part of your development cycle, and you have to make sure that's available. So there are a lot of software engineering best practices that need to be considered as we work our way through this type of thing. And some of these mindsets are not necessarily the "fail fast, fail often" type of thing, so we have a culture difference that needs to be addressed. But to be fair, open source has strengths. The code is available publicly. It can be scrutinized by anyone. Code reviews are happening. There's direct feedback. We need to know if we've got the right set of reviewers, but there are so many eyes on the Linux kernel that there is a lot of inspection going on already, and there's a lot of debate about the right types of things. I was sitting in the discussions yesterday afternoon about how we handle the printk rework so we can get the last preempt-RT patches upstream, doing the analysis of all the various cases and what could happen, what should be preempted, what should not be preempted. This is exactly the type of argumentation that the analysis people will be looking for, but it'll be very hard for them to find. So the question is, how do we start to surface these strengths better? There are no good examples publicly that we're aware of right now. There are three projects that I have familiarity with and work with, and Shuah obviously is in the Linux kernel community doing a lot of work on testing, tracing, and validation of the LTS releases. Zephyr is a very small-footprint RTOS. Xen is a hypervisor. And ELISA is looking at Linux: how can we get the things we need out of Linux? So we need a path forward for closing the gap, and we have these available starting points. We want to try to build industry consensus, and this is why this is a BoF. So if there are things that you want to talk about, just step up to the mic and start discussing them with us. Do you want to take it from here? Yeah. Thank you, Kate. As Kate mentioned, the challenge here is this: we know we are doing the design, and I'm talking about the Linux kernel here; we do reviews, lots of eyes get on the code and the design itself, and we do all of that. The challenge is that it's not demonstrable in the way the safety experts want to see it demonstrated. That's the challenge in a nutshell. So what we are trying to do is this: we have safety experts, we have Linux kernel developers, and then we have people that want to use Linux in safety-critical applications on their platforms, develop them, and deploy products. So we have these three groups. How do we make sure they collaborate? We do want to use Linux. It's been used, it's proven. It's been here 31 years and been used in all sorts of fields. Why not in safety-critical? Well, it's not "why not"; they're already doing it. They're already doing it. It's just that we are coming together as open source communities, and we're all facing the same challenges, right?
Everybody involved that wants to use Linux in safety-critical applications has the same challenges. They might be from automotive, medical, aerospace, or other domains, but they all have the same issues: how do we demonstrate safety? So that's where we come in with ELISA. We are providing a neutral place for all of us to come together and talk about our challenges, so we can identify common processes and techniques to work with. So let's talk a little bit about our mission. We're essentially going to be providing a common set of elements, processes, and tools. What does that mean? It means we might come up with, say, a process for figuring out what your workload needs from the Linux kernel: what is it using? The Linux kernel has multiple subsystems, multiple system calls, multiple ioctls, but what does your workload need when it's running? So, the runtime picture: you figure out, okay, these are the system calls, these are the subsystems that are actively being used. Once you understand that, you know the areas you can focus on, whether that's dealing with the continuous change that happens in the Linux kernel, looking for regressions, or just concentrating on the areas you care about. And then, coupled with that, you have the runtime picture, but you also want to look at the workload and ask: is my workload giving me the complete picture of the footprint? The example I give for that is this: say you opened a file, accessed some data, and closed the file. That is a path your workload can show you, but it's a success path. What about the other paths that haven't been exercised by the workload? To get the complete picture, you need a workload that will tell you what the full footprint is. So that is one challenge. Our white paper talks about our strategy; take a look at that. And then, I'm going to tell you what we cannot do first, and then we'll switch to what we can. We cannot engineer your system to be safe. We cannot ensure that you know how to apply a described process and method. We also cannot create a safety-certified Linux kernel; ELISA is not providing a safety-certified Linux kernel. There are commercial offerings out there, but there's no code involved here. You still have all the responsibilities, as the provider of a product based on a safety-critical platform that's based on Linux, to meet all your legal obligations and liabilities. But what we can provide is a path forward, to make it easier for you to go through the safety analysis and be able to say: we are going to present this evidence for certification. So we are going to provide a qualitative analysis for automotive and medical, and now, we're working on it, aerospace is coming in, with Boeing joining us. So we are going to take these use cases, with the three domains coming together and sharing their challenges, so that we'll be able to come up with a process that works for all three and is generic. And each of them has different standards associated with it.
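To make that workload-footprint idea from a moment ago concrete, here is a minimal sketch of the discovery step: run a workload, then see which system calls (and therefore which kernel subsystems) it actually touched. It assumes strace(1) is installed, and the command under test ("ls /") is just a placeholder, not an ELISA workload.

```python
#!/usr/bin/env python3
"""Sketch: discover which system calls a workload exercises, via strace -c.

Assumes strace(1) is installed; the command under test ("ls /") is a
placeholder for a real workload.
"""
import subprocess

def syscall_summary(cmd):
    # strace -c prints a per-syscall summary table to stderr after the run;
    # -f follows child processes so multi-process workloads are covered.
    result = subprocess.run(
        ["strace", "-f", "-c"] + cmd,
        stderr=subprocess.PIPE, stdout=subprocess.DEVNULL, text=True,
    )
    counts = {}
    for line in result.stderr.splitlines():
        fields = line.split()
        # Data rows start with a numeric "% time" value and end with the
        # syscall name; skip headers, separators, and the final total row.
        if (len(fields) >= 5 and fields[0].replace(".", "").isdigit()
                and fields[-1] != "total"):
            counts[fields[-1]] = int(fields[3])  # the "calls" column
    return counts

if __name__ == "__main__":
    for name, calls in sorted(syscall_summary(["ls", "/"]).items()):
        print(f"{name:20} {calls}")
```

The list of syscall names this produces is exactly the starting point described above: from it you can work out which subsystems you depend on and where to watch for regressions.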
So it's figuring out what the common elements are across those standards that we can assemble evidence for, and trying to build some consensus. That is our challenge right now. Yes. So essentially, in a nutshell: bring in people that are trying to deploy Linux in their safety-critical domains, work with the different standards, like Kate mentioned, figure out processes, pull all of the processes and resources together, and make that available to system integrators to look at. It could be white papers; it could be GitHub. We have a GitHub presence, and we keep publishing our documents there. And we have working groups, several working groups focusing on various aspects. We have automotive and medical, and we're going to be adding aerospace; we're not showing it yet, but it's coming. Those domain working groups (automotive, medical, and then aerospace) discuss their challenges, come up with the telltales and use cases, and feed that information into our other working groups, which look at safety architecture, open source engineering process, and Linux features. That's where we bring in experts from different domains. Linux Features looks at Linux kernel features and configurations; Elana is here, and she leads that. Then we have Safety Architecture, and Gabriele is here doing that. And then we have Systems. Philipp, oh, okay, Philipp's there, cool, he's over there. And Open Source Engineering Process is led by Paul. He's not here, but we'll be meeting him in Manchester at the working group session next week. So that's kind of what we're doing. And we have Tools Investigation and Code Improvement. What they do is continuously take Linux kernel bits and test them, doing syzbot-style analysis on them, making sure quality is good. And also, when they see gaps in documentation, because we do need documentation, right? When they see that something is not explained correctly, or that the process documentation needs work, they keep making enhancements. They are actively sending patches and engaging the Linux kernel community from ELISA and the safety group. So all of these working groups work together: we put out tools, send patches, come up with these processes, and engage the larger Linux kernel community at conferences. We just had the kernel dependability and testing microconference at Linux Plumbers on Monday, talking about common issues. The technical steering committee oversees the technical strategy aspects. We have voting members from all of the working group chairs; we make decisions together and vote on them when we have a technical strategy decision to make, or reviews of our documents and publications, and so on. That's the governance. The TSC also coordinates linkage between the working groups. Meaning, if our automotive or medical working group wants something, something the tools working group needs to do to prove something, we coordinate all of that. And we meet bi-weekly on Wednesdays, mornings nine to ten Eastern, right? Yeah. It's open to anyone.
If you're interested in understanding what's happening, just show up. All of these working groups are open to participation, so you can check them out. This is the working groups in action. And then... this is just a bit more detail about what's happening. Each of the groups has its own mission statement and its own activities in terms of what it's actually looking at from the architecture perspective. The ideal for most of these groups, I think, is to have things upstream: drive it into the documentation for the kernel, or else into our repository for documenting the processes. You want to talk to that one? Yeah. So Linux Features for Safety-Critical Systems, LFSCS: what it does is a deep dive. We come together to do a deep dive on kernel features and their potential value in supporting safety goals. We're continuously looking at that. And then, oh yes, Open Source Engineering Process. This group works very closely with the Linux Features group, identifying processes and techniques to apply safety engineering principles, because that's really where the gaps are, right? We know we are doing the right things; we know we have processes that get followed; but we need to be able to map them to... those pillars in the bridge as it's being created, right? And then we string the connections across, and then we can start to put the roadway across it so people can go back and forth. These are all parts of putting that framework in place so we can be effective and bridge that information. And that motivation is really what ELISA is focusing on: trying to get that bridge built. Yes. We already talked about tools; I touched on that earlier, so you know what we are doing there. And then... for medical devices, we're using the OpenAPS system that we showed earlier, and we're applying System-Theoretic Process Analysis (STPA) methodology to it to decompose the problem. We've been going back and forth, iterating on refining our requirements as well as our loss scenarios and use cases, and we've been finding this to be an effective way of getting ourselves to the boundaries of Linux. And now what we're doing is starting to run workloads through Linux. One of the things that Shuah has been working on with Shefali, who was our intern this summer, is figuring out how we can actually do the traces and understand which pieces of Linux have participated in a workload's execution. So yeah, thank you. We did a higher-level analysis for this last year; we came up with a higher-level picture of what the overall OpenAPS load is using. And then this year, with the help of our mentorship candidate, we wanted to go surgically deeper into looking closely at what a workload is using. And we wanted to use a generic workload, because that way system integrators can take this process and say, okay, this is the blueprint for how I can understand my system needs, my platform and my workload needs. We also strategically made a conscious choice to pick workloads that are easy to just opt in and find: perf, stress-ng, and paxtest. We picked those for different reasons. Once we did that, we did the strace analysis and published it on our GitHub; you'll find it there. There is a link right there.
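As a hedged illustration of that published process (the working group's actual scripts and results are in their GitHub repository; the invocations below are only representative), here is a sketch that runs a few benchmark workloads under strace and takes the union of the syscall sets, to approximate a combined footprint:

```python
#!/usr/bin/env python3
"""Sketch: approximate a combined syscall footprint by running several
benchmark workloads under strace and taking the union of the results.

Assumes strace, perf, stress-ng, and paxtest are installed; the exact
invocations below are illustrative, not the ones the working group used.
"""
import subprocess

# Representative benchmark invocations (placeholders).
WORKLOADS = {
    "perf-sched": ["perf", "bench", "sched", "messaging"],
    "stress-ng":  ["stress-ng", "--cpu", "2", "--timeout", "10s"],
    "paxtest":    ["paxtest", "kiddie"],
}

def traced_syscalls(cmd):
    # Collect just the set of syscall names the command issued.
    err = subprocess.run(["strace", "-f", "-c"] + cmd,
                         stderr=subprocess.PIPE, stdout=subprocess.DEVNULL,
                         text=True).stderr
    names = set()
    for line in err.splitlines():
        fields = line.split()
        if (len(fields) >= 5 and fields[0].replace(".", "").isdigit()
                and fields[-1] != "total"):
            names.add(fields[-1])
    return names

if __name__ == "__main__":
    per_workload = {label: traced_syscalls(cmd)
                    for label, cmd in WORKLOADS.items()}
    for label, names in per_workload.items():
        print(f"{label}: {len(names)} distinct syscalls")
    union = set().union(*per_workload.values())
    print(f"combined footprint: {len(union)} syscalls")
    print(sorted(union))
```

The union is the point here: any single workload only shows you its own success paths, which is exactly the "complete picture" concern raised earlier.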
This starts to let you see the bridge into the architecture group, because it's letting us see the trace history, which actual modules and components are involved, and what the control path through the kernel is that's being exercised, so that the architecture team can start to look at those components and the interactions between parts of the kernel that may be executing in parallel with them. And then we can focus on regressions and safety considerations with that in mind. And automotive: automotive is basically going after the same sort of use case, except with the telltale from the AGL approach. And I think over time there are a lot more sophisticated ones that people want to see us get to, but we need to get our basis properly understood in the methodology of working with this. Like I say, autonomous driving is obviously a hot topic, with all the data sets. How do we start working with data sets, with training data? These are dimensions we'll probably have to go to as an application down the road, but our focus right now is Linux and the interactions there. That's correct. And then the most recent working group that's been formed is Systems. Linux happens in a system, so how do we get a reference system available? Philipp, who's there, is taking the lead on helping us put together a mixed-criticality system, working with Linux plus Xen, which is a hypervisor, and the Zephyr project, which is a resource-constrained RTOS, to basically show enhancements and how to start analyzing a system. Because in all these cases, Linux is going to happen in a system, so trying to figure out methodologies to work with this type of information is one of our emerging areas. And as you can see, these things are all starting to interact and build up the various pieces that we want to look at. The reality is that software out in the ecosystem is composed of a lot of these elements, and figuring out how they interact, and what the challenges for the integrators are, is going to be part of where we want to be working. So: help us build a bridge. It took four years for the Golden Gate Bridge to be built, and it is safety-critical in the sense that if anything happens to that bridge, people are going to die, and there are millions of people going across it each year. Figuring out this framework, so that Linux can be effectively deployed in these places, is where we want to be going and what we'd like to talk to you about. So I guess with that: join us. But if anyone has any thoughts on this, or next steps, or ideas, or things we're not thinking about, we'd really like to hear them. I know the big room is intimidating, but maybe someone will come up with a microphone for us. Okay, thank you. Thanks for starting us off, much appreciated. Oh, you'll hold the microphone for someone? Okay. Does anyone have any questions, or does anyone have any concerns about the approach? Suggestions for doing it better? Yeah, I don't know. It's not a question, but what I just thought about is the slide where we see Xen, Zephyr, and also ELISA. Maybe we can use that. I see we're giving a lot of talks on the ELISA side, but maybe we could consider, for one of the next events, setting up a panel discussion. Yeah. We'd have Zephyr represented; I guess we have some of the Zephyr architects around, and they also have a safety track there. And then there's Xen, which is also working on safety. We'd have five or six people, and we could actually use that. Yeah, so it's a good idea to do a panel.
Like I say, if I had more time, I probably would have put in some of the Xen and Zephyr stuff as well, but I didn't want this to just be us giving a presentation; I wanted to have a bit of interaction with the problem. But yeah, I'll put that down: let's make it a future talk, a panel with the three different communities. I suspect Stefano would be quite willing to do it if we can find a place where he'd be able to talk, and I suspect we can probably get Simon in from Zephyr, or Nicole. Anyway. I've also thought about things we can do differently or want to improve. One of the things that was happening at Linux Plumbers, in the Kernel Summit yesterday, was the discussion of improving the kernel documentation. One obvious opportunity for us is that they're looking at restructuring the documentation, and so I was asking: well, where do we put our safety information? There are various personas. There's a developer persona. There's an application user persona. There's the integrator persona, and an integrator that's looking at safety is going to want to see certain evidence. So as the kernel documentation is reworked, if we can get a structure in place so we have an easy place to put the analysis from the architecture group, and then get the maintainers to review that analysis, I think that will be something very useful to a wider community. And quite frankly, figuring out all of the modules of the kernel, figuring out where the boundaries are, mentally, and how the interactions happen, is not well documented. You go to the code, you get lost for a while. That's what seems to be happening, at least for me; other people may be smarter than me, but that's okay. To make that available, I was talking to Jon Corbet, and he seems to be receptive. He wants to get this structured. So possibly having him talk to the architecture group, and, as you guys are looking at understanding the interactions, making that information available and getting it reviewed by the kernel developers in charge of those modules and interactions, I think will give us some useful knowledge that isn't there today, and build some consensus that way. That's one thing that's come out of this week that I'm excited about. Yeah, definitely. Talking to Jon Corbet, who's the kernel documentation maintainer. And we also discussed that there are other people wanting to see the documentation improved. We are looking at it from a safety perspective, asking how we do that and how we provide documentation to somebody who is looking for demonstrable evidence. But others want to look at the documentation for other reasons: they want to see what needs to be tested; when something is tested and a test fails, who do we reach? Those kinds of things. Our systems perspective is quite different from that of someone who's running an application or doing integration on top of these applications. So there's the aspect of doing personas. A developer is going to care about certain things, and there's a lot of documentation for newbies, but that's one persona; someone who's doing integration and trying to understand how the interactions happen is going to care about different things. There's a receptivity right now, and some prototypes out there, to start improving the documentation along that persona perspective.
So it's an intriguing idea, and I'm hoping we'll continue to have more discussions and keep moving in that direction. Anyone else have thoughts? Go for it. There's someone back there. Oh, okay, well, there's one here; he'll get the microphone first. Run. Hello. This is a very practical question, I hope. Okay. I think we're all in the business of taking what are very generic projects and making them very specialist. And one of the difficulties when it comes to audit is that you're confronted with a project that has tens of thousands of files, whether that's the kernel or U-Boot. Even just for developing, it can be very difficult to actually get that mental model. Do you, or perhaps anyone here, know of any automated tools to strip that down to just what's required? The trace efforts. Yes, we are trying to do that. What we have done is two things. We took the OpenAPS workload, which is a Python workload, and before starting it (it runs continuously on a Raspberry Pi) we modified the start script to turn tracing on. The kernel has tracing built in. So we said: we're going to let the workload run for maybe 30 minutes to an hour and then see what it's using. In the process, we also ran some commands. We got a bigger picture of the device drivers: as the workload comes up, it unloads modules and loads modules, so we get a picture of exactly what it's unloading and loading. We were doing lsmod before and after, and we collected a higher-level picture that way. Then what we wanted to do was run individual OpenAPS commands. For example, there's a command that says: tell me what the insulin level in the pump is right now. We wanted to run that under strace to see exactly what it looks like. We were unable to do some of that because we didn't have the hardware, we didn't have the rig. So we said, okay, let's step back and take a generic workload: the perf command, which is already in the kernel tree. perf does benchmarks, and you can run various benchmarks on CPU, memory, and so on. So we ran the perf benchmarks under strace, and we made that into a process. We did that with a few workloads: paxtest, and then we picked stress-ng, because stress-ng is another workload that can exercise various things. So we put that process together, and we have identified all the tools we need, including how, when you see a subsystem or a system call show up, to find it in the kernel using cscope and browsing. We pretty much put together a process for how to do this. And now we are looking back to OpenAPS. We've been reaching out to the OpenAPS community, and there are people who've said they'd be willing to help us collect these workloads, but they wanted to know how to do it. So we put this information together. I think a lot of the safety work we're going to be doing is running these workloads that carry requirements through this system, understanding what it's touching, and then understanding the implications of it touching those things, and making this tooling available in a more practical way for people to access the information. A lot of the kernel developers know how to do this already, because they do it for debugging right now.
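A rough sketch of that wrap-the-workload-in-tracing approach follows: snapshot the loaded modules, enable syscall trace events through tracefs, run the workload, then diff. It assumes root, tracefs mounted at /sys/kernel/tracing, and a placeholder workload command; the scripts ELISA actually used live in their GitHub repository.

```python
#!/usr/bin/env python3
"""Sketch of the wrap-the-workload-in-tracing approach described above:
snapshot loaded modules, enable syscall trace events via tracefs, run the
workload, then diff.  Assumes root, tracefs mounted at /sys/kernel/tracing,
and a placeholder workload command.
"""
import pathlib
import subprocess

TRACEFS = pathlib.Path("/sys/kernel/tracing")

def loaded_modules():
    # The lsmod-before/after snapshots mentioned above.
    out = subprocess.run(["lsmod"], capture_output=True, text=True).stdout
    return {line.split()[0] for line in out.splitlines()[1:] if line.strip()}

def run_traced(cmd):
    before = loaded_modules()
    (TRACEFS / "events/raw_syscalls/enable").write_text("1")
    (TRACEFS / "tracing_on").write_text("1")
    try:
        subprocess.run(cmd, check=False)  # the workload under observation
    finally:
        (TRACEFS / "tracing_on").write_text("0")
        (TRACEFS / "events/raw_syscalls/enable").write_text("0")
    after = loaded_modules()
    trace = (TRACEFS / "trace").read_text()
    return trace, after - before, before - after

if __name__ == "__main__":
    trace, loaded, unloaded = run_traced(["ls", "/"])
    print("modules loaded during run:  ", loaded or "none")
    print("modules unloaded during run:", unloaded or "none")
    print(trace[:2000])  # first chunk of the raw trace buffer
```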
They're using it for debugging. What we did was realize that this tool that's already available can also give us insight: it works as a discovery mechanism. So the next step for us is going to the OpenAPS community and saying: you have the rig, you know how to use OpenAPS, please use our process and give us the data. We have requested that, and we'll probably get it in the next few months, and hopefully we can share it. The other hope we have is that we can use it in automotive: use the telltale and develop that picture. We have an automotive telltale that we can connect to the mixed-criticality work, so we can potentially use that to gather data too. And the other hope is that when aerospace comes in, they will fill in the blanks in that space and give us a sample workload that we can run. And see, quite frankly, like I say, we started off with STPA in medical, and then the automotive folks said: yeah, this will work for us too. So we're starting to evolve these processes for doing the system decomposition into what's effective and what's not. And as we figure these things out through multiple attempts, we're trying to get it documented so that others can work with it too. It's a multi-pronged approach. There's also the monitors: there is runtime-monitor work happening, and Gabriele's here, so he can probably fill in a little more about that. The monitors are in the kernel; they go in real time and figure things out, which is a different way of collecting data. We're going at it at runtime where the workload is running, looking at the system under stress, while the monitors approach is monitoring usage as it happens. So that's happening too. Did that help? Sort of. I suppose that's dynamic analysis; what I'm after is a bit more static. Oh, there are a lot of static tools out there for doing cross-referencing and understanding call flows and everything else, and I can probably point you at some of them. What I'd like to be able to say is: "make clean, only my stuff." Oh, everyone wants that. And not for build objects; for source code. Lots of people would probably hate that, but for my thing, just being able to cut the 10,000 files down. So one of the things you can do, if you're using Yocto, is build your kernel with Yocto, and you'll get an SPDX SBOM out of it, and it will list only the files from Linux that are included in the image you came up with. I'm not using Yocto; maybe I should be. Yeah, so like I say, that's the hack that's there right now. You say: okay, all these configs are set up, this is what it's going to bring in, here's the final executable image. Yocto will do that by changing one config line right now. And then you can mine that file for the Linux kernel. Saul Wold from Wind River did a lot of the work behind that. We're trying to refine that type of thing, but it gives you a quick snapshot of exactly what you care about. So you're looking at a static picture of what your workload includes.
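On the Yocto point: recent Yocto releases can emit SPDX documents during the build; the one-line change mentioned is, as far as we know, enabling the create-spdx class (for example, `INHERIT += "create-spdx"` in local.conf, though the exact mechanism varies by release). Once you have the kernel's SPDX JSON, mining the per-file records is straightforward. A minimal sketch, with a placeholder path:

```python
#!/usr/bin/env python3
"""Sketch: list the source files recorded in a Yocto-generated SPDX document
for the kernel, per the discussion above.

Assumes an SPDX 2.x JSON file such as those produced by Yocto's create-spdx
class; the path is passed on the command line and is a placeholder.
"""
import json
import sys

def spdx_file_names(spdx_path):
    with open(spdx_path) as f:
        doc = json.load(f)
    # SPDX 2.x JSON carries per-file records in a top-level "files" array.
    return sorted(entry["fileName"] for entry in doc.get("files", []))

if __name__ == "__main__":
    # e.g.: python3 spdx_files.py linux-yocto.spdx.json
    for name in spdx_file_names(sys.argv[1]):
        print(name)
```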
Okay, I think there was one at the back, before we run out of time; they're starting to show five minutes left. Okay. Oh, thank you. Really quick: I appreciate you sharing the fact that you're partnering with automotive as well as aerospace, but are there also any plans for working with the other lifeline industries, such as water or the energy sector? Yeah. Is there any outreach happening, or is there a way to get their feedback on what types of information they might need? So, I interact a lot with Shuli, who's the lead for the LF Energy project at the Linux Foundation, and I've been working on SBOMs and some of that stuff with her. I think there is definite interest, and it's a matter of going and presenting. I've presented in the past, and I'll continue to, to see if people want to come together. I would certainly love to get an industrial group going of people who care about the industrial standards, which overlap with those two sectors a lot. But there are regulations and things like that, and no one's talking about those aspects yet, as far as I can tell. Excellent, thank you. Sure. Feel free to suggest people to talk to. Go ahead, Philipp. Yeah, I just wanted to add, from the previous part, that we have the special interest group for SPDX on functional safety, right? Because the idea there is also to add tags and things which help. It's just getting started, but in the long run you'll have a bit more. Yeah, so if you're interested in how to summarize and create SBOMs and so forth, there is a group crossed between the ELISA community and the SPDX community that is busy figuring out how to document the evidence. And we're also working with the Xen community on this one: what the requirements are, how the specifications work, and what all the evidence is that you want to automatically generate out of your system so that you can do the appropriate analysis. Thanks for that reminder. Appreciate it, Philipp. Okay, oh, we've got one more hand before they show me the number. I think this will be the last question. Yeah, hi, I've just got a question on the Linux kernel. It's a pretty complex system, right? Oh, yeah. Stating the obvious, but I think one systematic approach might be to go back in time, look at the Linux kernel when it started out, and then architecturally look at the evolution. Doing dynamic analysis can be pretty difficult on a complex system, but in parallel you could have a group that says: let's go back to basics, when we really understood it and it was easy to analyze dynamically, and then look at the evolution of the components over time. It might help. So we actually have that capability today. It's called cregit, at cregit.linuxsources.org. And there was a report given, called the Kernel History Report, about two years ago now. Two years ago, yeah. We were able to trace tokens in the kernel back to the very first kernel from 1991. It was in the printk subsystem that you could find some of them; there are a few of them, but you can see literally all the Git commits. There were three Git trees that were stitched together: the tree from once Git started formally; before that, a conversion of the BitKeeper history that Thomas Gleixner was keeping; and prior to that, someone created Git trees for every version snapshot of the kernel. If you stitch all of that together and replay the history by looking at a tokenized version of it, you can see the evidence over time. And there are literally tokens from those first versions still there.
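cregit works at the token level, but even plain git over that stitched history lets you replay a single file's evolution, oldest commit first. A minimal sketch, meant to run inside a kernel git checkout; the default path is only an example:

```python
#!/usr/bin/env python3
"""Sketch: replay the commit history of one kernel source file, oldest
first, in the spirit of the stitched-history analysis described above.
Plain git rather than cregit's token-level view; run inside a kernel git
checkout.  The default path is just an example.
"""
import subprocess
import sys

def file_history(path):
    # --follow tracks the file across renames; --reverse walks oldest-first.
    out = subprocess.run(
        ["git", "log", "--follow", "--reverse",
         "--format=%h %ad %an: %s", "--date=short", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "kernel/printk/printk.c"
    for line in file_history(path):
        print(line)
```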
And you can see the evolution of modules, what's coming in and what's not. We have that transparency over the history of the whole kernel if we need it, and we have people that want to work on analyzing with that tool. Great. So, you know, it's a question of someone wanting to go and do the analysis for aspects of some of the modules. And it's a good point, actually: when we start having issues with some of the subcomponents, we may start looking back in history and using these types of tools for that purpose. That's great. I wasn't aware of it. And I was actually at the mini-summit on Monday, which was really good. One of the observations as well is that the security community is doing a lot of work, right, analyzing the system as well, doing a lot of threat models and analysis for vulnerabilities and exploits. So that may be something that could be leveraged as well, right? To work with them a little bit, because there is a bit of overlap there too. So yeah, we are tapping into that. We follow the Kernel Self Protection Project as well and look at how they are approaching it. The problem spaces are a little bit different; there is overlap, but it's not complete. We are looking at that as well. We look at their scripts, and we tap into whatever we can. I mean, to understand the system, you also have to see what is supported: different architectures support different system calls. So we're looking at that as well; we are using the audit tool to get the information on which system calls are actually supported on a particular architecture. That's one of the tools we use. And we go and check the Kernel Self Protection Project website and pick up any of the tools they're using for configurations. They look at the configurations more closely. They're coming from a security angle, with tools that analyze your system and tell you: hey, you're missing this configuration that's very important for security, for this reason. Our goal is that at some point we will have a script like that, which, once run, will analyze your system and tell you: this is something you would need for safety. That's our goal, and that's really what we are looking to do. So yeah, this is the cregit tool I was mentioning. You can basically look at that, and they're going to cut us off now, but you can go in and drill down and see who actually committed something at what point, and it goes down to the token level. Okay. Thank you. Again, thank you very much, everyone, for staying for this last session. Thank you so much. Feel free to reach out to us directly.