So, welcome to my talk on collaborating on Linux for use in safety-critical systems. In this talk, I'm going to sketch how I want to develop a collaboration project for using Linux in safety-critical systems. But before I start, just a few things about myself. I'm a functional safety software expert at BMW, and I've been working on a number of collaboration projects since 2012. In the very beginning, we looked into the idea of doing research on an open-source software platform for autonomous driving using open-source components. This then led to a development in an industrial collaboration forum, Adaptive AUTOSAR, in the AUTOSAR consortium, a well-known player in the automotive industry. And after this had taken off, I looked into the next challenge, namely building up a safety argumentation for Linux, and again doing that collaboratively, in the SIL2LinuxMP project at OSADL.

So I'm presenting here in three roles. I've been active in the SIL2LinuxMP project. I'm also active in a new development in the Linux Foundation, headed by Kate Stewart, where we're discussing a new formation. And I'm of course also an employee at BMW, where I'm responsible for the operating system safety argumentation in autonomous driving. What I'm presenting here is really the combination of the ideas and thoughts that we have in these three different projects.

Before I get to the engineering challenge, I want to give you a short introduction of why we're even considering this from a business perspective. What you've seen in recent years is that software innovations are taking over very established industries. The well-known Wall Street Journal article "Why Software Is Eating the World" from 2011 really puts it to the point: software innovations are disrupting all industries, even the very established ones. And now companies in the mechatronic industry are really struggling with this change. They have to consider: will they give their profits to software vendors for the new technologies that will emerge, or do they invest themselves and explore alternative scenarios where they are actually in control of their software stack?

To understand where we stand with safety-critical operating systems today, we should look at the history of operating systems in the general IT industry, and that's really the history of UNIX. In the 1980s, there were various UNIX operating systems on the market, and those vendors largely controlled their users with vendor lock-in and a compatibility mess. Of course, industry reacted to that and defined common standards, but the real disruption came from the Linux operating system. In the 1990s, Linux was born, formed as a coalition of tired users and the losers of the UNIX wars, and by working together in an open-source collaboration model, it built a strong ecosystem of users and software companies. Nowadays, it's clear that Linux has achieved world domination in the general IT industry. The numbers speak for themselves: it's dominating in public clouds, in the embedded market, on supercomputers, on smartphones, and so on. And the observation is that there are a number of companies, and you all know them, that are active in Linux kernel development for their own Linux-based operating systems.
So they're controlling their software chain and stack even though they don't really sell a software product. They might sell hardware, they might sell services, but they don't actually have a software product, and they still engage in software development. And as it happens, this history could repeat itself. The mechatronic industry is now at the same crossroads for safety-critical operating systems that run complex algorithms and software, and again the question is which way to go. How do they create a healthy ecosystem of safety-critical operating systems, so that they can focus on the innovative software functions they actually want to develop?

Here I'm providing a proposal of how this future could look, and it's up to the industry in the end to see what will really evolve from that. The proposal is, of course, that we use Linux as a safety-critical operating system. The Linux kernel has a number of strengths and weaknesses. The strengths are the large development ecosystem, and I think this conference shows how large this ecosystem really is. We have security capabilities. We have multi-core support. We have unmatched hardware support; there's no other operating system out there that supports so many drivers and so much hardware. And we have many Linux experts at all levels.

But it's also missing a few things for use in a safety-critical system. The first is that many use cases need hard real-time capabilities. That is more or less addressed by the PREEMPT_RT patches; getting them into mainline is, let's say, a challenge, but we're on the way, and the Real-Time Linux project is moving that forward. The second, and that's the topic of my talk here, is the question: how can we show that the development process is compliant with the objectives of a safety standard?

To address that second question, we formed a collaboration project a couple of years ago called SIL2LinuxMP. Its mission is to provide the procedures and methods to qualify Linux on a multi-core embedded platform at safety integrity level 2 according to IEC 61508. If you don't know what safety integrity level 2 in IEC 61508 is, don't worry; I'm going to explain a bit about that later on. But of course, we don't only want to provide those procedures and methods, we really want to show that this works, so we're showing that we can apply these methods in a real-world system. And then we show that there's actually a potential for collaborating on the artifacts that we're creating, because it doesn't help if we show it only once, you have to redo everything, and there's no further use of the things done before.

It's a collaboration project with a number of companies, 16 in total. We have support from academia, from Alexey Khoroshilov from ISPRAS and from Julia Lawall from Inria. We have experts from certification bodies. And we have, of course, the SIL2LinuxMP core working team: four people, Nicholas Mc Guire, who presented yesterday, Andreas Platschek, Lukas Don Böhm, and Markus Keidel, who were doing the main work in that project. Before I can really explain what we were doing in this project, I'm going to give a short introduction to functional safety. But first, since I mentioned the real-time patches, one small aside.
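Just as a rough illustration of the real-time point, and not as part of any SIL2LinuxMP deliverable: a few lines of Python are enough to check whether a running kernel appears to carry the real-time patches. The config option names are the commonly used ones (CONFIG_PREEMPT_RT in mainline, CONFIG_PREEMPT_RT_FULL in the out-of-tree patch era); whether the config is exposed at all depends on the distribution and kernel build.

```python
# Minimal sketch: does the running kernel appear to be a PREEMPT_RT kernel?
# Illustrative only; the config may not be exposed on every system.
import gzip
import os
import platform


def kernel_config_lines():
    """Yield lines of the running kernel's build configuration, if exposed."""
    plain = f"/boot/config-{platform.release()}"
    if os.path.exists(plain):
        with open(plain) as f:
            yield from f
    elif os.path.exists("/proc/config.gz"):  # needs CONFIG_IKCONFIG_PROC
        with gzip.open("/proc/config.gz", "rt") as f:
            yield from f


def is_preempt_rt():
    wanted = ("CONFIG_PREEMPT_RT=y", "CONFIG_PREEMPT_RT_FULL=y")
    return any(line.strip() in wanted for line in kernel_config_lines())


if __name__ == "__main__":
    print("PREEMPT_RT kernel:", is_preempt_rt())
```

OK, so much for the aside. Now, functional safety.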
As I've heard in the keynotes and at a number of other conferences, there's, let's say, a slight confusion about what the goal of functional safety really is. Functional safety is the part of the overall safety of a system that depends on the system operating correctly in response to its inputs, including the safe management of likely operator errors, hardware failures, and environmental changes. And the objective of functional safety is freedom from unacceptable risk of physical injury or of damage to the health of people, either directly or indirectly. That's the definition you can read on Wikipedia.

But let's break this down. When we're looking at functional safety, we want to make sure that the system operates correctly. But it should not only operate correctly as it was intended to be used; it should also operate correctly if the operator does something wrong, if the hardware fails, or if the environment changes beyond what we initially built the system for. And it's clear that with this large space of possible changes that can affect your system, you're not going to build a system that is 100% perfect in every scenario. So you are actually aware that these kinds of changes can injure or harm a person in the surroundings. But you want to make sure that you build a system where this risk of hurting someone is acceptable. So "freedom from unacceptable risk" means we have limited the possible harm to people to an acceptable level.

How do we do that? The main course of action is risk management. Risk management means judging your system, analyzing your system, and focusing the quality assurance on the right aspects and the right parts. It's not just writing hundreds of documents to prove something to someone; it's really the thought process you go through to create these documents, to document what you've thought of.

But how do we decide if a risk is acceptable or not? This is a highly subjective matter. One person in the audience might say, well, I cross the road without looking left and right, and I just walk across. He's of course taking a much larger risk in his life than someone who says, well, I'm not going to leave the conference if it starts raining because I could slip. But if we're going to sell a product to millions of people, we have to come to a common agreement about what "acceptable" means. And this agreement has been laid out in global safety standards. There's IEC 61508, the functional safety standard for electrical, electronic and programmable electronic safety-related systems. It provides a basis for all kinds of industries, and you find further adaptations for different domains that tailor it to their specific scenarios. You can now take this safety standard and read it as bedtime reading. It's about 10,000 pages, so it's going to take you a while. And then you try to summarize it on half a slide, right? You would say: if you want to design a safe system, there are two things to consider. First, you do a system design and a system analysis. You analyze your system so that you know which parts have to be of high quality so that the system does not harm a person. And then you assign safety integrity levels to those parts and to those properties of the system that are relevant for you.
And the safety standard gives you a guideline of four levels, SIL 1 to SIL 4, where SIL 1 means a low safety impact and SIL 4 means a very high safety impact if that part does not work. So when I'm talking about SIL 2, we're considering something that has a medium impact on the overall safety of the system. And then the question is, of course: how do I achieve a high-quality system element? The answer the safety standards provide is that it's really difficult to say there's one single way of doing it. But they say it's generally the development process that you go through, and that process has to be rigorous. So you develop the parts with a high safety integrity level with sufficient rigor, and the safety standard tells you which objectives you have to meet at each development phase, so that you make sure the overall product you develop has a certain quality.

OK, so let's move away from that abstract description to a concrete example. We're going to look at a robot that can move around, and it has the potential to harm people. But it can only really harm a person if it leaves the blue area. So here we have an example where the robot is in the blue area, and around that blue area is a red fence. And now the question is: with this red fence, is this system safe or not? Maybe that's a question for the audience. Who believes that this system is safe with the fence? Who believes that it's not safe? OK. And who believes that I didn't tell you enough about the system to even judge whether it's safe or not? So there's a large majority that understands that when I give you just a little bit of knowledge, you don't know if this can go wrong or not. And of course, one of the questions that comes up is: is this fence actually strong enough to hold the robot back from moving out of the area? That's something we should consider. Further, we have to know: is it actually set up properly when you start operating? And does someone keep maintaining it? If the fence breaks down and you don't notice, the robot could just leave the area.

Let's consider a second example. We have, again, the robot, and it's connected to a power supply. It doesn't have its own battery, so it can only work with that power supply, which sits in the middle of the blue circle, and there's a cable that connects the robot to the power supply. And now the question is, again: is this system safe? If we know a few more things about the system, for example that the cable is shorter than the radius of the blue circle, then what could possibly go wrong? The robot tries to move out of the blue area, and either the cable stops it, because its motor cannot pull beyond the cable, or the motor is strong enough, but then the cable just unplugs or breaks, and the robot has to stay in the area because it has no further energy supply. So if we know the length of the cable, the system is safe.

And now we have a third example. There's actually nothing physical around the robot that's stopping it. Is that system safe? Now we actually have to look much deeper into what is steering the robot.
What is actually controlling that robot to stay within the blue area? In case it's a software system, we have to understand the software, the hardware, and their interaction with all the devices on that robot to know whether the robot will only be steered within that area. What we get out of these examples is: if you really want to understand whether your system is safe, you need to understand your system sufficiently. You have to understand the important conditions of your system to make a proper assessment.

And here I have to contradict one of the statements from the keynote yesterday morning, the claim that you would build a safe system with a bottom-up approach. As we see in these examples, considering the fence or the cable, I don't have to understand all the robot's internals; I can already judge at a very high level that the system is safe because of the fence or the cable. So system safety is something that you develop top-down, not something that you develop bottom-up.

But let's play this game a little further. Let's say there's a steering application that steers the robot, and the robot can only harm someone if the steering application steers it out of the blue area. Now we look at the architecture: there's a steering application, there's glibc, there's a kernel, there's hardware. And now the question is: is this software safe? Here it's of course much more difficult to answer. But it's clear that you have to consider the possible influences of glibc on the steering application, the possible influences of the kernel on the steering application, and the possible influences of the hardware on the steering application. Then you can start judging whether the system is safe.

Alternatively, someone could come up with yet another architecture and say: I have a steering application, I have glibc, I have the kernel, I have a hypervisor on the hardware. And again the question is: is this system safe? All the questions I mentioned just repeat: what's the influence of glibc? What's the influence of the kernel? What's the influence of the hypervisor? And what's the influence of the hardware? So just by adding further complexity to my system, I have to answer more and more questions about it, and I have to do more work to understand whether the system is safe. Again: if you really want to know whether your system is safe, you have to understand the system sufficiently.

So let's think about how we would use Linux in safety-critical applications, now that we know these basics of functional safety. If we take this general message, that to assess whether your system is safe you need to understand your system sufficiently, and apply it to the Linux kernel, it means: if your system's safety depends on Linux, you need to understand Linux sufficiently for your system context and use. And understanding Linux includes two things. First, you actually have to know how Linux works. That's the obvious part: you have to understand how Linux works for your system context and the way you want to use it, and whether it actually does what you intend to use it for.
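Before I come to the second thing, let me make the steering example a little more tangible. Here is a deliberately tiny sketch; everything in it, the names, the numbers, the flat two-dimensional world, is made up for illustration. It just shows a monitor that only lets steering commands through if they keep the robot inside the blue area:

```python
# Tiny sketch of a "blue area" geofence monitor between the steering
# application and the motors. All names and numbers are invented.
import math
from dataclasses import dataclass


@dataclass
class Position:
    x: float
    y: float


SAFE_RADIUS = 5.0  # metres from the centre of the blue area (assumed)


def command_is_safe(current: Position, dx: float, dy: float) -> bool:
    """Would executing this move keep the robot inside the blue area?"""
    return math.hypot(current.x + dx, current.y + dy) <= SAFE_RADIUS


def apply_command(current: Position, dx: float, dy: float) -> Position:
    # The monitor only lets safe moves through; otherwise the robot
    # stays put, which here plays the role of the safe state.
    if command_is_safe(current, dx, dy):
        return Position(current.x + dx, current.y + dy)
    return current


if __name__ == "__main__":
    pos = Position(4.5, 0.0)
    pos = apply_command(pos, 1.0, 0.0)  # would leave the area, so refused
    print(pos)  # Position(x=4.5, y=0.0)
```

The point is not these few lines themselves. The point is that the safety of the system now rests on this code executing correctly, and that is exactly what pulls glibc, the kernel and the hardware into the analysis.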
The second thing is, of course, that you have to understand how Linux was developed: what was the development process that made this system exist? You have to understand who's working on it and with what rigor, who's reviewing it, who's testing it, and so on. This is something that Nicholas Mc Guire explained yesterday: the assessment of safety by a process argumentation. We want to show compliance with the objectives of a safety standard by a development process assessment.

Putting that together on one slide, it's roughly the following argumentation. Linux has been continuously developed for the last 25 years, and various people state that the process by which it has been developed has been continuously improved. Whenever something technical or procedural in the kernel development becomes pressing, the community addresses it. We have seen that for technical incidents, fixing bugs or redesigning certain parts of the system, but also from a procedural point of view, and even from a social point of view. Now, what you can actually do with open-source software is that you don't have to simply trust the statements of a few kernel developers that this continuous process improvement is in place; you can provide evidence that this process quality and process improvement really exists. I'll show a small sketch of what that could look like in a moment. You then take this evidence, and it can indicate that all the objectives of safety integrity level 2 are actually met for some selected parts and for selected properties. That's the plan you would go through to show that Linux has been developed with sufficient rigor for the selected parts and properties you're interested in.

If you think about this argument a little, you'll find that the real difference between a safety-critical Linux and a mainline Linux is not the source code you're looking at; it's really the way you use it. You understand your system, you understand Linux, and you make sure your system uses Linux based on those selected properties that you investigated and that you can assure work according to your expectations.

And when I say you have to understand a certain property of the Linux kernel: in a system development, we're not talking about a single person building a system. We're usually talking about a larger organization, a team or even a company. And an organization's knowledge is encoded not only in the individuals, but also in the processes you use and the methods you apply when you build your system. So you have to have these processes and methods established in your company, so that you can find out what the qualities of your complex software system are and what the qualities of the Linux kernel are. Then you can make sure that when you build a system, it will have the properties to claim it functionally safe under the constraints you considered. And of course, to establish processes and methods, the first step is education on these topics. That's really the key to your safety product development if you start using Linux in a safety-critical system.
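And here is the promised sketch of what turning "trust me, the process works" into checkable evidence can mean in practice. The trailer lines it counts (Signed-off-by, Reviewed-by, Tested-by, Acked-by) are the kernel's real commit-message conventions; the repository path and the revision range are placeholders, and the evidence you would actually need for a SIL 2 argument goes far beyond counting:

```python
# Illustrative only: count review-process trailers in a range of kernel
# git history, one tiny example of checkable process evidence.
import subprocess
from collections import Counter

TRAILERS = ("Signed-off-by:", "Reviewed-by:", "Tested-by:", "Acked-by:")


def trailer_counts(repo: str, rev_range: str = "v4.18..v4.19") -> Counter:
    """Count process-evidence trailers in the commit messages of a range."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--format=%B", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in log.splitlines():
        for trailer in TRAILERS:
            if line.strip().startswith(trailer):
                counts[trailer] += 1
    return counts


if __name__ == "__main__":
    # Assumes a local clone of the kernel at this (hypothetical) path.
    print(trailer_counts("/path/to/linux"))
```

Real process evidence for a safety argument is of course much richer than such counting, but it shows the direction: the history is open, so the claims are checkable.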
You'll get more information on the different activities that are required in that respect at a summit tomorrow: the Linux and Safety Systems Summit, tomorrow at the Sheraton. If you're interested in the topic in greater detail, just come by. It starts tomorrow after the keynotes at 11 o'clock and goes until five, and it's free of charge, so just come by and join us.

So, stepping back now and looking in retrospect at how the SIL2LinuxMP project ran over the last three years: we found that we had a very successful exchange of ideas and education on the challenges we were facing. This resulted in a defined plan and compliance route, reviewed by the project participants and the safety authority, which sketches how you can do such a project development. We had some first technical insights in that area, looking into system engineering methods for complex software, into methods and tools for kernel investigations, and into the existing Linux kernel verification tools. And we understood that it was really important to keep on educating and exchanging ideas, which we did in a number of three-day workshops on different topics.

But we also learned that there were a couple of things we just did not expect when we started the project. First of all, we really organized it as a research project; we didn't think about how we would properly collaborate with each other. We also underestimated the difficulty of collaboration around functional safety. We always thought: there's the open-source model, everyone understands it, it should just apply, right? It applies so easily in software development, so why shouldn't it work for functional safety? But functional safety is a difficult and somewhat mind-bending field. It's very different from software engineering, and whereas in open-source software a collaboration model was already established, in functional safety it wasn't at that time.

There was also a misunderstanding of the educational goal. We told people: over the next three years you're going to learn how to do this. And they left, came back three years later, and asked: so now I should know how to do it, right? But that's of course not the case. What we meant was: if you follow us for the next three years and learn for those three years, you're going to be at that point. The time bound was not the mere passage of time; it was the time they needed to learn, to understand, and to establish this in their organization.

We also didn't have access to suitable hardware with adequate documentation for the collaboration. And members that didn't participate actively had a really difficult time making use of the results.

So now we're thinking about how to move from research to collaboration. For that, we first have to set the goals of the collaboration project. And we now understand the goal to be a shared development and working effort on a number of topics. We want to understand how to do safety engineering of a complex system. We want to create the risk assessments of the kernel subsystems and features.
We want to gather the evidence of kernel development process compliance, develop supporting tools together, and create material to train and educate engineers together. That's more or less the scope of activities we're thinking about for a collaboration.

Then of course, if we want to do this, we have to consider a number of conditions. We need to establish a well-defined governance and project steering in a neutral organization; you don't want a situation where someone feels left out because of the common setup. You have to maintain good community health: make sure that people continue to work well together, that no one is rubbed the wrong way, and that the community really grows. You have to keep educating on functional safety and process assessment; we considered that not only a good point but also a prerequisite for getting new members into the group. You have to share a common system to focus on common activities. Think of the robot example I showed: the activities you would do are very different if you have the fence around it, or the cable, or yet another setup. So if you really want to say which activities are important to look into, you have to have a common system; otherwise everyone does something else and you don't come to a proper collaboration. And we can't, of course, do this alone. We have to reach out to the different communities that have to support the activity: the Linux community, the safety communities, and the hardware vendors. Otherwise, we don't create a full system and understand how a full system would work.

Here I'm sketching what a successful outcome of this collaboration would be, and this is really an ambitious and challenging goal. But let's go through it. We're thinking about creating the assets for a safety certification of a Linux-based system. Those assets consist of a complete process description, the selected kernel features we considered, the tools we considered, and previous process assessments, to give you confidence that you can move forward. We have to show that this is feasible with a reference system, so it's not just paperwork that you don't know whether to trust. It has to be usable for someone else to integrate into their own organization, and it can only be used by properly educated system integrators. It should be maintained over an industrial-grade product lifetime; it doesn't help you if this is a one-time activity when your product is going to be out with its customers for a much longer time. It has to be well known and accepted by the safety community, the certification authorities, and standardization bodies in multiple industries. And it should be positively recognized by, and have an impact on, the Linux kernel community, and come with hardware collateral from multiple supporting vendors. For me, these are the important factors by which you can judge, in the end, whether a collaboration project is working towards a successful outcome. If we can reach all these goals, then we have a really successful project in that domain.

So we were thinking about how you would set up a project structure for this kind of collaboration.
And we identified a core steering team that should bring the different project working groups together. It comes with four roles. There's the project manager, who makes sure that things move forward in time and fit together. There's a community health manager, who makes sure everyone works well together and that you can onboard new people. There's a functional safety architect, who understands the system and makes sure the artifacts the different groups create fit together. And there's a functional safety manager, who makes sure the artifacts that are created are complete and not missing a piece. These are the roles you usually have in such an organization, and they interact to get the overall results done. And then we have a number of project working groups: compliance and certification, component quality assurance, tooling development, reference use case, and incident and hazard monitoring. That was the first sketch of the different topics you would have to address in such a collaboration project.

Of course, we cannot only talk about what this project will do; we also have to make clear that this project cannot do everything. Specifically, I'm going to point out four things. This project cannot engineer your system to be safe. If you're using the collateral, the assets we provide to you, we still do not know how your system works. We can provide certain artifacts to you, but you have to do the engineering on your side to make sure it really fits together. We also cannot ensure that you know how to apply the described processes and methods. We can educate you in that respect and provide guidance, but in the end it's you and your organization that have to decide how to incorporate that into your processes. We also cannot create an out-of-tree Linux kernel for safety applications. Recall the argumentation I sketched for the process compliance argument: we said there's continuous process improvement in place, and if we now create an out-of-tree Linux kernel, we immediately invalidate that argumentation, because we can no longer expect further process improvement. And we cannot relieve you of your responsibilities, your legal obligations, and liabilities. You're putting the system out into the real world, and you have to live with the consequences of its behavior.

We were also thinking about the different modes of collaboration: informal exchange of experts, common training, shared development, shared maintenance of evidence, and collaboration on a use case. We're going to see which of these modes will be established in this collaboration project.

As I said, it's an ambitious and challenging task that we're trying to address, and there are a number of risks and opportunities in doing this. Rather than going through all of them, I just want to point out one thing we discussed over the last years: there are different conceptual approaches to arrive at a safety argumentation. And what I've seen in the last year is that people were starting to think in camps. They were saying: you can do it this way or you can do it that way, and I'm only going to engage in this one way of doing it.
But really, having different conceptual approaches can result in an overall more robust argumentation. You know that it's not a single line of thought that has led to the conclusion, but multiple ones. And that's really what we should try to achieve in this project: that we come to objective assessments without thinking in camps.

So this is more or less what we have discussed with various people in the industry: how we could collaborate on the use of Linux in safety-critical systems. Let's come to a conclusion; I'm just going to summarize what I've told you in the last 40 minutes. We see that there's a need in industry for an operating system suitable for safety-critical systems running complex algorithms and software. We should be aware that functional safety is really about managing risk in your product development. And you can only understand this risk in Linux-based systems if you understand the system and you understand the kernel. The basis for understanding Linux in safety-critical systems is available; this is what we developed in the SIL2LinuxMP project. Now, if you really want to extend this basis that we created, it requires a larger industry collaboration. I've sketched the technical challenges and the organizational proposal I made to move forward in that direction. At the current stage, it's really a question to all of industry how this continues. The future of enabling Linux in safety applications is up to all of us, whether we take this crossroads one way or the other. If you're interested in the topic, you can follow us tomorrow at the Linux and Safety Systems Summit; you'll find the information on the webpage of the co-located events. And with that, thanks for your attention; I'm happy to answer any questions. Yes?

[Audience] Is this knowledge base already hosted somewhere, so we can access and read something about what was achieved on this topic?

So the question was: is this knowledge base already available somewhere? Over the last few weeks, actually the last few months, Nicholas Mc Guire and all the project participants were discussing making that information publicly available. We wanted to have it in place for this conference, but a few things didn't work out technically; I expect it will be available within the next two weeks. It's probably best if you just send me your email address and then we can provide it to you. It will probably be announced on the OSADL webpage, and there you can get the information, but if you send me your email, I can inform you as well. Any further questions?

So the question was: does this continuous process improvement break down as soon as you ship the product in your safety-critical device, a car or a robot or whatever? It doesn't really break down. What you did is ensure that at the time you shipped the product, you evaluated whether all the activities were up to the state of the art at that point. And that's what you do when you ship your product.
Now, of course, if you then say, I'm not going to update my product after that, you're probably not benefiting from any further bug fixing or any improvements they make afterwards. You really have to grow with the product: as the state of the art moves forward, you have to take that into account. But at the point where you ship it, you're not exposing your customers or users to a level of harm that you wouldn't accept. So it's not that the continuous process improvement breaks down; you just have to adjust your development afterwards in a different way. Or you use a stable kernel, because that's exactly the process defined in Linux kernel development: there's a stable maintained kernel for a certain time, then a new version is released, and that in turn is improved and maintained over time.

But if you don't know enough about your system, I wouldn't say it's unsafe by default. For sure, though, if you do not know enough about your system to make the kind of argumentation I made in the examples, then with a rather complex system there's a very, very high risk that you overlooked something, and that will hit you at some point.

Yes, so the question was: is there a difference between "not safe" and "unsafe"? When we say something is safe, we mean we did certain activities to make sure we understand the system, that we did a proper risk assessment, and that we can say we have limited the risk of harming someone to some level. Of course, if you did not do all these activities, it doesn't mean the system will immediately kill someone; you just don't know. And I think that's the point. You usually look at functional safety because you know there is a potential for harm. If I develop a robot that can move around, has some possibility of harming someone, and I program it to do random movements, it's highly likely that it will move to where someone stands and hurt that person. OK, yeah, maybe in the back here.

So I think the question was: is there something like a safety-critical hypervisor? And I think the answer is, as I've shown in the example: what's the definition of a safety-critical hypervisor? You have to look at the whole system. What you probably want to argue is that the hypervisor has no impact that can, from the hypervisor itself, make the safety application go wrong. But that doesn't really answer your question, because you still have to ask: does the hardware make the safety application go wrong? Does the operating system you use make it go wrong? Does glibc make it go wrong? Does the safety application itself do something that would harm someone? The real question is: what's your intent in using that hypervisor? What is your expectation of how it lowers the impact of any of those elements? So far, nobody has really answered the question of what it really reduces the impact of. That's a system engineering activity you have to go through; you can't have a default answer to that. Yep, in the back.
That statement certainly conflicts with what I presented, for sure. But it also conflicts with, hopefully, your personal understanding of functional safety, now that you've seen those examples. Do you have to understand every line of code for that robot where the fence was around the blue area? Or do you really want to spend the effort on building a proper fence, on making sure the fence is installed properly, on checking every morning that the fence is still there? You don't have to check every line of code to build that system. You do have to know a certain property of the fence: that it's strong enough to stop the robot. You don't have to know more than that, besides, let's say, the sub-properties needed for that property to hold. Those are the relevant things. You don't have to care about the fence's color; if it holds back the robot, the color is irrelevant. And the same holds for code. If there are parts of the code that determine the color scheme of the operating system, they're really irrelevant, because you want to know the properties you actually want to guarantee. But of course, there are different approaches. If you know every line, you're going to spend a lot of effort, but I don't know if it really makes the system safer.

Yeah, so that's exactly the question: what are the selected properties that are relevant for you? And you can only derive that from a concrete system that tells you which parts are relevant. That's why we need a common system to discuss this.

OK, thanks for your time. We're already 10 minutes over, so if you have further questions, we should discuss them afterwards. But thanks for your time, and if you want to know more, just join us tomorrow, where we're going to show more about the methods and process approaches that we want to take.