Hi, everyone. I'm Stefano. I'm a fellow at AMD. Good morning, everyone. I'm Bertrand Marquis. I'm a Principal Software Architect at Arm. So today we will present something around safety certifying an open source project, with the specific example of Xen. But before that, I want to start by giving you some background on why we are doing this. The automotive market is going towards more and more software in cars. We all see it every day: our cars are, for example, helping us more. We have more driving assistance, where the car is following the lines. In some cases it's almost driving for us. I say almost: you need to keep an eye on the wheel at the moment, sadly, but it will come. We have more and more entertainment systems, where our cars have lots of screens with music, games, and so on. We all have GPS systems, and we have more and more connectivity. Our cars are connected to get real-time traffic information, but we also have applications on our phones where we can stop and start our cars, see where they are, and so on and so forth. And a lot of cars right now are even doing software updates over the air. You don't need to take your car in for service any more to get software updates. And autonomous driving is actually coming. That was seen as something impossible five or ten years ago, but right now there is no car manufacturer in the world that is not working on autonomous driving. So this is really coming; the big question is when. So we have more and more critical workloads in our cars, and this creates new constraints. The software complexity is increasing. We have video processing. We have artificial intelligence systems used to recognize objects in videos. And we have more and more computing power in the car. For years we have had a lot of MCUs in cars, but up to now they were more or less microcontrollers or small processors.
But with autonomous driving we are going towards more server-grade systems, because the autonomous driving stack requires a lot of computing power. And this creates constraints which are new to the automotive market. The automotive manufacturers are going towards safety certification because they need better quality in their software stack. But the public authorities are also looking at this. Cars are systems on which you can potentially be killed, and the public authorities will enforce some kind of official certification for software in the future. This is like what we had in the past in the avionics market, and in the railway market, where there is a public authority saying: you cannot fly a plane unless we have checked that your software is right. And this is actually not limited to the automotive market. There are more and more systems out there which are autonomous and can potentially harm lives. There are IoT systems used in medical environments. There are robots in industry, but those robots are coming to your home tomorrow. And there are drones. Those drones are flying alone in the air; they can harm people if they crash in the wrong place. So they will also go towards safety constraints in the future. So we are going towards software-defined vehicles, which means the software stack will become a lot more complex. There will also be multiple OSes and RTOS applications running on one single SoC. And it will be based on heterogeneous architectures to optimize the power consumption. So you will have server-grade Cortex-A processors for video processing, artificial intelligence, and so on, some Cortex-R processors for real-time critical workloads, and you will still have some small Cortex-M processors to do computation nearer to the actuators, so for brakes, for engine management, and things like that. All those systems will be interconnected, but will also be updateable remotely. This is something which is brand new.
And we also have a trend of on-demand features, which means you will buy your car and you will be able to add more and more features, which will be purely software based in the future. Mercedes is going towards that: if you look at what they do right now, you can buy your car, and four months later you can go online and say, it's winter, it's cold, I want to heat my seats. You go online, you pay for a month, for two months, for three months. Remotely, the feature is activated, and you have heating for your seats. Come summer, you turn this off, you stop paying. So you can turn more and more features of your car on remotely. This is something kind of new. So on the right of the slide, you see what the SOAFEE project has been thinking of as an architecture for cars. What you see here is a hypervisor-based system with several operating systems. On top of this, you have containers to deploy applications, and what is new at the top is that you have the cloud. The goal of this is actually to develop, test, and maybe certify software remotely in the cloud, and do deployment directly from the cloud to the cars. This is to speed up the time to market in automotive. So the whole system will be hypervisor based, and that's not only on server-grade CPUs anymore. We have hypervisor support on Cortex-A processors, but now we also have hypervisor support on Cortex-R processors. And hypervisor technology will be mandatory. The main reason is that safety-critical certification costs a lot, so you want to reduce it to the safety features. And hypervisor technology, which provides partitioning, is what allows this kind of certification cost optimization. So the market is in need of more standardization, and Arm is leading or participating in a lot of initiatives in this area. So first, you have the SOAFEE project.
That was the architecture I presented before, where automotive manufacturers, processor designers, and software engineers are sitting down together to try to define a base architecture which will be used in automotive systems. There is GlobalPlatform, which focuses on secure services and has an automotive working group. SystemReady, which tries to define a base architecture and boot requirements. FF-A, which aims at isolating secure workloads and standardizing the interface between the normal and secure worlds. And lots of other initiatives. The goal of all these initiatives is to let automotive manufacturers focus on their added value, the things that differentiate them from other automotive manufacturers, so that they have a base software stack that they can reuse across cars. Their applications, where they differentiate from other automotive manufacturers, will remain specific to them. There is also a big need to reuse software. You cannot have one specific software stack per car anymore. So standardization will allow one functionality to be provided by one company, and you will be able to reuse it on different systems, or move it between cores, if the base architecture is standard. So open source safety certification has a role to play here. Why is that? Because if we move towards standardization, open source becomes a possible answer, where several manufacturers or industry actors can pool their forces so that they share the certification cost. This way, it costs them less, and they can certify bigger and more complex software. And this is the trend we see, because there are several open source projects which are going towards that, sponsored by companies. You have the ELISA project for Linux, the Zephyr project, which is going towards certification, and the Xen project, where Stefano and I are working towards certification.
So the Xen project is critical for Arm: it is the reference type-1 hypervisor. We use it to demonstrate what our CPUs can do and to do research and proofs of concept on new features. The SOAFEE project is using Xen as the reference hypervisor, and we have several developments ongoing at Arm: PCI passthrough, MPU support, and FF-A support are some good examples. And we have a big team of engineers working on Xen-related technologies and contributing to the open source project, including myself as a maintainer and as a member of the FuSa group. Xen is also the AMD open source reference hypervisor for embedded and automotive, for both Arm and AMD x86. We have an in-house team of engineers to develop and enhance the hypervisor for embedded and automotive. If you follow xen-devel, you will recognize many of the names of the people in my team. Xen is given to customers and is supported via forums, premium support, and engineering. So I'm gonna highlight some of the features here, not necessarily to let you know about features of Xen, but because they are relevant for safety. As an example, some of our customers on the Arm side are using Xen for real-time isolation. They're running a real-time workload in one VM and a larger, regular Linux workload in a different VM, and Xen is used to enforce separation and isolation of the real-time properties. So Xen has a very diverse open source community; it is a Linux Foundation project, and there are a number of contributors. This is actually an older pie chart of contributions, and George, the community manager, just this week made the new pie chart for the last 12 months: AMD is the third contributor, so roughly the yellow chunk in this one. Again, I'm gonna take this opportunity to highlight what is important for safety here. We have an independent panel of experts reviewing code in the Xen project.
What this means is: at a company, it often happens that, yes, you have reviewers reviewing the code, but then you are tight on schedule, you are close to the deadline, your bonus is at stake, and the patch gets committed. We have all seen it. And if an open source project is open by license but is still dominated by one company, that can happen in an open source project too. It's not gonna happen on Xen. All the maintainers work for different companies. We have maintainers from Amazon, Citrix, SUSE, and you name it. So there is no way to push through your bad code because your bonus is at stake. That's another thing that is very important for safety. So this is, again, a chart of all the companies using Xen in different verticals, and yet again I'm highlighting it for safety reasons. If you look at the top right, that's data center; top left, cloud; bottom right is embedded, and we are there. But I want to highlight the bottom left. Bottom left is a bunch of projects and companies that use Xen as their core for security properties. These are security companies, either developing, selling, or maintaining open source projects based on Xen to do security. As an example, one that is famous is Qubes. Qubes is an environment for your laptop to keep sensitive information separated from wherever you browse the web, from normal websites and the normal work environment. It became famous when it was publicly and famously recommended to use Qubes to separate sensitive information from the rest. So here are some of the reasons why making Xen safety certifiable is viable. I'm not gonna say easy, because it's never easy, but easier than other open source projects.
And if you work on your own open source project, you'll see that you might have some of these key characteristics too, and then this applies to you as well, right? So one key characteristic is that, like I mentioned, we have a very strong quality and validation process — I mean quality review, like the independent panel of experts I was mentioning earlier. It's famously difficult to get code into Xen. We get regular complaints from contributors that it's too hard, the reviews are too strict, there are too many iterations. Normally that's not a good thing, but here we're talking about safety, and then it is a good thing, right? You don't want bugs to slip in. Of course, if you're a project whose top priority is velocity — you want to go as fast as possible — then this is not necessarily a good thing for you. But if you want a project with the smallest possible number of bugs and the best architecture, without compromise, then this is a very good thing. And if you want to run Xen in a car, where your life is at stake, then you definitely need this kind of quality control. Another thing is the security process. Xen, also because of its background in data center and cloud, and all the security companies I was mentioning earlier, has always had a strong focus on security. Security and safety, we all know, are different things, but they share a lot of common themes, like attention to quality for sure. And the Xen project has a very strong, very well detailed security process in place, which has been used as a model for many other open source projects, including OpenStack and others. I wouldn't be able to name them all, because it was like the granddaddy of all the other open source security processes. So security and isolation are top priorities for the project, and have been for a number of years.
We have full traceability. This is another thing that cannot happen with Xen: you know, when your colleague comes to your desk and tells you you really need to commit that patch, and then one year later he leaves, and nobody ever knows anymore why that patch was committed. That cannot happen on Xen, because all the communication is on the xen-devel mailing list and is archived going back 20 years. So there are no unknowns. We have two CI loops. One is called OSSTEST, testing on real hardware. The other one is GitLab CI, now also testing on real hardware as well as QEMU and other emulation environments. And we have wide deployments in a number of places which might not be safety critical today, but are still somewhat critical, like the real-time separation I was telling you about earlier. We have Qubes users, and for them the isolation between sensitive and non-sensitive material is definitely critical. And even in data center and cloud, the hypervisor still plays a critical role, because you don't want the credit card data in one VM to be stolen by another VM just because they're running on the same host. AMD is working on making Xen safety certifiable for AMD platforms, both x86 and Arm. We are targeting IEC 61508 SIL 3 and ISO 26262 ASIL-D. ISO 26262 is the one for automotive and IEC 61508 is the standard for industrial, and they are roughly equivalent: similar in level of requirements and strictness. So what I want to highlight, and maybe the most important thing — if you take one thing away from this presentation, this is it — is that open source was certified before. This is not the first time.
However, in the past, what used to happen is: a company would take the open source project, take a tarball release, fork it, do a bunch of things on top of it, in a way become the complete owner of that open source project, safety certify it, and from that point on maintain it in house. So yes, the code was originally open source, and in license it is still open source, but to all intents and purposes it became a company product, managed through regular company workflows and processes, and nothing ever went back to the community. In fact, many of the places where open source was safety certified are not public, because the software became part of a product, and nobody will ever know exactly where it was run. Xen is one of the projects that went through this with companies before. Now, this is different. What is new here is that we are actually working on Xen safety certifiability based on the open source project: open source processes, open source software releases, open source everything. So when we write safety artifacts describing the quality control process, we are gonna describe the upstream quality control process. When we describe the tools that we use for development and testing, we're gonna describe the open source and upstream development and testing tools. This is a great step forward for two reasons. One reason is that we're gonna be able to give back a lot to the community. We are working today on MISRA C. We have sponsored a full MISRA C course for all the key maintainers and many of the key community members. We are also working in partnership with Bugseng to make Xen fully MISRA C compliant. There is way more to come, as you'll see in the coming slides. So we'll be able to give back a lot more to the community. And also, selfishly, we want to be able to rebase this in the future.
That's the problem with the other approach: it's gonna be a frozen snapshot in time. You're never gonna be able to update. Here we want to be able to update, maybe not for every Xen release, but maybe every other Xen release. Because we are gonna target open source processes and releases, we'll be able to update with, well, not zero effort, but a relatively small effort. And like I said, not everything we do will be open source, but many things will be — more things will be. Some artifacts might be available only to AMD customers, but all the MISRA C work is public and upstream — not just public, upstream. The documentation of public interfaces and boot interfaces has not started yet, but we want to make it public and upstream in the project. And GitLab CI testing: we want to make many of the tests public and upstream. For those of you who are curious about what we're targeting: like I said, Arm and AMD x86, only the new hardware. We assume AMD-V, AMD-Vi, HPET, PCI on the x86 side; on the Arm side, SMMUv3, GICv3, and so on. You see, Xen doesn't have that many drivers, so it's a pretty easy selection, and because it doesn't have that many drivers, it's also easy to update in the future. So we think it's gonna be easy to move to a newer generation, a newer family of boards. Xen only has drivers for timers, the interrupt controller, the SMMU, the MMU, maybe a UART, and that's pretty much it. And there are no OS-side dependencies. This means you're gonna be able to run whatever you like on top. Whether it's Linux — and QM is shorthand to say this is an environment that is not safety certified — so you're gonna be able to run Linux QM, Android QM, whatever you like. You're gonna be able to run Zephyr as a safe RTOS. You're gonna be able to run proprietary safe RTOSes — Nucleus, VxWorks, you name it. You can run anything you like, in any combination. You can run more than one.
You can have Zephyr safety certified and Nucleus on one side, and you can also have Linux and Android, right? You're not limited to two VMs. You can have four, five, six, depending on the hardware and your customer needs. We are also gonna include a component for VM-to-VM communication, so you're gonna be able to exchange data between the VMs. We are gonna use Hyperlaunch, which is basically a feature to boot all of the VMs in parallel. Not only does it shorten the boot time significantly, it also means you don't necessarily need any Xen-specific knowledge in any of the VMs, because the environment is basically pre-configured by Xen at boot. So you could run all VMs that are not Xen-aware. You are gonna be able to protect the safe VMs from the non-safe VMs. Of course, this is the kind of thing that goes without saying, but it's better to say it: the whole purpose of this is to protect the safe VMs from the non-safe VMs, and also one safe VM from another safe VM. That's obviously the use case. On x86, if you're curious, Xen has a number of VM types, and we are gonna target only the latest and best, which is PVH. It is a fully static configuration: you boot, you get your four or five VMs, and you are not gonna create any more VMs. The only VMs are those created at boot. We are definitely gonna support real-time. On Arm, we can go down to three or four microseconds of IRQ latency under interference. Most devices, we expect, will be directly assigned. So one thing you can do with Xen is take one DMA-capable device, like the network card, and assign it to one VM, take the GPU and assign it to another VM, and mix and match in any way you like. And you're still gonna be able to do device sharing. One of the things that is unique to Xen — at least it's uncommon otherwise — is that you can assign a device to a VM and then share it with other VMs.
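The static, parallel boot described above is configured before Xen starts. As an illustration, Xen's existing dom0less bindings describe each VM as a node under /chosen in the device tree; the sketch below follows that style, but the node name, addresses, and sizes are made-up examples, not a real board configuration:

```dts
/* Sketch of one statically-defined guest, in the style of Xen's
 * dom0less device-tree bindings; names and addresses illustrative. */
chosen {
    domU-safe {
        compatible = "xen,domain";
        #address-cells = <0x1>;
        #size-cells = <0x1>;
        cpus = <1>;              /* one vCPU for the safe VM */
        memory = <0x0 0x20000>;  /* 128 MB, expressed in KB */
        vpl011;                  /* give the guest an emulated UART */

        module@42000000 {
            compatible = "multiboot,kernel";
            reg = <0x42000000 0x200000>; /* RTOS image loaded here */
        };
    };
};
```

Because Xen creates and configures the VM itself from a description like this, the guest kernel does not need any Xen-specific boot code.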
Should I disconnect and reconnect? So in the meantime, are there any questions on this? I was actually gonna finish — that was my last slide on the Xen overview — and I was gonna move to the description of what we are doing for the safety certifiability of Xen. So if you have any questions before I move on... Go ahead. Right. No, no extra code changes, no. Yeah, for the other ones, can we certify it? Absolutely. I'm using this word "artifacts", which is not very precise. When you go to the assessor, you need to present a lot of documents, PDFs. I actually have a list in the following slides of the things that you need to provide, including a safety plan and a safety architecture, and those are the artifacts I'm talking about. There are no code changes — it's expected to be zero, literally zero code changes that we want to keep private. It's actually not even an advantage to have them, because it's better and easier if the actual release that is the certification target is basically as-is from upstream. So the goal is to have zero code changes that are private. And how do you think you could get such an effort recognized when it comes to certifications? So I think Xen is, in terms of process, similar to Linux. In terms of strictness, I think we are a bit more strict in general. I have worked on both myself, and also on QEMU and other open source projects, and I can say that Xen has a truly line-by-line review and scrutiny that goes even beyond. It happens sometimes in Linux, but it depends on the maintainer. If you touch certain core files in Linux you will have something similar, but Linux is a very large project, and there are very different maintainers with very different styles. So you could imagine Xen as having only the strictest Linux maintainers, just for comparison. Okay, I'm gonna resume quickly.
So what you're gonna be able to do — and this is the last slide on the architecture — is assign devices directly, like in this case CAN and GPIO to Zephyr, the safe OS, and then also assign other devices to, let's say, Linux, run the device drivers and the PV backends in Linux, and share these devices with other VMs. And this can be done safely, using carefully shared memory, without privileges on the Linux side. One thing that is unique to Xen is that we are using virtio with grants, which is a technique that allows us to run virtio without privileges. That's not how it is normally used. It means the PV backend or virtio backend on the left, in DomD, the yellow Linux environment, does not require any privilege over the other VMs. That is obviously a necessary condition: if your Linux can do anything to your safe OS, the safe OS is basically not safe anymore, even if Linux was safety certified, because you lose freedom from interference. So this is something we are really aiming at: having isolation and freedom from interference even when device sharing is in place. All right, so the safety certifiability project is divided into two phases. The first phase is the one we are working towards now, which is the safety concept. At the end of the safety concept, phase one, you get a safety concept approval from the assessors, which basically is a letter signed by the assessor saying: you are on the right track, your plans are good; if you deliver, you eventually get to safety certifiability or safety certification. Phase two is the one where the heavy lifting is done, and you actually need to produce all of the artifacts. The ones that you see on the left are the famous things I've been calling artifacts.
So this is a list of things that, if you are a regular engineer, might not tell you much, because they use specific wording from the ISO 26262 spec. When I say "software safety requirements", you might have an idea of what it is, because these words in English mean something, but there is a very specific definition of what this is in ISO 26262. Because I wanted to include the people in the room that have not safety certified something before, and might not know exactly what these are beyond the English meaning of the words, I put up this table, which is a kind of translation into plain English of what most of these things are — and the same goes for phase two. The software safety requirements are basically documentation of all of the expected behavior of the project, of Xen: expected behavior depending on the configuration, expected behavior in case of errors, all expected behaviors. If you think about why: you cannot make sure the software behaves correctly if you haven't documented what the behavior is supposed to be. You need to know what the behavior is supposed to be to check that the behavior is correct; that's why you need to document it. The architecture specification is the description of the architecture, the main components, and the relationships between them. MISRA C: it's very important to have safe coding guidelines. C itself is not a safe language, so MISRA C basically builds safety on top of C. If your software is written in C, you definitely want to look into MISRA C. Verification and validation together are a whole set of tests. And keep in mind that testing for safety needs to be very strict: everything that is run in your safety configuration needs to be tested. So the idea is that the behavior is documented, is implemented, and then is tested.
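To make "documenting all expected behavior" concrete, here is a hypothetical example of what one software safety requirement entry might look like in an RST file kept alongside the code. The ID, wording, and test name are invented for illustration and are not taken from Xen's actual requirement set:

```rst
XEN-SSR-042: Hypercall argument validation (illustrative)
=========================================================

If a domain issues a hypercall with an out-of-range argument, Xen
shall return an error code to that domain and shall not modify the
state of any other domain.

Covers: error behavior, freedom from interference.
Verified by: test-hypercall-bad-arg (illustrative test name).
```

The point of the format is traceability: each requirement states one checkable behavior and names the test that verifies it.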
If you have things that are not tested, then it's not safe; everything that you have should be tested. Now, when you go back to projects like Linux — and ELISA is making progress — and also Xen today, honestly: we have tests, but we don't have enough tests to test everything. We need to test everything here. This is the biggest effort in the project, the testing. Then there are a bunch of other things that are not very intuitive but important, such as the software tools classification document. This is the document where you list all the tools that you use in development, but also in testing — things like your compiler. And you might think: why? You can compile with anything you like. Sure, but if, for all your tests and all your code validation and everything, you are using a compiler that is buggy and is introducing bugs in the compilation phase, then all of your efforts don't mean much. Everything needs to be working from A to Z, and that includes the compilation phase somehow. It doesn't mean you need to safety certify the compiler, but you definitely need to at least qualify the compiler, and that's what this step is for. Software failure analysis is, as the name says, one of the most difficult steps: basically you need to look into all the possible failure modes and describe how your software behaves in each of them. You have to make sure the software behaves correctly in each of the possible failure modes — and when I say "correctly", that means as you decided it should behave. And then there are all the safety management and process control artifacts, such as: what is your process to commit things and review things, the safety plan, and so on. Overall, as a rough estimate from our experience, 65% of the work is really writing tests, and the rest is writing documentation.
And roughly, for a project that is 50,000 lines of code, it can be done in two to two and a half years. And like I said earlier, we definitely want to be able to update the overall set of artifacts by taking a new Xen release. Maybe we have to rerun all the tests, maybe we need to change a couple of tests, maybe we need to add a couple of tests, but overall we think they will still be applicable. Now, for the part about the Xen community, I'm gonna spend the last five minutes talking about how this, I think, improves the open source project. And the takeaway that I'm gonna try to provide here is really to make you think about how this is good for your own open source project. These are lessons learned that are really widely applicable. First of all, if you safety certify an open source project, or make it safety certifiable, you're gonna let this open source project run in a wide variety of environments where it was not possible to run before. You could not run it in critical industrial applications without safety certifiability; now you could run it there, in cars, in trains, on planes one day. The wider use means you're gonna get more engineers, more people looking at the code. In general, your software will be healthier. That's kind of obvious. But what are the other, more direct benefits? Let's start from MISRA. A lot of safety is about code quality, and an open source project cares about code quality first. Open source projects are all about the code, more than anything else — this is why we're all in open source. So MISRA C is often, from the safety perspective, not the most important item compared to everything else, but for the open source project it typically is the most important item, because the people in the open source project care more than anything about the code. So the thing that is gonna have the most impact is MISRA.
And what we found is really important about MISRA is to explain to the community why each rule is there. You cannot just go to the community and say: we have these 120 rules that we would like you to follow from now on, because otherwise we cannot get these pieces of paper that allow us to gain business in this vertical. That is obviously not the right way — I tried to make it look as bad as possible, but in practice, you cannot just push MISRA onto the community without explanation. You need to explain why it is useful, and you need someone with the knowledge and the expertise to go and explain why each rule is there. And there is a good reason for each rule. You'll find that most of these rules actually apply well to your project, and you can select the rules that apply well and decide, for the others, to deviate or not follow them. The end result is that you have much better documentation of your coding guidelines. You can also automatically scan for violations, so now a lot of the review of trivial mistakes from contributors can be caught automatically by your MISRA C scanner. There is all this attention to safe languages, and in a way C plus MISRA C plus the MISRA C checkers gives you what a safe language gives you. There is all this attention on ChatGPT now, and people talk about AI for coding. These MISRA C static checkers are not AI, in the sense that they're not statistical, but they give you that advantage of checking the code in detail for a wide variety of errors. And we have ways to write deviations — which means we are not fixing a violation in a particular case — and a way to record that information in the code itself, in a generic way that works for multiple MISRA C scanners.
And we have MISRA C scanners, or are about to have MISRA C scanners, as part of the upstream GitLab CI pipeline, so that we can scan contributors' patches for MISRA C errors immediately, before reviewers even start to review the code. Now, finally, we have what I always wanted: a bot that does the review instead of me, right? This should really offload a lot of the review to a program. Documentation: we haven't really started, but it's key to keep it up to date with the code. So you definitely want to look into Doxygen and RST or Markdown files in your Git repository. And testing: open source projects have traditionally not been very good at testing, but it has gotten a lot better in recent years with GitLab CI and other projects. And testing is also the biggest single item, and we are working with GitLab. We have private runners and community runners; anyone can contribute a GitLab runner to make sure Xen is working well on their favorite board, including one that is on my desk in San Jose. And we support more than 100 tests, it's very easy to add new ones and extend them, and we plan to use that as the infrastructure for the safety testing. And I'm happy to take questions if I can.

Yeah, so we need a framework to inject failures, and we don't have that yet. So that's one of the things we're going to look into next year in the current plan. We haven't started; you definitely need a testing framework that allows you to inject failures.

Yeah, right. So you're exactly right. The goal of this is to be a reference for many other open source projects, and also for the many others who want to safety certify Xen for their own use cases. And what you just mentioned, how to manage the requirements, is one of the things we are looking at and discussing right now, and there are a couple of options that we have already discussed.
We want to make this all public. For now, if you're interested, feel free to contact me; my email and LinkedIn are easy to find, and I can provide all the details we have so far. There are lots of meeting notes and documentation, but they're not as good as they should be. This is also why I'm here, to share our experience.

For tool qualification, is that going to be something that's given back to the community as well? Because you probably have to qualify GCC, or at least provide evidence that GCC is good.

So you're right, and GCC has been qualified before, just for your information. It has even been used in avionics, as Bertrand said yesterday at another conference, and avionics has stricter rules than automotive, for your information. So the tool qualification document is one of those that I'm not sure will be given back or not. We're still at the beginning: certain things I'm sure will be contributed back, certain other things I'm sure will not, and that one is somewhere in the middle. Other questions?

I cannot say, but we're definitely in discussion with safety assessors. We're working with them closely, and it's actually a necessary step to achieve the safety certification. I'm not going to name names on a public stage, but you can catch me later.

Thank you for the very nice overview. In addition to increasing the test coverage, have you also considered any formal methods, like formal specification languages or formal verification methods such as Promela/SPIN or TLA+ or something like that?

So we had discussions about that, and I think it might be viable for a certain subset of the code. Now, thankfully the Xen code is actually relatively simple, but there are certain parts of the code that are not. There are not many, but one I have in mind in particular, on the x86 side, is the instruction emulator; it's not so simple. So for those you may want to have something extra.
So I think it's possible for a certain subset of the code, and as a complement to the testing, like you were saying, I think that could be a good idea. Project-wide it would be difficult, because of spinlocks, SMP, and these kinds of things that are typically very hard to express all together, project-wide, in those languages. So I don't think we're going to do it project-wide.

On Rust in Xen, there is a mixed answer here. So differently from Linux, Xen is very small, right? On Arm today, without the improvements that we're going to make to make it smaller, it is 50,000 lines of code. So it makes less sense to add Rust in the core, because we don't have a device driver model, right? Not like in Linux, because we don't have drivers, because we only looked at the very core components. So adding Rust in the core itself, I think, makes less sense compared to other projects. I don't say it makes zero sense; I think we could consider it. But on the other hand, we have a lot more things running on top. Xen is more like a microkernel approach, right? So you could easily run an RTOS written in Rust, with some key drivers in there, fully 100% Rust, with, I don't know, your whole network stack written in Rust, as an example, and that would be totally fine, right? And also, with all the MISRA C work, and there are other safer-C efforts coming, I don't know if you've heard of them, in parallel there is also a point in bringing the same safety guarantees to C. So we can also do that.

All right, I think that's probably the last question, but yeah. Sorry, are there any hardware recommendations that you would like to see from the various vendors to meet the ASIL levels, like lockstep CPUs or something similar?

You need lockstep for sure for certain ASIL levels. ASIL measures how strict the safety needs to be, right? A is the least strict, D is the most.
So lockstep is one. And one thing that I cannot stress enough, which vendors often don't get right, is the IOMMU. You need to have all of the DMA-capable devices protected by the IOMMU. That's definitely a strong recommendation. Thank you, thank you everyone.