Can everybody in the back hear me? All right. First of all, I want to say this is my favorite conference of the year, the one I really enjoy coming to; I learn a lot while I'm here. The last time I gave a talk, it was so good that they had to shut down the world for three years, so we're finally back. Hopefully that doesn't happen after this talk.

So I'm going to give a talk on containers on wheels. I like to think of myself as the Ursula of containers. Basically, I started out by getting containers into Fedora, then into RHEL, then into OpenStack, OpenShift, Ansible, and now into a thing called RHIVOS. So, containers on wheels: we call ourselves the COW team. It's part of Automotive. About nine months ago I moved out of container leadership, the Podman team and all the low-level container stuff, and moved over to Auto, mainly because I thought the container team was ready to go on their own and didn't need me in the way anymore. Now I'm working on Auto, but really I just continue to work on containers and all related things.

We call it the Red Hat In-Vehicle Operating System; everybody calls it RHIVOS. We're still not sure if we're supposed to use that name outside of Red Hat, but then again we're not supposed to say RHEL outside of Red Hat either, and that hasn't worked out well.

Last year at the Red Hat Summit there was an announcement of a big agreement between Red Hat and General Motors, basically to look at getting a Red Hat-based operating system into all of their cars going forward. We have a lot of interest from a lot of car manufacturers and OEMs; they're all looking at what we're doing, and they're very excited about it. General Motors is customer number one, mainly to keep the requirements gathering under control. The plan is to spend the next year and a half designing the operating system with General Motors, and then open it up to other car companies, other OEMs, and potentially other moving-vehicle type things. This is all part of the greater Red Hat Edge effort.

We're building an operating system here, or basically taking RHEL as the operating system, and General Motors is building all sorts of stuff on top. When I talk about the software they're going to run in these vehicles, we're talking about things like self-driving, all the sensors, the infotainment systems, all different types of software. But the bottom line is we're doing it with systemd and Podman, and adding new features like composefs, which I'll cover along with a lot of other functionality in this talk.

So RHIVOS is a binary distribution based on Red Hat Enterprise Linux. We're not building a brand-new operating system; we're taking the basis of RHEL and moving it into automobiles, and really what we're trying to do is justify that the operating system running in your vehicle could be RHEL. One key difference, or at least the default, is that we're going to use the real-time kernel, and as we talk about what it means to run an operating system in a car, you'll find out why. We're also planning on using OS images built by the customers. So think, and we'll keep this quiet, CoreOS in a car: an image-based system built on OSTree with atomic updates. We want an immutable operating system; we'll talk about that a little later on.
We basically want the main operating system to be read-only from the point of view of the processes running inside of it. And we're really stressing that it has to be container-friendly; we want to run a lot of containers. All the applications that we run inside the vehicle, or most of them, are going to be containerized.

The biggest hurdle to getting RHEL into a car is a little thing called functional safety. If you go to Wikipedia and look up functional safety, it's the process of reducing risk in both simple and complex systems so that they function safely. It's similar to security in some ways, but fundamentally different in others. What we're trying to do here is build an operating system for a moving vehicle that is as safe as possible. We don't want the machine to cause injury to someone, so we want to make sure there is as little possibility as we can manage of something going wrong in the software that could hurt a person. That's really what functional safety is about.

Traditional functional safety, and this is one of the reasons the car companies have had such a hurdle getting new software into vehicles, meant you had to write design documents for the entire CPU and the entire operating system, then write code to match those requirements, and then test the code to make sure it works as designed. Think of it as old-fashioned waterfall design, where everything had to be built from scratch. So the car companies are constantly rebuilding an operating system over and over, and it takes forever. They wanted to move to a new model.

From a Linux point of view, is there any design document for Linux? Anybody got a design document for the kernel? Linus put out a document 25 years ago, I guess. It really wasn't designed, right? It just evolved. So Linux was already written without any real design document, and it doesn't fit the traditional functional safety model. What we're doing instead is documenting the functionally safe APIs, basically the APIs that we tell General Motors to use when they're running the vehicle so they can run it in a safe mode. And guess how we're documenting those APIs? We have a little thing called man pages. We're going through all the man pages, making sure they're accurate for the function calls in things like glibc. Then we look at the code, and we make sure there are test suites verifying that the code works. That's part of our argument for functional safety. We're also making arguments like: this stuff has been used for many, many years, it's open source, and the kernel is probably the most examined piece of code on the planet, right? So we're looking at the way open source develops and arguing that it is, in effect, a functionally safe development environment. We have to make all these arguments, document them, and then get other companies to come in and say, yes, you've proven that Linux can be treated as a functionally safe operating system.

Other than functional safety, we also have a need for speed.
When you turn on your car, within two seconds you hear a beep that tells you to put your seat belt on. That's coming from an operating system. It means the hardware has to start, the kernel has to load and settle, and at some point after that we have to emit a sound through the speaker to tell you to put the seat belt on. If you put the car into reverse, within two seconds the backup camera has to be on. So we have to be able to boot an operating system, or bring it out of hibernation, within two seconds. That's a fundamental requirement. And if we're running containers on top of that, we also have to look at how quickly a container starts up. So a lot of our focus has been around speed.

On the Podman team, we wanted to run things in containers, so we did some testing on the lowest-end standard system we could, a Raspberry Pi with very little memory, and Podman took two seconds to start a container. You type podman run, hit return, and it takes two seconds. That's way too slow. So we went into the code, looked at every piece of it, used all sorts of tools to analyze it, and we found all sorts of little speed-ups. We're talking microseconds, right? But we found hundreds of them, and together they added up to a six-times speed-up, down to about 0.3 seconds. So with upstream Podman right now, you can start a container within 0.3 seconds on a very low-power system. For most human beings it doesn't matter; if Podman takes a second to start a container, you're not even thinking about it. But when you're talking about the overhead of starting containers in a car, you have to look at speed all the time.

These cars are not going to have just one computer. Right now they have hundreds of computers in them, and one of the things the car companies want to do is consolidate down to a few computers and have those computers process sensors all over the vehicle. So these cars are going to have multiple nodes, and how do you manage those services? Oops, don't know what just happened. Is it working now?

So, what do you think about Kubernetes in a car? A lot of the car companies came to us and said, what we really want is Kubernetes in a car, cloud-native computing. We want to put in all these cool whiz-bang things and have the car constantly updating. We looked at Kubernetes in a car, and then we looked at functional safety. Kubernetes has the concept of eventual consistency: the system will eventually be in the correct state. So the braking system will eventually work. Okay, that's probably not what we want. The other problem is that you're taking a huge Go program that's constantly monitoring, constantly reconciling, and trying to justify that this multi-threaded behemoth is functionally safe is pretty much not going to happen, at least not in a time frame that lets us get this product out the door. So, no Kubernetes. Myself and Alex Larsson actually wrote an article back in October, because lots of communities were forming around Kubernetes in a car, and we wrote that it just ain't gonna happen; we don't believe that's the correct route. But we have this really cool orchestrator that already starts and stops services all the time: systemd.
So we're really looking at application profiles. One profile can run one or more applications, and one application can have one or more systemd services defined for it, and then we have the capability to switch between different profiles, different targets. Think of booting your system: systemd goes into boot-up mode and starts a bunch of services, then you go from there to network mode, which brings up the network and might shut down some other services, then to multi-user mode, turning some services on and others off, and finally to graphical mode, again turning services on and off. That's the way systemd works.

Now, how about systemd in a car? All the features we just talked about, multiple applications running as systemd services, but now I start the car. It's going to start certain services, kick on sensors, turn on cameras, things like that. Then I put the car into reverse: it starts certain services, shuts down certain services, turns on the backup camera and the backup sensors. Then I put the car into drive: again, it turns off the backup camera. All of these targets, these run levels, can be handled just using the standard way systemd starts and stops services. So systemd on a single node is what we're telling General Motors to use: build services and define the relationships between the services that run the different pieces of software.

But systemd runs on a single node, and RHIVOS is going to be multi-node. So how do we get multi-node capabilities into systemd, or into RHIVOS? We need to extend the systemd concepts across multiple nodes. So we built a brand-new project called Hirte, or however the Germans want to pronounce it, because it's the German word for shepherd, or herder. There are two major components. The first is the Hirte agent, which runs on each of the nodes; eventually you might even run more than one per node, and we'll talk about that in a few minutes. The Hirte agent just sits there, talks to systemd, and talks back to Hirte on the main node. So you have a main processor running Hirte and Hirte agents running everywhere else, basically a hub-and-spoke design with bi-directional communication from the main node out to the agents. And all the agents do is relay messages to systemd. The way you talk to systemd is via D-Bus, so Hirte is constantly talking back and forth with these agents, relaying systemd messages between them. We also have a CLI tool, Hirte control, hirtectl, which is modeled on systemctl: we're basically taking what systemctl does and expanding it to reach all the different nodes.

To give you an idea of what the architecture looks like: this state manager is the piece General Motors is going to provide. It's the thing waiting for the human being to say start the car, stop the car, put the car in reverse.
It talks via D-Bus to Hirte, the main Hirte server. That Hirte server then talks to a Hirte agent, the Hirte agent talks to systemd, and systemd stops or starts the services. Hirte also extends D-Bus over TCP to reach the Hirte agents on each of the nodes, simply relaying those D-Bus messages around the environment. So this Hirte agent will tell this systemd to go into reverse mode, and that one to go into reverse mode. If a service crashes for any reason, the local systemd notices and tells its Hirte agent that a service crashed, that gets relayed back to Hirte, and Hirte tells the state manager that a service crashed. So if you're driving along at 60 miles an hour in self-driving mode and your sensor service crashes for whatever reason, the General Motors application has to be notified, because the car is no longer safe and has to go into some reduced mode. Think: you're in self-driving mode, and this is when it tells the human being to take over; I can't do self-driving anymore, something bad happened. So we had to build this entire system, and if you go to github.com/containers/hirte it's available there. It's fairly simple, fairly elegant, and it's written in C, again because of functional safety: we have to build code for non-multi-threaded environments.

When we talk to General Motors, we also tell them how we think they should define their applications. So what structured language do we want for describing the applications that run in this environment? The answer is Kubernetes. We want to use the Kubernetes structured language, Kubernetes YAML, to define the containers running in the car. Podman has full support for Kubernetes YAML; it understands how to set up containers and pods from it. And the nice thing is, if we build on Kubernetes YAML, then General Motors or any car company can use OpenShift to run all their CI/CD: take the same definition of the application that's going to run in the vehicle, run it in the cloud, and run all sorts of tests on it, maybe even have OpenShift drive tests on the actual native operating system. So Kubernetes becomes sort of a scheduler for your whole testing environment, and we have the same language all the way up and down the stack. Podman has supported Kubernetes YAML for many years now: podman kube generate and, more importantly, podman kube play can take the same Kubernetes YAML files that Kubernetes understands and run them with Podman.

And when we want to run Podman underneath systemd, we decided to build a better way of doing it, called Quadlet. To give you an idea of where the name comes from: if you play with Kubernetes at all, what do you call a Kubelet when you squash it down? A Quadlet, okay? That's where the name comes from, real clever engineers. So this is an example of a Quadlet, and this is in Podman now, fully supported. You don't have to get RHIVOS for this.
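To make that concrete, here's a rough sketch of what such a Quadlet file can look like. The file name is hypothetical; the image and command match the ubi9-minimal sleep example described below.

    # /etc/containers/systemd/mysleep.container (file name hypothetical)
    # A Quadlet: it looks like a systemd unit file, and a generator turns it
    # into a real .service unit on the next systemctl daemon-reload.
    [Unit]
    Description=Minimal sleep container

    [Container]
    Image=registry.access.redhat.com/ubi9-minimal
    Exec=sleep 1000

    [Install]
    WantedBy=default.target

After a systemctl daemon-reload, you start it like any other unit, with systemctl start mysleep.service.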
Anybody that's played with Podman in the past knows Podman had podman generate systemd, which would take running containers and pods and generate a systemd unit file that captured our best knowledge, at the time we wrote it, of how to run containers underneath systemd. The problem is that the result becomes a static document, a static service sitting on the system. So when some of the RHIVOS engineers looked at podman generate systemd, they said, well, there's a better concept inside systemd called a generator. With a generator, you define something that looks like a standard systemd unit file, put an executable in place, and when systemd does a systemctl daemon-reload, it runs the generators, which take something that looks like this, a Quadlet, and actually generate a systemd service file from it. So now we get a fairly simple definition of what a container is: here I'm just defining a container with the ubi9-minimal image and executing sleep inside it. That's really simple, and what it actually generates is this service file, where you can see the original settings coming through, but also a fancy full Podman command line and all sorts of systemd integration to make it work properly. Basically this captures all the knowledge the Podman team has built up with the systemd team about how to run Podman underneath systemd. I'm already down to 10 minutes, gotta move. Quadlets also support Kubernetes YAML, so we can run podman kube play this way too, and again, this is built entirely into the system. So we're using Quadlets all over the place for running containers.

Now, the last concept we had to work on for the vehicle is called freedom from interference. What freedom from interference means is that there are two types of software running in a vehicle. You have the functionally safe code, classified by Automotive Safety Integrity Level, otherwise known as ASIL. You'll hear the terms ASIL A, ASIL B, ASIL C, and ASIL D; these are the standard levels for functionally safe code. We're only documenting RHEL up to ASIL B. Basically that covers software used for things like driver assistance. My earlier example of the brake eventually applying is actually bogus, because that would be ASIL D, and that's not in our scope here. The second class of applications in your vehicle is called quality managed, QM, which basically means they're quality code but they might not be functionally safe. Think of your infotainment software: in RHIVOS we're describing that you might run your infotainment software, Android operating system type code, inside a VM running inside the QM environment. Other things that might be QM are the seat heater application, where you press a button to turn your heated seat on, or maybe the windows going up and down. Any software like that which isn't really involved in keeping the car safe but is still used, plus other applications that General Motors wants, because eventually the car companies want to make this a money maker for them.
They want to sell you software in the vehicle, and that software is probably going to come in at ASIL A or QM. Basically, we have to take the QM software and isolate it from the rest of the car. So we're designing an operating system with two different instances running inside of it: the ASIL side, which is going to be running lots and lots of containers, and the QM section, which is also going to be running lots and lots of containers. We had to design a sort of sub-environment for QM. Now, we could use virtualization for this, but a lot of the ASIL applications want to control the QM applications, so there has to be heavy communication between the two environments. So we've decided, at this point, to use containerization to isolate the QM environment.

Say you're driving along and someone steps off a curb: the functionally safe environment launches an application to recognize that the human being is there. Simultaneously, you're saying turn the heated seat up, which might launch a container. So how do I make sure that systemd, which is probably doing both operations, is isolated? How do I make sure the Podman starting that container is isolated? We really need to isolate the entire stack: a separate systemd instance, a separate full Podman instance, and this is how we're doing it.

This next section is all about how we set up QM, and we're using Quadlets for it. This is the QM container Quadlet, and if you go to github.com/containers/qm you can install QM right now on your Fedora 38 systems. QM is basically a systemd unit file, again a Quadlet, that looks like this. The top part is standard systemd settings for things like cgroups to isolate the environment, and the bottom part is the fields we use to set up Podman.

The first thing we do is identify the entire cgroup, the entire environment that's going to run your QM in the car, so we name a QM slice. Then you can do special things with cgroups. The top one here says I'm going to run all my QM applications on a subset of CPUs: my laptop has 12 virtual CPUs, and I'm saying the QM can only run on six of them. The rest of the environment, the ASIL side, can use zero through 11, all the CPUs, but the QM is restricted. This is easily changeable by General Motors; if they only want to give it two of the CPUs, they can do that. Similarly, CPU weight. In cgroups the default CPU weight is 100, so if you set the QM's CPU weight to 50, all the processes inside the QM together get one slice of CPU for every two slices the rest of the system gets. I can do IO weight the same way. And again, these numbers can all be changed to give the QM a bigger or smaller share.

The next thing I want to quickly mention is the OOM killer. With cgroups, if you start to run out of memory on the system, the kernel can't take memory away from a process. All it can do is shoot it in the head.
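Before we get to the out-of-memory piece, here's a rough sketch of the kind of systemd resource-control settings being described. Treat the unit name and exact placement as illustrative; the real QM packaging may put these directives elsewhere, for example directly in the Quadlet's service section.

    # qm.slice (illustrative sketch of the cgroup isolation described above)
    [Slice]
    AllowedCPUs=6-11    # pin QM workloads to six of the twelve CPUs
    CPUWeight=50        # half the CPU share of a default (weight 100) cgroup
    IOWeight=50         # the same idea for block I/O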
So what we want to do in the QM environment, for the OOM killer, is say: I am the Katniss, right? Pick me, pick me to kill. We do that by setting OOM scores. The OOM score adjustment goes from minus 1,000 to plus 1,000, and all processes start at zero, and what we're saying here is that anything in the QM gets priority to be killed over the rest of the system.

The last thing I want to show in the systemd part is that we define where the software is. We're not using an OCI image for this environment; we're installing the software directly on disk. The software goes into a rootfs under /usr/lib, and then in the container section of the Quadlet we refer back to that rootfs. That's how the connection is made inside the Quadlet.

Now we're into the parts Podman interprets. The first thing we do is name the entire container, and it's named qm. We want to run systemd as the primary process inside this containerized environment. In this case we're probably going to share the host network, because a separate network just adds complexity. We can adjust the capabilities available in the container; we probably want to run a lot of somewhat-privileged processes in here, so we leak in the capabilities needed to run containers inside it. We can add special devices if devices need to be passed in. We run the root file system read-only for the entire environment, except that we need a read-writable /etc and /var; that's how you set up a read-only image and still keep /etc and /var writable. And finally, we want SELinux running inside the QM, to isolate the containers in the QM from each other and from the host operating system. All of that generates a huge Podman command line, which shows how the Quadlet gets converted.

The last thing in the QM package is a big setup script that sets up this entire QM environment, and I'm going to start it now since I'm running out of time. Everybody get off the network so this works fast. All right, QM is a standard package inside Fedora 38, and I'm running the script now. The script goes out and installs all the software that I'm about to demonstrate. It installs the rootfs; I'm dropping in all the files, and in fact I destroyed that entire directory and I'm reinstalling it right now. These are the only packages we put in the QM: the SELinux policy, because we want to run SELinux inside it; Podman and systemd; and the Hirte agent, because we want the Hirte on the host to manage the Hirte agent, which manages that inner systemd. So we'll have two systemds running, one in each of these environments. This is the software being installed right now; that's dnf installing those packages. The script can be run multiple times to update the software after the fact. We also install a containers.conf, which is how we reconfigure Podman inside the QM, and there are a couple of key fields in it. This tells Podman to set the OOM scores, the Katniss thing again, and it does two things in this environment. If you recall, the QM itself was 500; now we set all the containers inside it to 750, which means each of those containers should be killed before the QM itself is killed.
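As a rough sketch, that containers.conf drop-in might look like this. The field name is to the best of my knowledge of Podman's containers.conf options, so treat it as illustrative rather than the exact file shipped by the QM package.

    # containers.conf inside the QM environment (illustrative)
    [containers]
    # QM itself runs at OOM score 500; its containers run at 750, so each
    # container is killed before the QM environment as a whole.
    oom_score_adj = 750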
There's also the memory oom group cgroup setting, which tells the kernel how to kill: normally the kernel kills individual processes, but if you set that flag on the cgroup, it kills the entire cgroup, so whole containers running in the environment go away rather than single processes.

Lastly, the setup script sets up user namespaces. We want to take advantage of user namespaces to make sure the UIDs used in the QM environment are different from the UIDs used in the ASIL environment. So we pick out 1.5 billion UIDs for one to run containers in, and a different 1.5 billion for the other. If you look up here, I've allocated 1.5 billion UIDs starting at 1 billion for the QM container, and then another 1.5 billion starting at 2.5 billion. To give you an idea, there are about 4 billion UIDs available on a Linux system. The last thing we do is set up Hirte, and all we do there is say that the Hirte agent inside the QM environment gets the same node name as the one outside the QM environment, with qm prepended to it.

Okay, that finished the install. I'm showing the QM service up and running: this is a Quadlet that generated a service, and the service is now running on the system. If I look at the CPU weight, remember we set it to 50. The nice thing is, say the ASIL environment needs more priority while something is happening and I need to squash down the entire QM environment; you can do that with cgroups. I had the CPU weight at 50, and I'm changing it to 10, so now my whole QM environment has dropped to 10, which means for every ten slices of CPU the rest of the system gets, the QM only gets one. What's interesting is that a service running under the QM still shows a weight of 50, but that's nested underneath the QM's 10, so it only gets its 50 share of the QM's 10. Everything is isolated inside the environment.

Here I'm showing what the QM looks like: I just did a podman exec to show you the processes running in it. It runs with a separate SELinux label, qm_t. And when you see podman exec in front of a command, that basically means: run podman inside the QM. So I just ran a container inside the QM environment, then another one, then a container outside the QM in the ASIL environment, and to show you there are two different Podmans running, the two different databases show different images in each environment. This also shows user namespaces: I can run lots of containers inside the QM environment, and notice they all start with a UID of one-billion-something, each one with a separate UID range. And if I look on the host, my system isn't set up quite the way the documentation describes, but you can see those are running around 500 million, so the containers on the ASIL side are running with different user namespaces than the ones inside the QM.

I'm out of time, so I'll skip ahead. With Hirte set up on the system, here's Hirte running, listing all the running services. My laptop's node is called fedora, and down at the bottom you'll see all the services running inside qm.fedora. Now I'm going to demonstrate pulling an image down inside the QM environment: I'm pulling a UBI 8 Apache image into the QM's Podman database, and good, you were all off the network, that was quick.
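For reference, the pattern for driving the QM's Podman from the host looks roughly like this; the Apache image name is a placeholder, not necessarily the exact image used in the demo.

    # Run podman inside the QM container (named "qm") from the ASIL side
    podman exec qm podman pull registry.access.redhat.com/ubi8/httpd-24
    podman exec qm podman images   # the QM environment's own image store
    podman images                  # the host (ASIL) image store, a separate database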
Okay, now I'm setting up a Quadlet, just a simple Quadlet to run the Apache image I just pulled, and I'm setting two fields in it: the image name, and network equals host so it runs on the host network. Then I use podman cp to copy the file I created in the ASIL environment into the QM container. I do a podman exec to run systemctl daemon-reload inside the QM environment, which triggers the Quadlet to become a service. Now I start the service via hirtectl: hirtectl restarts that new Quadlet service I just generated, and it can do it in the QM environment. Then I list the units and show that the service is now running inside the QM, and I can curl it. It's not a great demo, but it basically shows Apache running inside the QM environment. I can stop it with hirtectl, list the units, and show that it's all done. And that is the end of the presentation, except for a shameless plug to buy my book. I think I'm out of time, I'm sure they've been flashing that up, but this comes out of Roddick's time, so I don't really care. Any questions? Yes.

What kind of hardware are we talking about? So right now with General Motors we're working on Qualcomm, and Qualcomm is developing brand-new hardware for the operating system. We've talked to lots of car companies, and the three vendors they seem to want to work with are Qualcomm, NVIDIA, and Texas Instruments; those are the three names we hear. But this is RHEL, right? We want to be able to run on any hardware; we're not building the operating system for one specific piece of hardware, we want it to be general purpose. By the way, it's very enlightening to be told. Yeah. Any other questions? Come on, you'll just get less of Roddick, this is good.

Where did that idea come from? That wasn't my idea, that was Alex's. Oh, to get rid of the Quadlet, to get smaller? Yeah, I'm not sure. Yeah, go ahead.

Do the changes go back upstream? So anything we change, and there are basically about a hundred people working on RHIVOS at this point, anything we change or find to help, say, speed up boot goes back into the regular kernel, into the upstream kernel, so everything we're working on is going back into RHEL. I'm not a kernel engineer, so I'm not sure what we've had to fix there, but we've had to fix a lot of things in Podman, and we're working with other parts of the operating system too. Even going through the FuSa process we're updating hundreds of man pages, just because we find problems in them as we actually read them closely.

Yes, and I'm supposed to be re-asking the question, sorry. Are there any worries about disk size? One of the things I did cut, because this slide deck goes on quite a bit longer and usually takes well over an hour, is that we talk about potentially having separate disks for the QM environment, so the QM environment can't accidentally use up all the disk space that the ASIL environment needs. And traditionally the car companies have lots and lots of partitions, way more partitions than we would normally recommend, basically for isolation like that.
The I/O cgroup settings help there too; they take away the ability to pound the disk and keep another application from running. All right, last question, yes.

Do we have any type of monitoring to see what's going on? Yeah, you sound like General Motors. What General Motors wants is to know that the car is running out of memory before it actually runs out of memory, or to know when we hit 80% CPU, things like that. So we're looking for open source projects for this: they come to us and say we need this, and we don't want to write brand-new code; with Hirte we had to write brand-new code. We're trying to make sure we use existing open source projects, so we're looking at different things. Right now PCP, Performance Co-Pilot, is, I think, how RHEL does this kind of thing today. But General Motors wants us to be able to monitor things like special devices, and we have to make sure they can build code to look at GPUs and so on. But yeah, PCP is what we're thinking right now, and if anybody has suggestions, we're always open. Anyway, thank you for having me, and Roddick, you can take over now.