for, you know, the statistics on where the RT patch is, right? After asking a few people for slides, I cut and pasted them, so the only slides I have are not mine; I copied them. Just this one slide is mine. So, a quick update. I'm sure you've heard about the PREEMPT_RT patch; it's been going on a while, so I'll give a little history. Back in 2004 it was a little small project, started by Ingo Molnar, called, I think, the real-time preempt patch, to try to convert Linux into a real-time operating system. There were other things at the time, like RTLinux, by Victor, whose last name I can never pronounce, so I'm not going to try. That was more of a microkernel that ran the real-time work, almost like a hypervisor, where Linux was kind of a guest. Those kinds of real-time platforms, which turned into RTAI, and I think Xenomai is still around today doing the same thing, have their use cases, because you can get really, really small, and if you really want to verify the entire kernel, a small microkernel makes that possible. The problem with that approach is that the applications that need to run real-time have to know the interface for communicating with that microkernel. When your real-time application runs on one of these things, it doesn't really run in Linux; it runs on top of the microkernel, and Linux is just pushed off to the side. Our approach was: let's make Linux itself into a real-time kernel. And a lot of people use the term hard real-time, which I hate; I'm glad I haven't heard it in a long time. Soft real-time, hard real-time: good, because there's really no such thing. Real-time just means you can guarantee outcomes; it's deterministic. I always make the same joke in all my talks: I hate the term real-time because it's so ambiguous.
Whenever you hear real-time, you don't know if someone means, say, real-time delivery, whatever that means. But really, what ours is, is a deterministic operating system, or DOS. I guess people don't laugh at that as much anymore. Anyway, it's all about determinism. As I told someone I worked with a while ago, real-time is real-time because there's always a deadline. If you're typing in Microsoft Word and every time you press a letter it takes two seconds to show up, you've failed your deadline. You may not be able to say what that deadline is, but there is one, so every single operating system actually has real-time requirements. I also like to tell people that going real-time doesn't mean you're going to be fast. Paul McKenney gave a nice talk called "Real-time vs. real-fast." Real-time is about guaranteed outcomes: it's the fastest worst case you can have. If you run your benchmarks on normal Linux, they might be faster than running with PREEMPT_RT enabled, but you'll find outliers that could, in effect, crash the system if the requirements demand determinism. That's basically what real-time is. Like I said, we started on this little project back in 2004, and Thomas Gleixner took over. Then in 2007 I gave a talk at the Ottawa Linux Symposium, and the articles about real-time Linux always said "real-time Linux by Ingo Molnar, Thomas Gleixner, and others." So I got up there and said, hey everyone, I'm "others." I've been working on it with them since 2004. I'm not as involved as I'd like to be, but I do run the stable RT trees; right now my effort is keeping the stable real-time kernels going.
Alongside the development real-time tree, any time upstream has a long-term stable (LTS) release, we try to follow it with a real-time patch, so there's a matching stable real-time release. So let's move forward and look at the direction of where PREEMPT_RT is right now. Like I said, I copied this slide. It shows the patch queue status up to 5.19-rc1; we're at rc3 now, I believe. You can see we've actually shrunk the patch set quite a bit, and what that means is the code is going into mainline. For a long time, Jonathan Corbet at LWN would predict that this year the real-time patch will be merged; he's been doing that for the last ten years. People ask, what's the roadmap? When I got that question once, the roadmap I showed was a picture of a city with all its bars. Anyway, everyone asks, when is the real-time patch going into the kernel? And I say: it is. It is going in; every year some of it goes in. People don't realize ftrace came from the real-time patch. Lockdep came from the real-time patch. Mutexes: the mutex code in the Linux kernel did not exist until the real-time patch introduced it. Believe it or not, it used to be semaphores. We went around, back in 2005 or 2006, and every mutual-exclusion location in the kernel was actually a semaphore down and up. The real-time patch required that a mutex have an owner, and semaphores don't have owners, so we introduced the mutex code for that. So the mutex code in the Linux kernel came from the real-time patch. All these years, the real-time patch has been going into the kernel.
And it's funny: every time you watch the patch set shrink, it suddenly grows again because we decide to implement something else. Believe it or not, for a short time KVM was actually in the real-time patch, because they wanted to test it, so we pulled it in. This is the current state as of 5.19-rc1: we're down to 49 patches, from about 350. It might grow again, though, because if you looked at the Linux kernel mailing list this week, a patch set went out to revert the printk updates. It hasn't been pulled into Linus's tree yet, but it probably will be, and we'll have to do a take two, because we discovered an issue; actually, someone reported it, and Linus said, this one's on you guys, you've got to fix it. We had a big-hammer approach to fix it, but a lot of us are saying we need to do this properly, because this opened up a window. It's a bit like Spectre: no one thought about all the Spectre vulnerabilities, but once they did, everyone said, let's look at other places the same problem could be. With the printk consoles we have to go and look the same way, and we want to make sure we do it right. So printk may grow the patch set again, but hopefully it will be packaged with the rest of the changes. The essential part is basically the big switch that allows you to turn on PREEMPT_RT. The CONFIG_PREEMPT_RT option is in the kernel, and everyone says congratulations, you've got PREEMPT_RT in there. All the code is there, and it has a switch, but nothing enables it: it depends on ARCH_SUPPORTS_RT, and no architecture provides that yet. So we got it in there, but you can't turn it on. Okay, that's the essential code. The rest is just a bunch of changes we have all over the place, little details that need to be cleaned up, minor things.
Nothing hard: some code in the kernel just didn't quite work for real-time, and we have to modify it a little. Which means we also need education, to teach people how not to do that anymore. Again, this slide is the breakdown of patches and files. But that's all my slides, because this BoF was really meant to be more for Q&A, and I think I've given a little story of how everything stands. So, does anyone have any questions? If you want, I have the real-time tree here; let me make it bigger, and we can actually walk through all the patches if no one has questions. I can appreciate your disdain for the hard real-time nomenclature, but what could a developer use to convince management or higher-ups who say we have to use a proper RTOS for safety-critical work, for flight controls, things like that, instead of Linux? Is there an argument you could help us make that says, yes, Linux has what we need and it'll be able to do the job? Before I give it to Tim, I'd rather hand that right back to the person I gave the mic to, and he'll answer that question. Introduce yourself. I'm Daniel; I work with the real-time people as well, more on the analysis side. On this question of going toward safety-critical systems: the real-time patch set is part of a bigger effort to create the artifacts needed for the certification of Linux for a class of safety-critical systems. The patch set is fundamental to that because it simplifies the task model of Linux, and it simplifies the way we can demonstrate that we can deliver a predictable time to detect and react to a failure. That's the main reason RT is important for safety-critical systems.
Obviously, we also need predictable deadlines for tasks; those are important. But that's almost a quality-of-service concern, because the most important thing for enabling Linux in safety-critical systems nowadays is having predictability in reacting to failures. RT helps there because the task model is simpler: we don't have softirqs in the same way, and we have very short preempt-disabled sections, so it's faster to react to events, and easier to provide evidence that this is doable. So it's part of the solution, but it's a bigger problem, and there is a working group inside the Linux Foundation working on it. For example, to certify Linux to run in cars, it needs something like ASIL-B, so they take the rules, like the ISO 26262 standard, try to decompose them, and create artifacts to justify the safety arguments. It's beyond technology; there are also procedures. The working group at the Linux Foundation is named ELISA, or Eliza, depending on your accent, and there are many companies involved. Kate can probably talk about it better than I can, but RT is a fundamental part of this process. Sorry, Tim. ELISA just stands for Enabling Linux In Safety Applications. So if you're interested in the safety space, we're pretty open, the videos are there, and we would welcome more kernel people participating to make sure the argumentation is sound. And just before Tim goes on, I want to see if I can explain ELISA in layman's terms. Basically, you can't guarantee everything in the kernel, because the space is so big that full verification is intractable.
So the idea is: can you put something in place, something you can prove correct, that babysits that code, making sure it's moving along as expected? If it ever detects a failure, that something happened that's not good, you have a safe way around it. Basically, you implement a catch-all, a watchdog of sorts, that just monitors that the system is progressing the way you expect; if not, you have a safe way of getting out of it, warning everyone, saying, okay, we're going into safe mode, we're shutting down. Is that basically the 10,000-foot view? I think the key thing is that safety always happens as part of a system, and there are different ways to make it safe. One of them is timing; one is having a separate watchdog process; some of it is trace analysis to prove that certain things will not happen. So there are many techniques that have to be used and are available to use, and different subgroups in ELISA are looking at different parts of this, so it's a question of what your application is. But the key insight is that safety is a property of a system, not of a single component. Yeah, and Tim, sorry. Did you forget what you were going to say? Okay. In terms of arguments to executives: we've been doing this a long time. There's a misconception that because a system is small it's necessarily more proven, or better analyzed, and that's not always the case. If you want to throw big words at your management, say "stochastic." The PREEMPT_RT patch has been used in a huge number of products with hard timing requirements. One you can throw out is SpaceX: the Falcon 9 rocket uses real-time Linux. Another is Tesla's full self-driving, well, it's not really full self-driving.
And the mistakes Tesla cars have made have not been related to real-time. Not PREEMPT_RT; it's not PREEMPT_RT's fault if a Tesla goes off the road. We used to see this a lot with routing: people would say, if you have this tiny, tiny kernel, you're going to do networking faster because your code paths are shorter. Well, no: there are also things like the algorithms you use, and those affect the real-time guarantees you can provide in a system. So anyway, that's my answer on what to say to management. And I guess the follow-up is: if you need to talk to someone, find Tim or Kate after this meeting, and maybe we can help you get the resources you need to go back to your management and prove your case. Sir, I had two questions. The first is that the Yocto Project has a reference kernel called linux-yocto-rt, and I believe, though I've never tried it, that you could build an image and bring it up in QEMU. Is that a good platform to test real-time? So basically, you mean testing on QEMU, because it's convenient? Okay, well, it matters what you want to test. Do you want to test whether your applications will run? Sure. Do you want to test whether your applications will run within their deadlines? Maybe not so much. Actually, when I left Red Hat, there was a project on making real-time guests, and I've seen a lot of really good work on that. So if you want to use QEMU this way, what you do is boot the bare-metal host kernel with the PREEMPT_RT patch, then set the QEMU process to be a real-time process and isolate it onto a CPU that runs by itself.
So in essence your QEMU emulator is now almost bare metal, because it's got a dedicated CPU; do all the work to isolate it nicely, run it real-time that way, and you'll probably get close to the same guarantees you would get running on bare metal. But does the host machine also need PREEMPT_RT? That's what I said: you boot the bare metal on a PREEMPT_RT kernel. Although, actually, you may not even need that, because with isolated CPUs, if you do it properly, Linux 5.19 should have enough tooling in there to isolate processes. But enable nohz_full. Not nohz: nohz_full. How many people are unfamiliar with what nohz_full is? Okay, good, I'll explain it. With nohz_full, if there's no reason to fire the timer tick, it doesn't fire. In other words, if you have a single process, a real-time process, scheduled on a CPU with nothing that's going to interrupt it, the tick turns off. Once it goes into user space, it stays in user space and never enters the kernel unless it actually makes a system call. To get this to work, you also have to move interrupt affinity off that CPU; I usually boot with isolcpus enabled; and you've got to push the RCU callbacks off there with rcu_nocbs. There are a few things you have to do, and once you have it all isolated, then nohz_full takes effect. If it's a non-real-time task and there are other tasks, the tick will still run; but while that single task is running, the periodic tick, your 1000 Hz or 100 Hz or whatever your config is, turns off.
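Collecting the steps just listed into one place, the isolation recipe looks roughly like this. The CPU numbers are invented for the example, the commands need root, and exact flags vary by kernel version, so treat it as a sketch rather than a copy-paste recipe:

```shell
# Kernel command line: carve CPUs 2-3 out of general scheduling,
# stop the tick on them, and move RCU callbacks off them:
#   isolcpus=2,3 nohz_full=2,3 rcu_nocbs=2,3

# Steer IRQ affinity to the housekeeping CPUs (mask 0x3 = CPUs 0-1):
for irq in /proc/irq/*/smp_affinity; do
    echo 3 > "$irq" 2>/dev/null
done

# Pin the workload (e.g. QEMU) onto an isolated CPU as SCHED_FIFO:
taskset -c 2 chrt -f 80 qemu-system-x86_64 ...
```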
So if QEMU is running and doesn't have to make system calls, which it probably will at some point, it might still see some interference. But honestly, I was shocked at the benchmarks we were getting when I left: QEMU set up this way was within less than a hundred microseconds of the cyclictest latency under full load. Just so people know, I work for Red Hat, and I work on this kind of setup. If you do proper isolation and run the guest like that, with the host running PREEMPT_RT, though it might also work without PREEMPT_RT on the host, and the guest running PREEMPT_RT, with the real CPU and the virtual CPU isolated and a dedicated process running there, the added latency, the noise, is around 10 to 20 microseconds. And there's a tuning tool that Red Hat ships, and other distributions probably ship as well, called tuned. It has profiles, including a real-time profile and a real-time virtualization profile, that automate this isolation for this use case. So it's possible, and it gives good results. The problem sometimes is synchronizing debugging between the guest and the host, but the results are already good. My second question was: there's a processor, the Arm Cortex-R, where the R is for real-time. Is that applicable to PREEMPT_RT? I don't know if it's been brought up or not; I just wonder what that processor is for. Okay, can you take the mic to answer it? Someone from Arm in the audience can correct me, but I don't believe it runs Linux in the usual way; it has some restrictions and so forth, so the M cores and the R cores tend to be used for roughly the same kind of functionality. Possibly, but like I say, I have not seen a port. Would it be something that could run no-MMU? Yeah, no MMU.
Yeah, it could run no-MMU. So that's basically unrelated to PREEMPT_RT, and to Linux in general. Last question from me, then I'll move to the other side. If user-space code, or kernel code, is designed badly, say somebody schedules a task as SCHED_FIFO priority 80 and then sits in a while loop for a long time, it can still hurt the system, right? So bad kernel driver code or bad user-space code can still cause problems, even with PREEMPT_RT? Well, the advantage of PREEMPT_RT over the microkernel approaches is that it's normal Linux code. If it runs on normal Linux, it will run on PREEMPT_RT, just with better latencies. And yes, if you create a user-space tool, set its priority to 80, and nothing else on the system is above 80, then when it runs it could run forever in a while loop. By the way, Gratian, I don't want to mispronounce your name, Gratian gave a great talk yesterday about everything we've been talking about, "Tips and Tricks for RT Tuning," so go download the video. When I talked about isolating CPUs, he actually covered a lot of that, how to do it. And I gave him a great segue, because I said, you forgot about RCU no-callbacks, and he said, great segue, hit the next slide, and the next slide said rcu_nocbs. So, the comment was: what if there's a bad driver, or bad user-space code? Yes, those are issues. If you have a driver that for some reason has a code path that's not written properly, that's something that may need to be fixed, and the same thing goes for user space.
This stuff doesn't just magically happen. You actually have to put your processes in the right scheduling classes, at the right priorities. So there is still analysis to do; it's not just magic. Yeah, it's a system that you have to analyze. Do you always have to do the analysis? Yes, you always have to do the analysis. I gave a talk once basically titled "hard real-time is hard." There is no magic bullet, and the sad part is some people say, hey, I'll just put the PREEMPT_RT patch on and everything will run just fine. And I'm like, well, your SMIs take milliseconds, you know. I have another talk about that. Okay, so, RTLA. Unfortunately, I had a conflict with Daniel's talk yesterday; I had to be at a board meeting. The RTLA talk was actually one of my highlights, one of the talks I was going to come watch. So I can't speak to it because I wasn't there, but everyone, including myself, needs to go back and watch the video of Daniel's RTLA talk. Yeah, it's a new tool in the kernel tree that helps analyze the timing behavior of the system. Yes, the Real-Time Linux Analysis tool. I've got an easy one. Most of my team works with an SoC vendor, and we spend a lot of effort optimizing power for the system, until we get to customers and they enable RT. You enable what? Real-time, PREEMPT_RT, the entire shebang, and of course cpufreq and cpuidle fall short. Is there anything being done to see if we can marry these two, so we can have cpufreq and cpuidle active with PREEMPT_RT? Okay, so the question is basically power savings versus real-time, right? Now, there's a standing joke about that.
If you really want real-time responsiveness, you'd better invest in air conditioning. Because, okay, yes, the best thing for power savings is when you're idle, you turn the CPU off: the deep C-states, you go down. The problem is that it can take a long time to bring the CPU back up. If a CPU goes into full deep sleep, it can be close to a millisecond to bring it back up to speed. And it's not the same every time. Yeah, it's not the same every time. So, as I've already said, real-time is about determinism. Now, if your deadlines are 100 milliseconds, milliseconds, not microseconds, and the latencies involved are in the milliseconds, then yes, great: since it's all about determinism, if you find the worst-case scenario and can say, in this situation we can go into full sleep, then wake up and still make our deadlines and our latencies, you can do that. But in a lot of cases, like high-speed trading, they just put their systems into idle=poll, which means the CPU just spins, and they invest in more air conditioning, because those machines really become nice little heaters for everyone. So, is there anything we can do? It's one of those things that's a trade-off, and I think it's more of a hardware question; there's not really anything you can do in software. We can do tips and tricks, but everything that does power saving slows things down, which will increase the chance of a missed deadline. Just to add on top of that: SCHED_DEADLINE is aware of the frequency you're running at, and it can adjust the tasks' runtime accounting according to the frequency of the processor.
That's one step in that direction. But it depends on your deadline; if you have very short deadlines, you can't do it. Now, what might be interesting, and this would probably take work, and maybe someone's already doing it, is this: you let your system go to sleep, but you know when the next deadline is about to come. If you can identify windows where there's nothing happening on the CPU and nothing will happen, you could possibly write code that puts the system into a deep sleep and then wakes it up before things start happening. If there's some cyclic pattern in what you're doing, like, okay, we have this window of time where we're doing a lot of work, and then a window of just idleness, you probably could do something in software to handle that. That would be an interesting project; I don't know if anyone's doing it. You can't really generalize it, though. In real-time scheduling theory, this is about being clairvoyant, about knowing what will happen in the future, right? And that's not the general case. Yeah, it's not common, but there are possible cases. Hard real-time is very much, like I said, hard, and you have to know the entire system and your environment. It's not just the software; it's the hardware, the environment, the interfaces, everything has to be considered. There's no magic bullet. Yeah, the problem is that it's hard to generalize these corner cases, so each solution needs to be... customized. Yeah.
Now some questions from the audience online. Allison is asking, and this is probably one for Sebastian: ktimers does not prevent transitions to deep C-states when non-clock_nanosleep, non-alarm timers are pending, and that's true even with Frederic's subsequent bug fixes. Do others also find that ktimersd is not raised when timerfd timers are pending? Okay, wait, say that again. You read that very monotone, and I'm like, okay. That's the kind of question that's better on the mailing list, where you can think. Which part was first? Ktimers does not prevent transitions to deeper C-states when non-clock_nanosleep, non-alarm timers are pending. I think it's related to the cpuidle driver: the idle driver tries to predict when the next wakeup is and to pick an idle state that's not deeper than the next timer allows, and the question is saying there are timers that are not being observed in that decision. I think it's a complementary question to the previous one. Okay. So, yeah, I don't know that I can answer that here; it's better on the mailing list. Yeah, ask on the mailing list and we'll try to answer it properly, because you need to think about it. That's something I'd have to sit down with and probably go back and look at the code. Just a second, wait. There are hints coming from the cpuidle framework for each of the C-states: what the residency requirements are, and the entry and exit latencies. How do we factor those into the timer decision-making, when the next event is supposed to happen, what you need to do when you go nohz_full? These are things we should probably have a deeper discussion on. Yeah, basically, let's refer to the mailing list. So, is there any hope for timerfd and epoll to work with RT? That's another question I'm not going to be able to answer.
I'm not even going to try, unless someone else wants to. I didn't know about that issue. What kind of design tools do you use to plan the timings, other than trial and error? Well, that's the kind of question that makes you want to take your laptop out back. It's not trial and error; it's measurement. When I've done anything like this, we actually look at what the incoming event is and what the required response is. I know high-frequency traders have a lot of data saying, here's the requirement. So you look at the requirements of the system you need, you write the code, and then, yes, you have to measure: you have to run it to see what the worst-case scenario is. I guess one follow-on question is, how do we know we've hit the worst-case path? Daniel, does your runtime verifier do something like that? We can create models to try to capture those cases, yes. So there's a runtime verifier that Daniel is working on that allows you to create a model of the system, at least for the kernel side; you'd probably need something similar for user space as well. Is there something for user space too? I think we would just plug in user-space tracepoints. Yeah, so if we had user-space tracepoints, it hooks the tracepoints in. You build an automaton model of how you think the system behaves, it defines all the states, and the various tracepoints inside the kernel, and you can add more, move the model from state to state to state. I guess you'll even have timings and things in there eventually. Not yet, but there are plans. Yeah, so eventually you'll have timings.
If anything ever goes outside that model, you choose the reaction: you could print a warning, or a more drastic reaction, you could panic the system, whatever you decide should happen when the model breaks. I guess it's pretty fast, so you could run it even in production? Running the model is faster than saving the trace to the trace buffer. Right, so you can run it in production. Yeah, so there's a question in the back. Along those same lines, and sorry, I'm not very familiar with the PREEMPT_RT stuff, so maybe this has already been answered: one of the things people like to do with those little tiny real-time kernels is the mathematical proof that they can actually meet all their deadlines. Is there anything like that here? How you measure your workloads is always a sticking point with the mathematical model. Does the runtime verifier actually do the mathematical proof? That's older research I did. Part of the verification work was creating a model of the synchronization of PREEMPT_RT, and from that model I could extract theorems that describe the worst-case scenario for the scheduling latency. I have a published paper where I describe the worst-case scenario at the synchronization level, which is the level PREEMPT_RT operates at, and there is a proof of the bound for the scheduling latency there. This will be part of the RTLA toolset that I presented yesterday. And that's for the scheduling latency. For the scheduling itself, we have SCHED_DEADLINE, and one of the things we try to maintain with SCHED_DEADLINE is a scheduling model that is theoretically correct.
And the model that we have there, which is basically global scheduling that you can reduce to clustered or partitioned scheduling, has theory that backs it up. So you can derive those conclusions. But beyond the synchronization and scheduling levels, there is the other problem, which is the measurement of the worst-case execution time. (We have five minutes left.) There are methods to do that, and people are working on it.

I'll just say real quick that a lot of people, when they do mathematical analysis on code, are not taking into account the nuances of the architecture: things like cache hits, levels of caches, those types of things. I read a really good article a couple of weeks ago whose point was that your machine is not a PDP-11. It's not doing in-order execution of anything. There are all these really complicated things going on in the processor, register renaming and everything, that make it so you can't just analyze the instructions. I don't know how good the analysis is, I'm not casting aspersions on it, but I've seen analysis that just doesn't take into account all the stuff going on, like the microcode changes Intel just made on your CPU.

I just want to say one thing of my own, actually. Surprisingly, I accidentally did something that effectively turned off branch prediction, and it got really bad. I didn't realize how much performance branch prediction gives you. That's the whole Spectre/Meltdown thing, because branch prediction is one of the things Spectre exploits. Some people don't even think about branch prediction, but I didn't realize how much of a performance savings correct branch prediction gives you.

Just adding what people are doing more on the research side regarding execution-time measurements.
On very simple systems, people used to break down every instruction and compose the costs, but if you try to do that on modern architectures like the Intel ones we run on, it becomes too pessimistic. There's too much pessimism involved, and sometimes it's hard to give a worst-case execution time at all. Where the research is going, and there are companies trying to go that way as well, is to use part of this kind of fine-grained analysis, splitting the code into blocks, and then to use statistical models that give you some level of certainty in the prediction of the worst-case execution time. It's a complex problem, but there are tools and methods to do it. Most importantly, when you go to the safety-critical side, maybe more important than all of this is being able to detect failures and react accordingly. That's the way you work around the cases these tools fail to predict.

So I read a paper, and I wish I'd kept it. The idea was that the theoretical worst-case scenario, where you assume you miss the branch prediction, miss the cache, miss everything, is so much greater than the realistic worst case that if you set your deadlines to the theoretical worst case, you're talking milliseconds when you could actually be in the microseconds. That's how drastic the difference is. The paper was about using statistical analysis to determine the actual worst case, or at least a range; it included a proof saying that by doing this you could claim the system will never be worse than some bound, which is much, much less than assuming you take a cache miss every single time, because that's not realistic. It was a great paper, and I wish I'd saved it. I read it three or four years ago.
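The gap described above, between the composed theoretical worst case and what a high quantile of measurements shows, can be illustrated with a toy simulation. The block counts, worst-case values, and the skewed distribution here are all invented; the point is only that per-block worst cases rarely coincide, so the sum of maxima vastly overshoots anything you will ever measure end to end.

```python
import random

# Toy illustration of worst-case pessimism: an end-to-end path made of
# several code blocks, each with a known per-block worst case. Most runs
# of a block are fast; occasional cache/branch misses push it toward its
# worst case. All numbers and distributions are invented.

random.seed(42)

def run_block(worst_us):
    # Heavily skewed toward fast runs; approaches worst_us only rarely.
    return worst_us * (0.1 + 0.9 * random.random() ** 8)

BLOCK_WORST_US = [50, 120, 80, 200]      # per-block worst cases (microseconds)

samples = [sum(run_block(w) for w in BLOCK_WORST_US) for _ in range(100_000)]

composed_worst = sum(BLOCK_WORST_US)     # every block worst simultaneously
observed_p9999 = sorted(samples)[int(len(samples) * 0.9999)]

print(f"composed theoretical worst case: {composed_worst} us")
print(f"observed 99.99th percentile:     {observed_p9999:.1f} us")
```

A statistical approach, as in the paper the speaker mentions, sets the deadline from a bound on the measured distribution rather than from the composed maximum, which in this toy setup would mean a far tighter deadline.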
The point is that in real-time theory, we are always fighting between being too pessimistic and being less pessimistic. You try to be pessimistic enough without being too pessimistic; that's the fine spot we try to find, right? But for the scheduling latency on preempt RT, because of the design of the code, I did the theorem and I did the tool for measuring it. Even taking the worst-case measured values and composing the worst cases one after the other, the latency was still below one millisecond, even being pessimistic. So for the goal of preempt RT, which is trying to give better control of preemption, even with the pessimism, the values are good. There's a paper of mine from 2020, "Demystifying the Real-Time Linux Scheduling Latency." It's in ECRTS.

I guess we're out of time, but I'm just going to make one last call: if you know the paper I'm talking about, because I don't even know the title, email it to me. But anyway, thank you very much.
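As an aside on the composition argument above: the approach is to take the worst measured value of each kernel mechanism that can delay scheduling and add them up, giving a pessimistic but safe bound. The component names and numbers below are invented for illustration; the precise variables and their measurement are defined by the RTLA tooling and the ECRTS 2020 paper, not by this sketch.

```python
# Sketch of "compose the measured worst cases" for a scheduling-latency
# bound: sum the worst observed value of each delaying mechanism, even
# though they will essentially never all hit their maximum at once.
# Component names and values are invented for illustration.

measured_worst_us = {
    "irq_disabled":     28.0,   # longest observed interrupts-off section
    "preempt_disabled": 55.0,   # longest observed preemption-off section
    "scheduler":        17.0,   # scheduler pick plus context-switch cost
    "irq_handlers":     40.0,   # worst-case interrupt interference
}

bound_us = sum(measured_worst_us.values())
print(f"pessimistic scheduling-latency bound: {bound_us:.0f} us")
assert bound_us < 1000, "bound exceeds 1 ms"
```

Even this deliberately pessimistic sum stays well under a millisecond, which is the shape of the result the speaker describes for preempt RT.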