Can I start? Let me introduce you. So, hello. We have now Daniel, who's going to talk to us about the deadline scheduler. Daniel Bristot is a Red Hatter — an SME at Red Hat. OK.

Hi. As he said, I am Daniel. I come from the pizza planet — I live in Pisa, in Italy. I work as a kernel engineer in the support engineering group, mainly with the real-time related products, like Red Hat Enterprise Linux for Real Time. As a hobby, I do a PhD in Brazil, and I am doing part of my research at the Scuola Superiore Sant'Anna in Italy, where SCHED_DEADLINE was developed — both the theory and the code. My research area is, obviously, real-time.

So let's start with a little bit of theory, to be able to compare schedulers, and then we will get to the SCHED_DEADLINE implementation and how to use it. What are real-time systems? Real-time systems are systems that deal with external events from the real world, and those events have timing constraints: the response to an event must be delivered within the main timing constraint, which is the deadline. Even in HPC, or any kind of processing, there is some maximum latency we must respect. The difference between HPC and real-time is that if a real-time system starts to deliver responses after the deadline, the system loses its value — there is no value in a response that arrives after the deadline. So the correctness of the system depends not only on the logical result, but also on the timing behavior of the tasks and their timing requirements. On a multitasking real-time operating system like Linux, a real-time scheduler is responsible for scheduling tasks in a predictable way, such that all the responses are delivered within their deadlines. In scheduling theory there are two religions. One is the fixed-priority religion, where every task has one fixed priority.
All its activations — all its jobs — always have the same priority; that is what we implement with the real-time scheduler on Linux, the FIFO scheduler. The other religion is the dynamic-priority schedulers, among which there is the job-level fixed-priority class. That means a task doesn't have a priority; each job — each activation of the task — has its own priority, and the next job will have a different one. In this class, Linux implements the deadline scheduler: the task with the earliest deadline is served first — the earliest deadline first scheduler.

In real-time theory, tasks are represented by their timing constraints. It doesn't matter the language, it doesn't matter the processor — just the timing constraints. By turning these timing constraints into variables, we can do some math and compare schedulers: see how good one scheduler is compared with another, and analyze the properties of the schedulers. Another point about real-time is that we are always thinking about the worst case. It doesn't matter if the average case is better; the only thing that matters is the worst case. Take the running time of a task: we don't care if one task is better than another on average. We care that, in the worst case, the run time of the task is bounded — it is known — and then we can compare it with another algorithm: is the worst case shorter or not?

To be able to do this math, the tasks must follow some patterns. One known pattern is the activation pattern. A task may be periodic, when a new job arrives at every period — like, after three units of time, a new job arrives. We can have sporadic tasks, where there is a minimum inter-arrival time between jobs, but the actual arrival can come later than that minimum. And in the absence of any pattern, we have an aperiodic task. We also have a constraint about the deadline — when the task must deliver its response.
In the deadline task model, we have an implicit deadline when the deadline is equal to the period; we may have a constrained deadline, when the deadline can be shorter than the period; and we can have an arbitrary deadline, when the deadline can fall after the next period.

So let's do some exercises to compare the schedulers. Suppose we have a system with three tasks. Task one runs, in the worst case, for one unit of time at every four units of time, and its deadline is equal to its period. Task two consumes two units of time at every six units of time, and task three consumes three units of time at every eight units of time. If we divide the worst-case execution time by the period, we have the load — the utilization — of the task, and the sum of the utilizations of these tasks is less than one, which means my processor is not overcommitted. These tasks will run for about 95% of the time; I still have some bandwidth left to guarantee things.

Using this task set, let's try to schedule it with a task-level fixed-priority scheduler, like SCHED_FIFO, and let's assign the priorities using rate monotonic: the task with the shortest period receives the highest priority. Rate monotonic is optimal among all task-level fixed-priority assignments, so I am using the best way to schedule these tasks. All three tasks arrive at the same time. Task one starts to work; when it finishes, task two starts; then task three is able to run for one unit of time. But while it is running, task one arrives again and preempts it; then task three starts to work again, task two arrives and preempts it, and — boom — task three misses its deadline. So the fixed-priority scheduler is not able to schedule this task set on a uniprocessor while meeting all the timing requirements, obviously.
Now the job-level fixed-priority case: let's take as the example the scheduling rule that the task with the earliest deadline is served first. In this case, you can see we don't have a red block here — we are not missing any deadline. Just to explain the rule: the three tasks are released at the same time. Task one runs first, because it has the earliest deadline; then task two, then task three is able to run. At some points two tasks have the same deadline, and you can pick either one. And when task one preempts, task three still has one unit of time left here, because at that point its deadline is the earlier one.

It turns out that it is mathematically proven that the earliest deadline first scheduler is optimal for uniprocessors: EDF can schedule any task set that is schedulable on one processor. So that's good — we just need to use EDF. Moreover, we don't lose time selecting the priorities of the tasks: you just set the parameters, and the scheduler does it for us. And we don't keep switching tasks very often, like some other optimal schedulers do — the ones that try to be fair.

But, well, there is always a trade-off. Although EDF guarantees the deadlines, it doesn't guarantee minimal latency. With a fixed-priority scheduler, the highest-priority task's response is delivered with minimal latency; so when you want just-in-time computing, EDF is not good for that, even though the deadlines are always guaranteed. Moreover, the complexity of implementing this scheduler is O(log n), while you can implement fixed priority in O(1). But wait — it is O(1) because the user already selected the priorities, and for the user, selecting the priorities can be as hard as O(n!). Still, the implementation is faster, I'd say.
In any case, when you arrange your tasks so that the shortest period gets the highest priority, you have an optimal scheduler for fixed priority — but you can still schedule fewer task sets than when running the deadline scheduler. On the other hand, when you don't have a completely predictable system, EDF is not as well behaved as fixed priority. When one task runs for more than its declared runtime, or when you have an overloaded system, you can get the domino effect with a deadline scheduler: one task runs for more than its runtime, or misses its deadline, and all the other tasks can miss their deadlines too. With the fixed-priority scheduler, only the tasks with lower priority miss their deadlines; the highest-priority task never misses its deadline.

So, OK, let's add some complexity here. I was talking about uniprocessor scheduling; now let's talk about multicore scheduling. On multicore, we have another problem: not only how to prioritize the tasks, but also how to place the tasks on the CPUs — the allocation problem. For this, we have several kinds of schedulers. We have the global scheduler, which manages a single queue of processes and distributes them over all the processors. We can have clustered scheduling, where you have scheduling domains — global within the CPUs each domain manages, but on a smaller set of CPUs. Or you can run a partitioned scheduler, where you say: this scheduler manages this CPU, that scheduler manages that CPU — and then the system behaves like many uniprocessor systems. And we can have arbitrary affinities, where you say a task can run on this cluster and that partition at the same time. By default, the deadline scheduler is a global scheduler, but we can implement all the other options with configuration — I will talk about that later.
On the math side, multicore scheduling adds a lot of anomalies. One anomaly is that there is no way to say what the worst case for dispatching all the tasks is, and this makes the development of a real-time scheduler a bit harder. Many things are still work in progress because of these anomalies. For uniprocessors, the theory is stable — everything is there — but the state of the art of multiprocessor, multicore scheduling is still being written; the development is still happening.

And there is another anomaly, the Dhall effect, which brings a contradiction. For example: I have a system with M CPUs and M tasks, and these tasks run for their whole period — the tasks run 100% of the time. I have a completely full system, and it is schedulable: even with a global scheduler, it will put one task on each CPU and the system is schedulable. Now let's make things better: instead, I will have just M small tasks — very small runtime — with deadlines a little bit earlier than one old, big task has. Well, with only the small tasks, we don't see the anomaly; the system is trivially schedulable. But let's add a little bit of load: let's put just one of those big tasks back into the system. The load of this system is lower than the load of the fully loaded system — but this system is not schedulable anymore. The small tasks have earlier deadlines, so they delay the big task by that small amount of time, and it misses its deadline. This is the Dhall effect, and it is why multicore scheduling analysis is so often pessimistic.

So far I was just introducing the characteristics — the benefits of deadline scheduling and the problems we might have while using a deadline scheduler. Now let's talk about the deadline scheduler we have on Linux and how to deal with these problems. SCHED_DEADLINE is a global (all CPUs) earliest deadline first scheduler.
It handles sporadic tasks — the task can arrive in a periodic fashion, at every period, or some time after the period — and the deadline must be implicit or constrained: the deadline must be equal to or smaller than the period. In these cases, we can schedule with SCHED_DEADLINE. SCHED_DEADLINE takes three parameters: the runtime, which is the amount of CPU time the task needs to be able to finish its work; the deadline of the task; and the period. The scheduler then guarantees that the task will receive its runtime at every period, within the deadline. So the task will have its CPU time to run within the deadline, and then we can guarantee that the task will finish before the deadline.

Now, how do we deal with some practical problems? It is very hard to specify the runtime of a task on Linux, because the processor is very unpredictable, and one task influences the execution time of another task. So it is hard to state the worst-case execution time of a task on Linux, on an Intel processor with a lot of parallelism. In order to prevent a task whose runtime was not correctly estimated from running for more than its runtime and causing the domino effect I mentioned, SCHED_DEADLINE adds a protection: the constant bandwidth server (CBS). The constant bandwidth server guarantees that each task will receive its runtime — and if a task misbehaves and tries to run for more than its runtime, the task gets throttled.

To explain the CBS a little: at every period, a task receives its amount of runtime, and while the task runs, the runtime is consumed. If the task calls the sched_yield() system call, it is put to sleep and is awakened at the next replenishment — in this way we can implement a periodic task. If the task never suspends itself, then at the end of the budget it is throttled.
It will be able to run again only at the next replenishment, in the next period. This helps to avoid the domino effect. And if the task goes to sleep during its activation and is awakened before the deadline, it is able to run again until the deadline; but if it is awakened after the deadline, it is throttled and will start working again in the next period. We have an asterisk here because I found a bug in this part, and we are working on fixing it now.

So, all this complexity just to arrive at one command line — that's how you set the parameters for a task. You can use chrt, saying: OK, I have a video processing tool, I will process 60 frames per second, and to process one frame I need five milliseconds. To be able to schedule this, I will have a period of 16 milliseconds — so at every 16 milliseconds, my command will receive five milliseconds of CPU. And to be able to release the next job before the next period, I need to deliver the result of this processing before the period — within a 10 millisecond deadline — allowing the next stage that receives this frame to run in time as well.

You can also set these parameters in C, using the sched_setattr() system call. But this system call can fail. With the fixed-priority scheduler, you can add as many tasks as you want, because it doesn't try to guarantee that tasks will meet their deadlines — it doesn't even have the abstraction of a deadline. The deadline scheduler, though, will start to reject tasks when it notices that it will not be able to do the work. The deadline scheduler starts to reject new tasks when the real-time tasks try to use more than the time allowed for real-time tasks on Linux. By default — because of a sysctl — real-time tasks can run for 95% of the time; that's configurable, and 95% is the default. So if the requested load is higher than 95% times the number of CPUs, the scheduler starts to reject tasks.
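As a sketch of that command line for the 60 FPS video example above (the tool name is hypothetical; chrt takes the SCHED_DEADLINE parameters in nanoseconds, and setting them needs root or CAP_SYS_NICE):

```shell
# Run a (hypothetical) ./video_tool under SCHED_DEADLINE:
#   runtime  5 ms  - CPU time needed to process one frame
#   deadline 10 ms - the result must be delivered by then
#   period  ~16.6 ms - one frame at 60 FPS
# chrt -d requires a priority argument of 0 before the command.
chrt -d --sched-runtime  5000000 \
        --sched-deadline 10000000 \
        --sched-period   16666666 \
        0 ./video_tool
```

If the admission test fails — the requested bandwidth would push deadline tasks past the allowed share — chrt reports that setting the policy failed.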
But this is just a necessary test, not a sufficient one. What does that mean? Well, let's look at one thing — the Dhall effect. With this guarantee alone, I cannot avoid the Dhall effect: I can have one task running almost all of the time, say 95% of the time on one CPU, and more than M other tasks each running for a short time. The M earliest-deadline tasks are the ones scheduled, and it can happen that my big task has a late deadline, so the other tasks run first, preempt the big task, and it starts to miss its deadline.

So why did the developers choose not to use an admission test that prevents tasks from missing their deadlines? They chose that because such a test is too pessimistic. There is an admission test that guarantees all tasks will be scheduled and will run within their deadlines, even in the presence of a big task — but it is too pessimistic. In the worst case, on a system with M processors, if I have more than M tasks and one big task, the maximum I can admit is about the load of a single processor: I would not be able to schedule more than the load of one processor without risking a missed deadline. Global scheduling is really bad at dealing with tasks with a large utilization while still meeting all the deadlines.

So, how do we deal with big tasks while running SCHED_DEADLINE? How do I put more load on a system that has a task with a high load? SCHED_DEADLINE is global, so it will suffer from the Dhall effect, and a task on the deadline scheduler cannot set its affinity — it runs on the whole domain. But it is possible to break the system into smaller root domains and schedule deadline tasks inside those smaller root domains. That enables us, for example, to break my global system into two clusters of deadline tasks, or into a partitioning.
For example, I can have one root domain here that schedules all its deadline tasks on these two CPUs, and I can set up another domain here to schedule just this one CPU. How do we do that? We do it using cpusets. It's not easy to set up, and people are working on SCHED_DEADLINE to make this more flexible, but that's how we do it nowadays. For example, I will create two cpusets: one with a single CPU, to put my big task and let it run alone there, and a cluster with all the other CPUs. I need to disable load balancing on the root cpuset — that's the root restriction. Then I go to the cluster: I set the CPUs and the memory nodes, and I set it exclusive, because I cannot have overlapping root domains with deadline tasks — one CPU must be managed by one scheduler; I cannot have two schedulers managing the same CPU. Then I move all my tasks to this cluster, and I can start all my small deadline tasks there, because they will not suffer the Dhall effect. Then I go back, create my partitioned cpuset, set it exclusive, set it to run only on CPU zero, and I can run my big task there. So we suffer from the Dhall effect because we are global, but we can work around it by managing the root domains with cpusets.

And so, OK, the presentation was way shorter than I thought, but — when should I use the deadline scheduler? What are my requirements? The first question is: do I have the timing requirements? Because you need to know them: how often a task is awakened, how much CPU time it needs, and what its deadline is. If you don't have these parameters, it is really hard to start using SCHED_DEADLINE, because you don't know how to set up the system. You need to know this — though there is a special case, which I will talk about later. So, OK, I have the requirements; that's good.
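The cpuset setup described above — an exclusive cluster for the small deadline tasks plus an exclusive single-CPU partition for the big one — can be sketched like this (a sketch assuming 4 CPUs, the legacy cgroup-v1 cpuset interface, and the hypothetical names "cluster" and "partition"; paths and CPU numbers will differ on your system):

```shell
# Mount the cpuset controller (cgroup v1 interface).
mkdir -p /sys/fs/cgroup/cpuset
mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset
cd /sys/fs/cgroup/cpuset

# Disable load balancing at the root, so that the child
# cpusets below become independent root domains.
echo 0 > cpuset.sched_load_balance

# Cluster for the small deadline tasks: CPUs 1-3, exclusive.
mkdir cluster
echo 1-3 > cluster/cpuset.cpus
echo 0   > cluster/cpuset.mems
echo 1   > cluster/cpuset.cpu_exclusive

# Partition for the big task: CPU 0 alone, exclusive.
mkdir partition
echo 0 > partition/cpuset.cpus
echo 0 > partition/cpuset.mems
echo 1 > partition/cpuset.cpu_exclusive

# Move a task (PID 1234, hypothetical) into the partition
# before switching it to SCHED_DEADLINE.
echo 1234 > partition/tasks
```

The exclusivity flags are what enforce the rule from the talk: no two root domains may overlap on a CPU that runs deadline tasks.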
Next: am I more concerned about the deadlines of the tasks, or about the latency of the highest-priority task? If I care about the latency of the highest-priority task, it is easier to provide that with the FIFO scheduler than with the deadline scheduler. But if I am really concerned about deadlines, it is hard to guarantee that a task on the FIFO scheduler will meet its deadline, because FIFO does not even have the idea of what a deadline is — while the deadline scheduler has the notion, and guarantees that things are scheduled within the deadline.

Do I have a large number of tasks? That's another good reason, because you don't need to care about the priorities of the tasks with SCHED_DEADLINE. If you think about allocating tasks to processors and setting the right priority for each task in a fixed way — fixed priority — you have an NP-hard problem, and with a large number of tasks it would be really hard to set the appropriate priorities. The deadline scheduler really helps with that.

Then we have the aperiodic case. Say my algorithm is not discrete but continuous, and I have a formula that guarantees that if I receive this amount of CPU time at every period, it is enough for me to do all the work I want. So if you don't know the periodicity of your task, but you know how much CPU time you need and that it is enough for you, you can use SCHED_DEADLINE just to provide Q CPU time at every period P, using just the constant bandwidth server. That's for that kind of task — aperiodic tasks.

And do you want to run a busy loop with SCHED_DEADLINE? Please don't. You will cause the Dhall effect on the first run. SCHED_DEADLINE has a static priority higher than the real-time tasks: the scheduler first tries to pick a deadline task, and only if there is no deadline task does it pick a fixed-priority task.
If you run with Q equal to P, you will just run forever, you will delay everything in the system — you will delay the migration threads — and you end up with a hung system. So please don't; that's not your case.

[Question from the audience] Yeah, that helps for non-real-time tasks. But you might delay the real-time tasks with fixed priority, because the deadline tasks run ahead of them. Oh — can you repeat the question on the microphone? Oh, OK, I got it. So: if my deadline tasks use only 95% of the time, I still have 5% of the time to run other tasks. That's correct. But that 5% is for non-real-time tasks — CFS tasks — not fixed-priority tasks. So you can still delay the fixed-priority tasks, and the migration threads are fixed-priority tasks. That's it. But yeah, that was a good question.

So, OK: the deadline scheduler and the real-time kernel. These are two different things. The deadline scheduler schedules tasks, and it schedules them the same way on the real-time kernel and on the non-real-time kernel. So what are the benefits of using the real-time kernel? Well, the real-time kernel resolves the priority inversion problems. A priority inversion takes place when a lower-priority task is running in a condition that does not allow the highest-priority task to start running. For example, the writeback function for my block disk runs with preemption disabled, to get better throughput — and that delays the activation of a high-priority task. That's what the real-time kernel resolves. In the non-real-time kernel, we have no guarantee of how long a priority inversion will last: we can have a three-millisecond priority inversion, a ten-millisecond priority inversion.
And if I can have a ten-millisecond priority inversion, I cannot guarantee that a task will deliver its response within ten milliseconds, because it can suffer a ten-millisecond delay before it even starts to work — I just cannot guarantee it. That's the benefit of running both together. The real-time kernel gives you a max latency on the order of 150 microseconds, which allows you to give a task a short deadline, like one millisecond: the task will start to run after at most 150 microseconds, and it still has all the rest of the time for its computation. So this is one benefit of using SCHED_DEADLINE with the real-time kernel. But if you don't care about such tight deadlines — if your deadline is one second — there's no reason to use the more predictable kernel. There is another issue about priority inversion too, but the point is: if you have tight deadlines, you actually need to use the real-time kernel.

You can find more about the comparison between the real-time kernel and the non-real-time kernel at this link. Actually, this presentation is the condensed version of an article that explains how to use SCHED_DEADLINE. I was working on it, and I will probably put it on the blog next week, explaining it in detail, with more material.

So what is next for SCHED_DEADLINE? We are now working on GRUB — no, not the boot loader: the greedy reclaiming of unused bandwidth protocol, a runtime reclaiming protocol that will make the deadline scheduler more flexible. For example, a task with an assigned runtime cannot run for more than that runtime. But if that is the only task, the system may go idle — so why not use this idle time, this spare time, to run deadline tasks?
GRUB makes the runtime more flexible: it allows a task to get more CPU time, as long as it does not interfere with the other deadline tasks. Luca Abeni — a professor at the Scuola Superiore Sant'Anna, where I'm continuing the PhD — is implementing this, and he will soon release another version of the patch set; I am actually testing it.

There is also a lot of discussion about trace points to expose the behavior of deadline tasks. I submitted a patch with trace points, but Peter Zijlstra said no. There is a lot of discussion, because he doesn't want trace points specific to one scheduler — he wants them generic for all schedulers: just one sched_switch trace point. The people at Mathieu Desnoyers' company — the LTTng people — are working on it. They submitted a patch set, and it's there for people to review. I need to review it, but I didn't yet, because I was doing these slides. We also need to add more scheduling statistics, so people can know how much runtime a task actually consumed, and whether it missed its deadline or not. Tommaso Cucinotta, a professor at Sant'Anna, is working on that right now.

I had another patch set rejected by Peter Zijlstra as well. We have RT throttling on the real-time scheduler, which prevents the real-time tasks from running for more than 95% of the time: they get throttled, and non-real-time tasks are allowed to run. So — let me introduce the problem properly. The real-time tasks are throttled once they reach this 95%, even if there are no other tasks to run. If the system then goes idle, you are wasting CPU time: you could run more, given that you are not starving any other task. So I submitted a patch saying: hey, if there is nothing else to run, let the real-time tasks keep running.
But Peter Zijlstra said no, because what he really wants is hierarchical scheduling, with the deadline scheduler at the base: you put the non-real-time tasks inside a deadline server, and then you provide the guarantee by scheduling the deadline server, not by throttling the real-time tasks. That would probably also make things like cpusets easier with respect to the real-time throttling guarantee, which today is not really respected there. But it turns out this will be really complex, mainly because we don't accept arbitrary affinities. I don't have time to explain this fully, but it will take some time. Currently we cannot set the affinity of a task on the deadline scheduler, because it is global, and we need to respect that rule to keep the guarantees of SCHED_DEADLINE. There is an article by Björn Brandenburg, from the Max Planck Institute for Software Systems (MPI-SWS) in Germany, that gives proofs that you can actually do arbitrary affinities with global deadline scheduling — but that will also take some time, and everything is under development. Multicore real-time scheduling is still an open problem on the academic side; it will take some time until this becomes reality. On the other hand, it's challenging — we have nice stuff to do here. And that's all, folks. Questions?

Oh, one question. OK, the question was: is there anything to help me estimate the runtime or the period? Currently, you need to use tracing tools like perf or ftrace: enable the sched_wakeup trace point to see how often a task is awakened, and the sched_switch trace point to see when the task starts to run or goes to sleep. Then you do some math with bc, and you will be able to estimate them. That's the state of the art. The trace points for SCHED_DEADLINE would allow us to print how much runtime a task still has when it goes to sleep.
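A minimal sketch of that tracing workflow (assuming the tracefs mount at /sys/kernel/tracing, root privileges, and a hypothetical PID 1234 for the task being measured):

```shell
cd /sys/kernel/tracing

# Trace only the task we want to characterize (PID is hypothetical).
echo 1234 > set_event_pid

# sched_wakeup shows how often the task is awakened;
# sched_switch shows when it starts to run and when it goes to sleep.
echo 1 > events/sched/sched_wakeup/enable
echo 1 > events/sched/sched_switch/enable
echo 1 > tracing_on

sleep 1            # collect about one second of activations
echo 0 > tracing_on

# Wakeup-to-wakeup gaps estimate the period; switch-in to
# switch-out intervals estimate the runtime. Then do the math
# on the timestamps, e.g. with bc.
cat trace
```

This is the "imagination with perf and ftrace" approach: the parameters come from eyeballing the timestamps, not from any dedicated SCHED_DEADLINE instrumentation.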
With that, you could estimate it using just one trace point. But that is still under development, so for now you have to use your imagination with perf and ftrace. We have more questions than deadline scheduler users — no, really, we don't have that many deadline scheduler users in the world yet.

The Dhall problem — explain it again? OK, he asked me to explain the Dhall effect problem again. Yeah, it's tricky. So: here I have a system with M processors and M tasks, one task per processor, and each task runs forever. The load of my system is one per processor — 100% each — times the number of processors: a total load of M. That's a huge load, right? And it is schedulable. Now let's make things better: let's instead give the M tasks a minimal runtime, like one unit of time each. Now, even if I put everything on the same processor, the total load is smaller than one — it's that minimal runtime times M. The system is obviously schedulable. But here we find the anomaly. Let's increase the load of the system a bit — but the system will still be nowhere near as loaded as the first one. The load of my system is now one — the big task running all of one CPU's time — plus M times the minimal runtime. You can see in the drawing that this is obviously much lower than M. But the M small tasks have deadlines earlier than the big task, so they are scheduled first — one property of SCHED_DEADLINE is that the M earliest-deadline tasks run first. So it correctly schedules all the M small tasks, but by running them it pushes the start of the big task past its deadline, and the largest task misses its deadline. You might say: hey, but I could put the small tasks on the other CPUs. You can guess that because we humans are clairvoyant — we can see things in the future. The scheduler is not; it does not see that the task will miss its deadline there.
So how do we deal with this? Create a cpuset, put all the M small tasks confined on the other CPUs, and put the big task isolated. That's the example I gave with cpusets — that's how you make this system schedulable: I put all the M small tasks on one part of the system using cpusets, and let the big task run alone. Poor task, on CPU zero.

[Question] But that's another scheduler — least laxity first — which has other properties. It's always like that: you gain one good property, but you get another, worse, worst case. The only scheduler that is optimal for this class of tasks is something like Pfair, where you slice the tasks into very small slices and schedule those — but then the overhead is prohibitive: you have a lot of timers, a lot of migrations and scheduler switches, and you lose your CPU time. Yeah, that's the example I gave, yeah.

OK, that one is more complex — it's really hard, OK. I will repeat the question. You have a box, and you want to run a virtual machine on this box. The virtual machine has a VCPU, and this VCPU should run alone on a processor. You want to guarantee that this VCPU receives all the CPU time of the real CPU, avoiding any latency — because if you have latency, you may drop packets, network packets, for example. And this VCPU should actually run as much as possible — like, forever, correct? Well, OK. The problem with using SCHED_DEADLINE in this case is that you will always delay all the other tasks. And currently, the state of the art is that there is no way to completely isolate one CPU on Linux: you always have something running there — a kworker that gets scheduled there indirectly, the migration threads, and so on. In this case, if you want to give all the CPU time to this VCPU, the best thing you can do is try to isolate everything else away from that CPU.
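A sketch of that isolation setup via kernel boot parameters (assuming CPU 3 is the one dedicated to the VCPU; these are standard kernel command-line options, and the VCPU thread PID is hypothetical):

```shell
# Added to the kernel command line (e.g. in the bootloader config):
#   isolcpus=3   - remove CPU 3 from the scheduler's load balancing
#   nohz_full=3  - stop the periodic tick on CPU 3 while one task runs
#   rcu_nocbs=3  - move RCU callback processing off CPU 3
#
# After boot, pin the VCPU thread onto the isolated CPU:
taskset -c 3 -p <vcpu-thread-pid>
```

As noted in the talk, once the CPU is isolated like this, the choice of scheduling class for the pinned thread matters much less — it gets the CPU either way.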
And the state of the art is that with nohz_full you can get down to only one tick per second. That's the best you can do. But in this case it doesn't matter whether you use SCHED_DEADLINE, CFS, or the real-time FIFO scheduler: you receive the CPU time because you isolated it. You might not even want to use isolcpus, but you will reach the same result by setting affinities and so on. Okay, it turns out that the best you can reach in this scenario is the same as using SCHED_FIFO. And you really should isolate your CPU, because if you run a SCHED_FIFO or SCHED_DEADLINE task forever, you end up postponing the other tasks that might be scheduled there and causing problems. So it turns out that it doesn't matter which scheduler you use. You might want to use the FIFO scheduler on the real-time kernel to avoid unbounded priority inversions, but that's another subject, involving mutexes and priority inheritance; the best you can reach is the same as SCHED_FIFO, including the unbounded priority inversion. And there is one problem with SCHED_DEADLINE, which is the control of priority inversions: with SCHED_DEADLINE it is not as solid as everybody would want, because a SCHED_DEADLINE task doesn't have a priority, it has a deadline, so priority inheritance will not inherit a priority but the deadline of the task, and there are known issues about it; Peter Zijlstra even mentioned them at the last real-time summit. Okay, I'm running out of time, so I will stop here. I'm sorry. That's it. Yeah, you can continue the conversation outside.