Thanks for coming. I'm Juri Lelli, as some of you already know. I've been working for ARM for the last three years, following the energy-aware scheduling effort; I'll talk about that a bit later. But basically I'm here to say that, in case you were wondering, SCHED_DEADLINE is actually alive. That's the main news I have here. I see there are people sitting back there; if you can come forward, it's probably easier for me too, there are all these free seats here. It will be a pretty relaxed presentation. Well, actually, I'm lying: I will be asking questions, because for this presentation I have to assume you have a little background on what SCHED_DEADLINE is, since I won't have time to cover both the basics and the new features. So I'll be asking questions, and I also have prizes. Let's see how it goes. All right, this is the intended agenda. I'll quickly cover one slide of background on SCHED_DEADLINE. Then there is a "why": why all of a sudden we have this new set of features in development. And then I'll cover the features themselves: bandwidth reclaiming, then CPU frequency scaling, that is, coupling SCHED_DEADLINE with frequency selection through the schedutil cpufreq governor, and then group scheduling. All right, let's get started. So, what and why? Deadline scheduling has been around for something like three years; it was merged in the 3.14 kernel. And I'm saying it's alive now because in the last three years I've mostly been helping maintain it: there have been several bugs to fix and other housekeeping, but nothing major actually happened to the scheduling class. So, what is SCHED_DEADLINE?
SCHED_DEADLINE is basically another scheduling class, an RT scheduling policy like, for example, SCHED_RR and SCHED_FIFO. The difference is that with SCHED_DEADLINE you can explicitly give the kernel per-task latency constraints: there is an API with which you pass this information to the kernel. By design the algorithm avoids starvation, and in general it enriches the scheduler's knowledge of the quality-of-service constraints that tasks might have. It basically implements EDF and CBS. And that's the first question: who can tell me what EDF and CBS stand for? EDF? Okay, you get the prize. This question was actually worth two pens; I'll leave one pen here for you, it's a very nice pen, you can collect it afterwards. As you know, these are pretty cheap items, but then we do pretty inexpensive chips at ARM, so that's why, right? And CBS, who can tell me, for another pen? Nobody, really? All right, I keep my pen. CBS stands for constant bandwidth server. EDF is basically the easy part: you schedule the task with the earliest deadline, so it's a priority-based algorithm where the priorities are deadlines. The nice bit is CBS: it's the algorithm that assigns the dynamic deadlines to the different tasks, and it's that algorithm that makes, for example, temporal isolation, and therefore avoiding starvation, possible. If you want more details, I gave a presentation at last year's ELC, you'll find the link there, and throughout the slides I'll be referring to papers describing the algorithms, so you can go and search for more about this offline. So, why is this development happening now?
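As a minimal illustration of the EDF rule just described, here is a short Python sketch (not kernel code; the task names and structure are invented for the example): among the runnable tasks, the scheduler always picks the one with the earliest absolute deadline.

```python
# Illustrative sketch of the EDF (earliest deadline first) rule described
# above: among runnable tasks, pick the one whose absolute deadline comes
# soonest. Task names here are made up; this is not the kernel's code.

def edf_pick(runnable):
    """runnable: list of (name, absolute_deadline) tuples."""
    return min(runnable, key=lambda t: t[1])

tasks = [("audio", 12.0), ("video", 40.0), ("logger", 100.0)]
print(edf_pick(tasks)[0])  # audio
```

CBS is the piece on top of this: it is what computes and updates those dynamic deadlines from each task's runtime and period, which is what gives temporal isolation.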
Well, the why is that at ARM we have been working for the last four or five years on this thing that we call energy-aware scheduling. It's basically a set of extensions to both the Linux kernel scheduler and several subsystems, for example cpufreq, to make them power- and performance-aware: meeting the performance requirements of user-space applications while saving energy. The effort so far has basically only involved modifying sched fair. Then, last year, this set of changes got merged in the Android common kernel and it's now used by Android. In that particular case, for the workloads we care most about, Android performance actually means meeting latency requirements more than performing more work. And when you have strict latency requirements, SCHED_NORMAL is maybe not the best fit. What's actually happening currently is that SCHED_FIFO, the RT scheduling class and policy, is being used to meet these latency requirements for certain tasks. Now, my point here is, and I still have to prove this, because what I'm going to talk about is really experimental, work-in-progress stuff, that for the same use cases we can do a better job using deadline; it should theoretically be a better fit. Let's also say that Android already makes some modifications to mainline SCHED_FIFO, and I'm not sure those modifications can ever go upstream, because the feeling I got from the maintainers is that SCHED_FIFO is probably not the thing you want to modify; instead, you probably want to use SCHED_DEADLINE. So what I'm saying is that if we know we are going to make modifications to SCHED_DEADLINE, discussing those on the mailing lists should be less contentious. That's my personal feeling.
Yeah, actually, I want to mention that it's not only ARM doing this work; we are currently collaborating with the Scuola Superiore Sant'Anna of Pisa. That's the university where I studied before joining ARM, and it's where the whole SCHED_DEADLINE project was born: while I was there we got this thing made, and it's basically the same guys who continue working on the project. All right. So that was the general introduction; let's talk about the new features. So, bandwidth reclaiming. What's the problem with the current SCHED_DEADLINE implementation? The main thing that might be problematic when trying to use SCHED_DEADLINE for soft real-time types of applications, like for example a rendering pipeline, is that the task bandwidth, the amount of CPU time you can associate with a task, is fixed. You have this sched_setattr() syscall: you call it for your task and associate a runtime and a period with it, so basically a fraction of CPU time. And it's enforced: if the task tries to execute for more than it has been granted, it will be throttled. That can be problematic, because what happens if you occasionally need more bandwidth than what you asked for? There might be fluctuations in network traffic, or, if you're a task that belongs to a rendering pipeline, there might be certain heavy frames to render, and just for one of those you need more bandwidth. So you might miss your deadlines just because the bandwidth allocation is so strict. Hence the proposed solution, and it really is still proposed: there have been at least four versions of the patch set under discussion on the Linux kernel mailing list, and a new one is most probably going to be posted next week or the week after.
It's something that we call bandwidth reclaiming. The idea is that you allow tasks to consume more than what they've been allocated at syscall time, at admission control. Of course, so as not to jeopardize everybody else, because we still have SCHED_OTHER and SCHED_FIFO tasks, you allow this reclaiming only up to a maximum fraction of the CPU time, and only if it doesn't break the other SCHED_DEADLINE guarantees, because you don't want to jeopardize the other SCHED_DEADLINE tasks either. The algorithm we implement is called GRUB, which stands for Greedy Reclamation of Unused Bandwidth. The particulars of the algorithm can be a bit tricky to understand, but the name gives you a big hint: the basic idea is that I admit some tasks to the system and there is a fraction of CPU time to spare, which nobody is using, and the tasks currently running will greedily use the portion of bandwidth not currently used by the others. That's why it's called greedy reclamation of unused bandwidth. And that keeps the implementation very simple; it's really a minor modification that we made to implement this thing. It's basically composed of three main components. One: we had to track the utilization of the active tasks, the tasks currently active on the system, because we want to reclaim the portion that is not active. Two: we use this information to modify the accounting rule, which is how we keep track of the runtime we're granting to a task. And three: I'll quickly go through one of the issues we found while implementing this and extending the support to multiprocessor systems.
So the original algorithm was designed for uniprocessors, and as soon as you try to support multiprocessors, you find some issues; I'll detail one of those just to give you an idea. As I said, there are references to the papers, and you can find many more details there. Okay, so let's try to understand what tracking the active utilization means. This slide and the next one are the trickiest of the whole presentation, so please bear with me; let's see if we can get through them. I'll make an example to try to make this easier. Say you have a task that has been admitted to the system, and of course this task has a runtime and a period. The runtime here is depicted by Q_i and the period by T_i; those are the two parameters you specified when calling the sched_setattr() syscall. Now, say this task was sleeping, and then activates for the first time during a period. The idea is that you track, per CPU, the sum of the active utilizations of all the tasks: we added a per-runqueue variable called running_bw that keeps track of this sum. When the task wakes up for the first time, it's pretty easy: you know its utilization is runtime divided by period, Q_i / T_i, and you just increase u_act by this amount of bandwidth as soon as the task wakes up, so that the other tasks cannot reclaim this task's bandwidth. Okay? That's easy. The tricky bit is understanding when you can remove this task's contribution from u_act. The problem is that if, for example, the task goes to sleep and you instantaneously remove its contribution to the active utilization, the other tasks can instantaneously reclaim its bandwidth.
So if it then wakes up again in the same period, it could find that the others have used its bandwidth, and it would potentially be jeopardized. You don't want that. There are basically two things that can happen. The simplest one is that the task executes for a bit and consumes all its runtime in this period; in that case it will be throttled, because we do implement throttling, and at that point you know you can remove its contribution to the active utilization, just because it has already depleted all its available runtime. But what happens if the task goes to sleep with some leftover runtime to consume? Here you have to remember what the constant bandwidth server does when a task wakes up: one of the rules implemented by the algorithm checks this inequality. So, second question: who can tell me what this inequality actually checks? Why do we take the remaining runtime divided by the time left to the deadline at the instant the task wakes up, and compare that with the task's reserved bandwidth? Any idea? Maybe not. Let's see if I can help you. As I said, the task goes to sleep at this instant in time with some leftover runtime. You want to know whether, if it wakes up in the same period, it can safely use that leftover runtime no matter what. The problem is that if it wakes up, for example, here, pretty close to the deadline, it will go and execute using more than it was allocated, just because it's free to execute, it's probably the highest-priority task. So if there is, for example, another task running concurrently with it, it will basically execute inside the other task's reservation.
So this check just decides whether or not we can recycle the current runtime. What we do in practice is that when the task goes to sleep, we compute a point in time in the future called the zero-lag time. It derives from this inequality: you turn it into an equality and solve for t, and that t is your zero-lag time. You then know that after this instant the task cannot reuse the leftover runtime, because if it did, it would cause trouble for other tasks. More or less. Okay, let's say you believe me. So that's what happens: the task goes to sleep, you compute this value, and you set a timer to fire at that instant, which removes the task's bandwidth from u_act. That's how the implementation works in practice. Okay, so that's the tracking of the active utilization; this happens on each CPU. Now that you have your active utilization, you can use it to implement the reclaiming itself. The current accounting rule is pretty easy: when a task wakes up and starts executing, at each tick you call this function, update_curr_dl(), and also one last time when the task goes to sleep. This function uses delta_exec, the delta between the last update and now, which can be, say, four milliseconds or one millisecond depending on the HZ rate, and decrements the runtime by that value, so you know when the task has depleted its runtime, because you want to throttle the task as soon as its runtime becomes zero or negative. To be able to reclaim the others' bandwidth, the idea is this: u_act is a value between zero and one, a fraction of 100% of the CPU time, and you want to reclaim one minus u_act.
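Before moving on to the accounting rule, the zero-lag computation just described can be sketched numerically. Turning the CBS wakeup check into an equality, remaining / (deadline − t0) = Q / T, and solving for t0 gives the instant at which the sleeping task's contribution can safely be removed from u_act. This is an illustrative Python sketch with invented names, not the kernel code.

```python
# Sketch of the zero-lag time derivation described above. Setting the CBS
# wakeup check to equality, remaining / (deadline - t0) = Q / T, and
# solving for t0 gives the instant after which the leftover runtime can
# no longer be used without harming other tasks. Names are illustrative.

def zero_lag_time(remaining, deadline, Q, T):
    # remaining: leftover runtime at the instant the task goes to sleep
    # deadline:  current absolute deadline
    # Q, T:      reserved runtime and period (bandwidth = Q / T)
    return deadline - remaining * T / Q

# Task with Q=10ms, T=100ms, sleeping with 5ms left and deadline at t=100ms:
# its bandwidth can be removed from u_act at t = 100 - 5 * 100 / 10 = 50ms.
print(zero_lag_time(5, 100, 10, 100))  # 50.0
```

In the implementation this t0 is the expiry of the per-task timer: if the task wakes up before it, nothing happens; when it fires, the task's bandwidth is subtracted from u_act.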
And if you do the math, you come up with this equation here: instead of subtracting the whole delta_exec, you subtract a fraction of it. To make an example, if the task is a 30% utilization task, you multiply delta_exec by 0.3; so if you execute for ten milliseconds, you only consume three milliseconds of runtime, and you basically have more time to complete. But if you allow a task to reclaim 100% of the CPU time, that becomes a problem for non-deadline tasks: you end up starving the other guys. A simple example to understand this point better: say you have a 5-second over 10-second task and unbounded reclaiming. If it can reclaim 100% of the CPU, what happens? Anybody who can help me here? Because that's actually another quiz, and this one is probably easier to compute. Consider there is only this task on the runqueue, with a runtime of five seconds and a period of ten seconds. What's the current u_act? Sorry? 0.5. Right, 0.5 is the current u_act. Now say this task is executing, one millisecond at a time. Instead of removing one millisecond of runtime each time, how much do you remove using this formula? 0.5, exactly: you remove 0.5 milliseconds for every millisecond of execution. Considering the runtime is five seconds, you end up executing for ten seconds, which means you are constantly executing over your whole period. I'll give you a USB pen, because anyway, I have it here. So that's basically what happens: you multiply by 0.5 and you deplete your runtime in ten seconds. Without reclaiming, the task gets throttled after five seconds, and the other tasks can run and are happy.
But if you implement reclaiming and it's unbounded, the task constantly executes over the whole ten seconds, and keeps doing that, so the others cannot execute anymore. Does it make sense? Okay. The solution, and again you'll have to believe me a bit here, though it's easy to compute, is to have another variable called u_max, which you set as the limit of the bandwidth you can reclaim. In the same example, if you set u_max to 0.9, the task will only reclaim up to 90% of the available CPU time, so it gets to execute for a maximum of nine seconds, and you have one second of time left for the others. That's basically the solution for this problem. Okay. Yeah, multiprocessors. We discovered this problem because, as I said, the algorithm was designed for a single processor. Say you have two CPUs and a task that goes to sleep at this point in time. When it woke up the first time, its contribution was added to the u_act of CPU k. Then it goes to sleep, and you set the zero-lag timer. But say that when it wakes up again, at this point in time, there is another, higher-priority task executing on CPU k, so the task is put to run on the other CPU, CPU j. Since nothing happens to the u_act of the CPU the task was running on if the task wakes up before the zero-lag time, you don't touch u_act at all; you just cancel the zero-lag timer. But then, when the task, for example, depletes its runtime, you will subtract its contribution from CPU j's u_act, and that's a bug, because that u_act is going to go negative. In this case, the fix was pretty simple.
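The accounting rule and the u_max bound above can be sketched as follows. The unbounded case (charge delta_exec · u_act) is exactly what the talk describes; dividing by u_max is one simple bounding rule that reproduces the 9-second figure from the example, but treat it as an assumption of this sketch rather than the exact in-kernel formula.

```python
# Sketch of the GRUB accounting rule described above. Instead of charging
# the full delta_exec against the task's runtime, a running task is charged
# only a fraction of it, so it greedily reclaims the bandwidth nobody else
# is using. The u_max bounding here is one simple rule consistent with the
# numbers in the talk, not necessarily the exact kernel formula.

def grub_charge(delta_exec, u_act, u_max=1.0):
    # delta_exec: wall-clock time executed since the last update
    # u_act:      sum of active utilizations on this CPU (0..1)
    # u_max:      maximum fraction of CPU time that may be reclaimed
    return delta_exec * u_act / u_max

# The 5s-over-10s example: u_act = 0.5. Unbounded reclaiming charges only
# 0.5 ms of runtime per 1 ms of execution, so 5 s of runtime lasts the
# whole 10 s period and everyone else starves. With u_max = 0.9 the task
# is throttled after 5 / (0.5 / 0.9) = 9 s, leaving 1 s for the others.
print(round(5.0 / grub_charge(1.0, 0.5), 6))       # 10.0 s of execution
print(round(5.0 / grub_charge(1.0, 0.5, 0.9), 6))  # 9.0 s of execution
```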
So basically, when the task migrates, we both cancel the zero-lag timer and instantaneously move the task's contribution between the two CPUs, so that everything is fine when the task blocks again. All right? Okay. Some simple data and results from synthetic workloads. As I said, this is a work in progress; my next step, as I'll tell you later, is to try to start using this, for example, on Android. Here I have one task executing inside a 6-millisecond over 20-millisecond reservation; this task has a constant execution time, so every time it activates, it executes for five milliseconds. Then you have another task, on a single CPU here, with a reservation of 45 milliseconds over 260 milliseconds. The problem with this task is that it experiences occasional variations in its actual runtime, which varies between 35 and 52 milliseconds, so there will be instants where it tries to execute for more than its 45 milliseconds. What you see here is task 2 without reclaiming: I'm plotting a cumulative distribution function of the measured response time of the task at each activation, so we measure how much time the task needed to finish its current activation. And this shows there is a non-negligible percentage of activations of task 2 in which the response time is higher than what task 2 actually wanted, the 260 milliseconds. That's the problem I mentioned right at the start: the mechanism is too strict, too fixed. Using reclaiming instead, reclaiming up to the u_max configured in the system, task 2 can always, in this case, finish before its reservation period.
So it's basically always meeting its deadlines; that depicts easily how the algorithm helps and works. Okay, so on to frequency scaling: what is it and why do we need it? Until now we assumed the clock frequency was fixed, and when the clock frequency is fixed things are easier, because you can ignore the fact that the actual runtime of your task scales with the clock frequency. But what happens if the clock frequency varies? It's pretty simple to deal with: we just have to scale the reservation runtime. Since you can assume the actual runtime scales roughly linearly with the frequency, you scale the reserved runtime linearly as well. That's the formula you have to apply: you take the original runtime and multiply it by the ratio between max and current frequency. That allows your task to still execute inside the same reservation without modifying the reservation itself: you specify your parameters considering the highest OPP, say 10 milliseconds over 100 milliseconds, and the runtime gets adapted to the current frequency using this formula. That's the idea. Yes. Yes, basically the algorithm works in the same way; I actually have an example here that probably answers your question. I ran this test on a HiKey board. The board has five operating points on the A53 CPUs, ranging from 208 MHz to 1.2 GHz, each with an associated capacity on a 1024 scale. So, say you have a task with a 12-millisecond over 100-millisecond reservation at max frequency. If you then run the task at the minimum frequency, that translates to 69 milliseconds: you get to extend the runtime from 12 milliseconds up to 69 milliseconds. And that's actually what happens.
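The runtime scaling rule above can be sketched in a few lines. The capacity parameters are there because the talk later applies the same scaling a second time for CPU capacity on asymmetric systems; the function shape and names are illustrative, not the kernel's.

```python
# Sketch of the runtime scaling rule described above: the reserved runtime,
# specified relative to the highest OPP, is stretched by the ratio between
# max and current frequency. The cap_* parameters apply the same idea a
# second time for CPU capacity on big.LITTLE systems (defaults mean "no
# capacity scaling"). Illustrative code, not kernel code.

def scaled_runtime(runtime_us, f_max, f_cur, cap_max=1024, cap_cur=1024):
    return runtime_us * (f_max / f_cur) * (cap_max / cap_cur)

# HiKey A53 example from the talk: a 12 ms runtime granted at 1.2 GHz
# stretches to about 69 ms when running at the 208 MHz OPP.
print(round(scaled_runtime(12_000, 1200, 208) / 1000))  # 69 (ms)
```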
So here, in the first plot, I'm running a 10-millisecond over 100-millisecond task inside the 12-millisecond over 100-millisecond reservation. Everything is fine: the task finishes computing before it's throttled. If I then run the same thing at the minimum frequency, the actual runtime is extended; it takes around 60 milliseconds to execute the same amount of work, and that's still fine, because I know my reservation gets extended to 69 milliseconds. In the third plot I incremented the actual runtime of the task by 10 milliseconds, so as soon as the task tries to execute for more than 69 milliseconds, it is actually throttled. So the algorithm is working consistently. Yeah? Right, the question is: what about big.LITTLE systems? That's basically the last point I have here: you apply the very same formula, also considering the max capacities of the two CPU types. You scale twice: once for the frequency, and then again by the ratio between the max capacity of the biggest CPU in the system and the max capacity of the CPU you are executing on. It's the same scaling applied twice, and it's exactly what, for example, the per-entity load tracking (PELT) in sched fair does, the very same solution. All right. So now we have all the ingredients to be able to control the clock frequency from the scheduler. You're probably aware of the schedutil cpufreq governor. This governor was merged recently, within the last year. It's basically a small, thin layer between the scheduler and the cpufreq driver, and with it you can drive the clock frequency from the scheduler. Currently it works for SCHED_NORMAL, so fair tasks.
It uses the utilization average, the util_avg signal, compared against the max capacity of the CPU, to compute the frequency needed to meet the tasks' requirements; this is basically a running average of the utilization of the tasks executing on the system. Currently the problem is that for both SCHED_FIFO and SCHED_DEADLINE, as soon as you schedule such a task, you go to max frequency, because there is no notion of how much utilization, bandwidth, and clock frequency is actually required to meet those tasks' requirements. The idea is that once bandwidth reclaiming is in, we can use running_bw, the u_act I was talking about, as a per-CPU utilization contribution from SCHED_DEADLINE. So I take the util_avg coming from fair, sum it with the u_act, the running_bw of deadline, and then translate this amount of utilization into a clock frequency. That's how I'll be driving frequency selection for deadline as well. That's not the only modification needed in mainline, though. One of the problems is the trigger points for frequency selection, the points where the scheduler asks the governor and the driver to change frequency: for both FIFO and deadline, they currently sit in the tick-handling code, which made sense until now, because it's a common path you always go through when something is active; but in light of what I've been talking about today, we have to move those points to where the running bandwidth, the u_act, actually changes. And that's actually another question: if you were able to follow, where do you think this running bandwidth changes? Anybody? Well, one point is probably easy: say you have a task, when do you have to increment the u_act of the CPU? Sorry?
That's basically it: the first thing you have to do is increment the variable when the task wakes up for the first time, so that's one of the points where you potentially want to trigger a frequency selection. The other one is what you were saying: when a task goes to sleep, you eventually want to remove its contribution from the active utilization, and that's where you also potentially want to trigger a frequency selection. It's not instantaneous when the task goes to sleep; it's at the infamous zero-lag time I was talking about before. You set up the timer, and when the timer fires and you decrement the u_act, you can trigger the frequency selection and say, okay, this task is gone, so you might slow down the frequency. That's basically it. Yeah, another modification that will most probably be required: on platforms that need a sleepable context to change the frequency, there is a kworker thread that calls into the driver and performs the frequency switch, and that thread is currently a SCHED_FIFO thread. Of course, if you want to change the frequency for a deadline task, we'll have to make that thread SCHED_DEADLINE as well, and maybe treat it specially, because you basically want it to always preempt any other deadline task, just so it can change the frequency on their behalf. So those are the three main modifications.
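The utilization-to-frequency translation described above can be sketched as follows. The ~1.25 headroom factor mirrors schedutil's "go a bit above the bare minimum" heuristic, but the exact formula and the clamping here are simplifications of this sketch, not a statement of the mainline code.

```python
# Sketch of how schedutil could translate utilization into a frequency once
# deadline's running_bw is summed with the fair class's util_avg, as
# described above. The 1.25 headroom factor and the clamping are
# assumptions of this sketch, not the exact mainline formula.

def next_frequency(util_cfs, running_bw_dl, max_capacity, f_max):
    # util_cfs:      fair-class utilization (util_avg), 0..max_capacity
    # running_bw_dl: deadline active utilization (u_act), same scale
    util = min(util_cfs + running_bw_dl, max_capacity)  # clamp to capacity
    return min(1.25 * f_max * util / max_capacity, f_max)

# CFS utilization of 200/1024 plus a deadline running_bw of 300/1024 on a
# 1.2 GHz CPU asks for roughly 1.25 * 1200 * 500 / 1024 ~ 732 MHz.
print(round(next_frequency(200, 300, 1024, 1200)))  # 732 (MHz)
```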
Some results. I'll probably skim through this because I don't have much time left; I have extensive results collected, so you can go offline and look at them in detail. The basic idea is that, at least in this simple example, you are meeting both tasks' requirements, the deadlines: what you want to see is that the red line doesn't go below zero, while you are not running at the maximum frequency. That's the trade-off you want to achieve, and it seems to be working. I won't spend much more time on this because I have to cover the next bit: group scheduling. Why do we want this? Currently SCHED_DEADLINE works one-to-one: there is a one-to-one association between a task and a reservation. The idea is that sometimes it might be easier, or better, to be able to group a set of tasks inside the same reservation. For example, you can have composed applications like a rendering pipeline, where you can't actually come up with a runtime and period for each task that belongs to the pipeline; or you have a legacy application composed of different threads and you can't go and modify the application's source code; or you just want a manageable way to reserve a portion of the CPU bandwidth using group scheduling. To implement this, there are work-in-progress patches as well; they haven't been discussed on the mailing list yet, and will hopefully be posted in the coming months. It's what we call group, or hierarchical, scheduling support. Again, there are a lot of references on the slide, so please go and check them out. The idea is that you will then have temporal isolation, but between groups that contain more than one task. The approach will be hierarchical in the sense that the root level, the first level, will be managed by
EDF, like deadline already does, and inside the deadline reservation you'll be scheduling with FIFO. In this example, T1, T2, T3, and T4 are FIFO tasks scheduled inside a deadline reservation. The root scheduler picks a group of tasks considering the deadlines, and those deadlines are managed by the constant bandwidth server, again just like with simple one-to-one tasks; then you run a scheduler again inside the group to pick one of the tasks, and those are scheduled with FIFO. The idea, and I think Peter has mentioned this several times, is to remove the RT throttling mechanism and substitute it with this one. What's the RT throttling mechanism? It's mostly, let's say, a failsafe mechanism to prevent FIFO and RR tasks from hogging the whole CPU. And theoretically this is the same thing, because here I reserve a portion of the CPU using deadline, so once we have this, we can remove that mechanism. The user-space API won't change, because you still have RT groups; only who manages those groups changes: it won't be RT anymore, it will be deadline managing them. From user space you still create groups, put tasks inside, and manage the rt_runtime and rt_period of those groups, so no changes are required from an application point of view, I'd say. Sorry? Yeah, but that's the same thing you're doing today: if you want to use RT throttling, not the global one but the per-group one, you still have to create a group, assign the RT runtime and period for the group, and put your FIFO tasks inside. That basically won't change; it's only managed by a different guy, deadline instead of RT. And on a multiprocessor system, how will it work? Basically you'll have
Basically, each group has only two parameters, its runtime over its period, and you replicate the same amount of reserved bandwidth across all CPUs. So you'll have a scheduling entity on each of your CPUs, and the FIFO guys will be executing inside those scheduling entities, which basically represent some kind of virtual processors. Virtual processors because, for example, when you have to perform global scheduling, the push-pull mechanism will pull and push tasks between the scheduling entities that you have on each CPU. That's basically the idea. I don't have many more details about this, just because it's really more work in progress than the other features, but if you have any questions, just ask me, also offline of course.

And we are mostly done. So, the future: what I didn't cover and what's still missing. I said that the nearest thing happening right now, for me, is to start experimenting with all these new features on Android: see if we can convert the current FIFO users to start using deadline, and see how it goes.

Another idea that will probably need to be implemented is that, in principle, you can let a task run for more than it reserved at admission-control time by demoting it to a lower-priority scheduling class. Sorry, so: when the task gets throttled, instead of throttling it, I demote it to run as FIFO or normal, so it can actually continue executing, but together with the other lower-priority tasks on the system.

And then, of course, capacity and energy awareness. If you have a big.LITTLE system, for example, now that we have a notion of how much capacity each task needs, we'll have to make modifications so that we know where to put a task considering the spare capacity of the different CPUs, also considering that they can have different maximum capacities. And then, using this information, I will
also have to consider energy in the picture. The energy-aware scheduler actually adds an energy model of the platform and makes it available to the scheduler, so the idea is to start consuming that information from deadline as well, to make energy-aware decisions. Nobody is currently really working on this, so you're more than welcome to contribute.

What's mostly missing is priority inheritance: currently, priority inheritance for deadline means deadline inheritance. What we want to have is proxy execution, so that I can actually execute inside someone else's reservation. And then maybe some kind of feedback mechanism with which I can still adapt a task's reservation.

And that's it. So thanks again for coming. If you have any questions, just ask me; I'll be here for the whole week. Or by email: just ask stuff on the Linux kernel mailing list or on linux-rt-users. There is also an energy-aware scheduler focused mailing list called eas-dev. We're also organizing, it's going to be in Italy, a summit around scheduling and power management; you'll find more details at the link there. It's going to be in April, so it's really happening, like, next month. I guess for US-based residents it can be tricky to come, just because flights are extremely expensive, but if you are in Europe and you want to come, you're more than welcome. And, oh yeah, whoever got prizes, please come and collect them. Thanks.