All righty, welcome back to Operating Systems. Today we're talking about more scheduling, which is lots of fun. Something we didn't talk about yesterday, which might be your first tweak: some processes are more important than other processes. You've probably heard the term priorities before. You can schedule with priorities: assign each process a priority, always run higher priority processes first (round robin style if they're equal priority), and as soon as all of the higher priority processes have finished whatever they're doing, go ahead and run a lower priority one. This can be preemptive or non-preemptive. So that's another tweak you can make to scheduling: add some priorities.

The easiest way to assign a priority is just to give each process an integer, and you can pick either a lower or a higher number to mean higher priority. Linux picks a lower number to mean a higher priority, so negative 20 is a higher priority than 19. For normal processes, negative 20 is the highest priority, 19 is the lowest, and zero is the default your processes get. But if we just had priorities and weren't doing anything special, we'd also have the problem of starvation, where our lower priority processes might never execute if we always favor higher priority ones. Starvation is always something you want to avoid when you're dealing with scheduling. You could think of a solution to that: okay, I'll just change the priority if a low priority process hasn't executed in a very long time. I'll temporarily bump it up a few numbers so it at least gets a shot, and once it runs for a bit, I'll reset it back down. That's a valid solution you can use.

But there's another problem, called priority inversion: a high priority process can accidentally end up running at what is effectively a low priority. This is caused by dependencies, which we might have noticed before when we had processes communicating with each other: one blocks on the other while it's waiting to get some information out of it. That process can't continue until the other process has written something out, and that's called a dependency. So if a high priority process depends on a low priority process, guess what: that low priority process is effectively a high priority process, at least temporarily, and should be treated as such. The solution is to inherit priority across dependencies. If one process is depending on another process for information, and it's higher priority, then whatever is providing the information gets temporarily bumped up to that priority. Of course, this can happen recursively, where a process depends on a process, which depends on a process, so you have to follow chains like that and inherit the highest priority, and when the dependency chain is broken, you drop back down to the lower priority. Here's a little sketch of the idea.
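This is a minimal sketch of both ideas in C, not anything a real kernel does verbatim. Every name here (the struct, the fields, the functions) is invented for illustration, and lower numbers mean higher priority, matching the Linux convention:

```c
#include <stddef.h>

// A sketch of priority scheduling with priority inheritance.
// All names are invented for illustration; lower numbers mean
// higher priority, like Linux niceness.
struct process {
    int priority;                // effective priority right now
    int base_priority;           // restored when dependencies clear
    int ready;                   // 1 if runnable
    struct process *waiting_on;  // process we're blocked on, or NULL
};

// Scheduling decision: the runnable process with the best
// (lowest-numbered) priority wins.
struct process *pick_next(struct process *procs, size_t n) {
    struct process *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!procs[i].ready)
            continue;
        if (best == NULL || procs[i].priority < best->priority)
            best = &procs[i];
    }
    return best;
}

// Priority inheritance: walk the whole dependency chain and bump
// everything we depend on to at least our priority.
void inherit_priority(struct process *p) {
    for (struct process *dep = p->waiting_on; dep != NULL; dep = dep->waiting_on)
        if (p->priority < dep->priority)
            dep->priority = p->priority;
}

// Once the dependency is resolved, fall back to the base priority.
void uninherit_priority(struct process *p) {
    p->priority = p->base_priority;
}
```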
Another thing you might encounter is a clear separation between what we'll call foreground processes and background processes. Foreground processes are the processes you're actually interacting with, so for those you probably care about response time above everything else. Whatever is receiving user input, you want it to be responsive; that's the most important thing to you. If it's a background task, something that just needs to complete, you don't really care about response time; you might just go for the most throughput: how many processes can you push through in a given amount of time?

On Unix, you can background a process, and back in the day, when you were just typing at a normal terminal, the system could sometimes identify which processes were in the background: if your process group ID is different from the terminal's group ID (the actual thing you're typing on), you're likely a background task, you're not connected to the terminal, and you can't receive input. So the system could say: I know you're a background task, I don't have to do anything terribly responsive for you. But that's not really a thing anymore. Nowadays you have processes running in your desktop environment, you might have 10 windows open at a time, and it's not even clear which ones you really care about, which ones you're paying attention to in the foreground and which ones are kind of in the background. Maybe you don't care about the web browser showing a static page that isn't changing, and you really care about the video playing in another window.

So if we had foreground and background tasks, we could use multiple queues: create different queues for foreground and background processes, if you can figure out which is which. You could do something like this: foreground tasks use round robin, which is good for response time, and background tasks use first come first serve, which is good for throughput. First come first serve is not very complicated, the scheduler doesn't have to run that often, and there's a lot less context switching. But now you have the added problem of scheduling between the queues, between foreground and background. You could add more round robin on top of this: round robin between the foreground and background queues, maybe favoring foreground tasks a bit more. This is the fun of scheduling, right? There's no correct answer; you can just keep throwing things at it until you say: good enough, this seems to work. And on top of this you could add priorities between the background and foreground queues and adjust them dynamically. The possibilities are endless, and it can get as complicated as you would like. Here's a sketch of the two-queue idea.
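A minimal sketch of a two-level queue scheduler in C. The queue structure and the 3-out-of-4 weighting toward the foreground queue are both invented for illustration; the point is just that each queue has its own policy and something has to arbitrate between them:

```c
#include <stddef.h>

// Two-level queue scheduling: foreground gets round robin (good
// response time), background gets first come first serve (good
// throughput). The 3:1 split between the queues is arbitrary.
struct queue {
    int *pids;                    // ring buffer of process ids
    size_t head, count, capacity;
};

int queue_pop(struct queue *q) {
    if (q->count == 0)
        return -1;                // queue is empty
    int pid = q->pids[q->head];
    q->head = (q->head + 1) % q->capacity;
    q->count--;
    return pid;
}

void queue_push(struct queue *q, int pid) {
    q->pids[(q->head + q->count) % q->capacity] = pid;
    q->count++;
}

// Arbitrate between the queues: 3 out of every 4 picks go to the
// foreground queue, falling back to the other queue if one is empty.
int pick_next(struct queue *fg, struct queue *bg) {
    static unsigned tick;
    int pid;
    if (++tick % 4 != 0) {
        if ((pid = queue_pop(fg)) != -1)
            return pid;
        return queue_pop(bg);
    }
    if ((pid = queue_pop(bg)) != -1)
        return pid;
    return queue_pop(fg);
}
```

A preempted foreground process goes back onto the foreground queue (that's the round robin part), while a background process only leaves the CPU when it finishes or blocks (that's first come first serve).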
And we haven't even talked about scheduling for multiple cores yet; so far we've only scheduled on a single core. You might imagine this gets even more complicated and even more fun. We'll assume something called SMP, symmetric multiprocessing, which means all CPU cores are connected to the same physical memory and behave more or less the same. On some computer systems that's not true, and it gets very complicated; that's high performance computing territory. So we'll assume SMP: same physical memory, cores that all behave more or less the same, but each CPU core has its own private cache, at least at the lowest levels, like the L1 cache. They might share L2 or L3, but at some level they have their own cache.

So if you have multiple CPUs, one approach is to just use the same scheduling algorithm. Why would I change it? It seems good enough; I'll just tweak it slightly. Keep our one scheduler, something like round robin, and have it keep assigning processes to CPUs as long as CPUs are available. If there are five processes and four CPUs, I schedule four of them, and as soon as one of the CPUs is free, I round robin and bring in that fifth process, and we just go around and around. The advantages: it still has good CPU utilization, even with multiple CPUs, because it's one big global scheduler making one decision, and if there's an idle CPU, we throw a process right on it. It's also fair to all processes, because it's still round robin; we didn't modify anything about it. The disadvantages: it's not scalable. Everything blocks behind one queue, because two CPUs can't ask the scheduler the same question at the same time (you can't run the same process on two CPUs at once), so the scheduler can only serve one CPU at a time. There's also poor cache locality. What does that mean? If your process is scheduled on a certain CPU and then gets round-robined out, there's no guarantee it goes back to the same CPU, where it would probably still have some valid entries in the cache. If it moves to a different CPU core, it has to re-cache everything; it's starting from scratch. You'd prefer that when it's round-robined out, it eventually comes back to the same CPU core while some cache entries are still valid. That's cache locality: the data is still hanging around and we can use it.

That big global scheduler is what Linux 2.4 actually used, back around 2001. In those days, one CPU core was common; people didn't care much about multiple CPU cores because almost nobody had them, so this worked well for normal plebs like you and me. Then someone had the brilliant idea of creating a scheduler for each CPU core. Whenever there's a new process, I assign it to one CPU, and during its lifetime it will not switch CPU cores. When a process comes in, I just assign it to whichever CPU core has the fewest processes at that instant. Wait, we got a question: does it matter which CPU core gets which process, performance-wise? Yes, it does: if a process resumes on the same CPU it was executing on before, you generally get better performance, and that's exactly what this is aiming for. Some advantages: it's really easy to implement, and it's scalable, because each CPU core can schedule independently; they don't depend on each other, and each one owns a completely different subset of the processes. Here's what the placement decision could look like.
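A tiny sketch of per-CPU run queues and the one-time placement decision. The structures here are invented for illustration; real kernel runqueues are far more involved:

```c
#include <stddef.h>

// Per-CPU run queues: each core schedules its own processes
// independently. The only global decision is made once, at process
// creation: put the new process on the least loaded core.
#define NCPUS 4

struct runqueue {
    size_t nr_running;   // processes this core currently owns
    // ...the core's own queue of processes would live here...
};

static struct runqueue rq[NCPUS];

// Called once when a process is created; returns the core the
// process will stay on for its whole lifetime.
int place_new_process(void) {
    int best = 0;
    for (int cpu = 1; cpu < NCPUS; cpu++)
        if (rq[cpu].nr_running < rq[best].nr_running)
            best = cpu;
    rq[best].nr_running++;
    return best;
}
```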
And because of that, the same process is always on the same CPU: good performance, good cache locality. Cache locality just means the caches are still valid; locality here is about time. If you execute on a CPU core and then execute on the same core half a second later, your caches, or at least most of them, are probably still valid. And no blocking: every CPU core has its own scheduler, so they don't have to communicate with each other or make one big global decision; each core just runs its own round robin, for example. If processes share memory or something like that, the kernel has to sort it out, but one core's scheduler never has to interact with another core's scheduler. Question: does each core have its own scheduler, or do they just pick from one line? Each core has its own scheduler; the only global decision happens when a process starts, when it gets assigned to the core with the fewest processes, and after that it stays on that core.

The disadvantage: you can get some very bad load imbalances over time. Say you got super unlucky: eight processes, four cores, each core gets two processes, and it just so happens that one core gets the two shortest processes and another gets the two longest. The core with the short processes finishes early and sits idle, while the core with the long processes ping-pongs back and forth between them, and now we have an idle CPU, which isn't great. So you could compromise: have a global scheduler that rebalances between CPU cores. If it notices a CPU is currently idle, it can take a process from another CPU and put it on the idle one. This is called work stealing, which is really more like process stealing, but you're stealing some work. The disadvantage is that a process moved to a different core has invalid caches, so if you really care about performance for that process, you probably want to tell the kernel: do not move this process under any circumstances, keep it on the core it's already executing on. Having that control is called processor affinity, which is basically a preference for staying on the same core: a process with high affinity really wants to stay put. You may have noticed this on Windows: right-click a process you really care about, like your game, set its affinity, and get a little performance boost. That's exactly this: telling the Windows kernel, keep this process on the same cores, don't bounce it around, I want its caches to stay valid. That global work-stealing scheduler on top of per-core queues is a simplified version of the scheduler in Linux 2.6, so we're getting more modern by the second. And if you want to pin a process yourself on Linux, there's a system call for it.
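Here's a small, real example using Linux's sched_setaffinity(2): pin the calling process to core 0 so its caches stay warm there. The choice of core 0 is arbitrary:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

// Pin the calling process to CPU core 0 using sched_setaffinity(2).
// This is the programmatic version of the Windows "set affinity"
// right-click menu: the kernel will only ever run us on core 0.
int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);      // start with an empty CPU mask
    CPU_SET(0, &set);    // allow only core 0

    // A pid of 0 means "the calling process".
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to core 0\n");
    return 0;
}
```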
Another strategy is something called gang scheduling. We like having weird names in this course for some reason, so you can have a gang of orphan children, I guess. There might be a situation where multiple processes need to be scheduled all at once. Again, this is more of a high performance computing thing: the schedulers can't just schedule each CPU completely independently, and the processes have to be scheduled as a gang, or co-scheduled, which means you context switch all of them onto all the CPUs at the same time; they're scheduled as one big unit. It requires a context switch across all the CPUs simultaneously. For this course we don't have to deal with that; it's just a fun fact.

There's also another hitch: something called real-time scheduling, which you may have heard of before. Real-time means there are hard time constraints, either a deadline or a rate. Think of something like audio: audio has to be processed within a certain amount of time, otherwise the buffer doesn't fill up and you'll hear crackles and delays; audio is really sensitive and needs to be handled really fast, or it ends up delayed, crackly, or corrupted. Or something like an autopilot, which obviously has real-time constraints: I need to not hit the child crossing the road, and I have maybe two milliseconds to react. If I react in a second instead, that's an actual dead child, not like our metaphorical dead child processes, and that's probably not good. That would be an example of hard real-time: you need a guarantee that a task completes within a certain amount of time. You generally find that in control systems, train systems, little embedded systems with very simple CPUs. To make the guarantee, you basically have to read your program in assembly and count clock cycles, so you can say: in the worst case, taking all these branches, this completes within this many cycles, so I can guarantee it always finishes in this amount of time. Those systems are typically much, much simpler. A soft real-time system is something like Linux, where critical processes just get a higher priority, the deadlines are met in practice, and you don't throw your computer out the window. The Linux kernel is very, very complicated; you essentially can't get real guarantees out of it, because the scheduler is complicated, context switching is complicated, virtual memory is complicated.

Yep, question: what do modern Linux schedulers use? We'll get to that at the end; we've gone through two generations of the Linux scheduler so far. Another question: is there an industry-standard scheduler? No. Everyone does their own thing. People have even swapped a different scheduler into the Linux kernel just because it feels better to them, for no real reason other than that. You could write your own scheduler if you really wanted to; the Linux kernel lets you do whatever you want with it. Look up BFS, I think it's called; it's another scheduler that does its own thing and seems to work better with desktop environments.
So, again, because Linux is very complicated, it can only do soft real-time; no one in their right mind could bound the worst case time it takes to actually execute a process on Linux, because there's just too much going on. It depends on how many other processes are running and on things you might not even be able to control. But which real scheduling algorithms get used? Guess what: Linux implements first come first serve and round robin. Those are two perfectly good scheduling algorithms, and they're actually used. If you're crazy enough to search the kernel source tree, anything to do with the scheduler in Linux is called sched: SCHED_FIFO is first come first serve, and SCHED_RR is round robin. You can even see how these are implemented in the Linux kernel if you really want.

What Linux does is use a multi-level queue scheduler for processes with the same priority, and then let the operating system dynamically adjust priorities. This is done only for soft real-time processes: among those, Linux always schedules the highest priority process first, and that's it. For normal processes, the actual Linux scheduling algorithm comes in, which we'll get into later. Soft real-time processes are always prioritized ahead of normal ones, and each one is scheduled as either FIFO or round robin. The reason is that you want real-time processes to be very, very predictable: round robin and FIFO are both very predictable, you can reason about how they behave very easily, and they don't waste time. That's what you want for real-time.

Soft real-time processes in Linux get static priority levels 0 to 99 where, annoyingly, 0 is low and 99 is high. Everything else, the normal processes on your machine, gets the SCHED_NORMAL policy, whose priorities range from negative 20 to 19, where negative 20 is high, 19 is low, and 0 is the default. This priority range for normal processes is actually called niceness, which is kind of why lower niceness means higher priority: you're meaner, I guess. Your processes can change their own priorities with system calls: nice can set your niceness, and sched_setscheduler lets you change your scheduling policy. Here's what those calls look like.
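A small, real example of both calls. Raising priority (negative niceness, or any real-time policy) normally requires root or the right capability, so expect EPERM as a regular user; the priority value 50 is just an arbitrary pick from the real-time range:

```c
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    // nice(2): add 10 to our niceness, i.e. lower our priority.
    // nice() can legitimately return -1, so check errno instead.
    errno = 0;
    int newnice = nice(10);
    if (newnice == -1 && errno != 0)
        perror("nice");
    else
        printf("niceness is now %d\n", newnice);

    // sched_setscheduler(2): become a soft real-time process,
    // scheduled round robin at static priority 50. A pid of 0
    // means the calling process. This fails without privileges.
    struct sched_param sp = { .sched_priority = 50 };
    if (sched_setscheduler(0, SCHED_RR, &sp) == -1)
        perror("sched_setscheduler");
    return 0;
}
```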
So what does that look like overall? Linux tries to map all the normal and real-time priorities onto a single scale, and it gets fairly complicated. Linux priorities range from negative 100 to 39, for 140 priority levels in total, where anything toward the negative end is high priority and anything toward the positive end is low priority. Soft real-time tasks, with their own priority scale of 0 to 99 where 99 is higher, are mapped to Linux priorities negative 100 to negative 1. So if you see a negative priority on Linux, you know it's a soft real-time task, and how negative it is tells you its priority. If the Linux priority is 0 or above, it's a normal process and niceness applies: Linux priority 0 means niceness negative 20, 20 means niceness 0, and 39 means niceness 19. They're just off by 20.

So let's do some digging. If we run htop, we can finally read these numbers. This PRI column is the priority, the Linux priority I just described, so it ranges from negative 100 to 39: negative means real-time, positive means a normal process. And this NI column is the niceness; for a normal process, PRI and NI will always differ by 20. I can see init is at the default niceness; everything here is at the default niceness. Then there's this interesting little process at negative 2. When the priority is negative, niceness doesn't apply and NI just reads 0, so this is a soft real-time process. Negative 2 means it doesn't have that much real-time priority, but look at its name: it's the low memory monitor. If memory runs low, it can decide which process to kill, or free some memory, so that seems like an important process. Exploring a bit more: here's a normal process with slightly higher priority, something called PipeWire, which, spoiler alert, deals with audio. Audio is generally more important: it doesn't take long to execute and it really needs to run on time, so this process gets higher priority. Going further, this "tracker miner" (oh no, I'm being tracked) has very low priority: 39, niceness 19, the lowest priority of anything, despite the fairly scary name. Then there are some normal processes at fairly high priority, and anything in green here is in the kernel. If it says RT, its priority is negative 100. This "migration" process in the kernel seems to be very, very important; in fact there's a migration thread per CPU core, all very important. Nothing else seems that fun, but now we can read our columns.

Question: so PRI here maps to the Linux priority in that figure? Yes. And the niceness: any process whose Linux priority is 0 or higher is a normal process, and its niceness applies. This process here has PRI 0, so it's still a normal process, but its niceness is negative 20, so it's a high priority normal process. So if you want your process on Linux to be very important, you can ask the scheduler: give me a niceness of negative 20, I don't like the rest of the processes, I'm more important. Question: which schedulers are in play right now? Linux uses either first come first serve or round robin for the real-time processes, and for normal processes we're only at the point in the story where it uses per-core scheduling; we'll see what it actually uses very shortly. And yes, a high niceness lowers your priority: changing your niceness to 19 corresponds to Linux priority 39, which is very low, while niceness negative 20 is a high priority.
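These two helpers just restate the mapping from that figure as code; the function names are made up:

```c
// Soft real-time priority 0..99 maps to Linux priority -1..-100
// (more negative = more important).
int linux_pri_from_rt(int rt_priority) {
    return -1 - rt_priority;
}

// Niceness -20..19 maps to Linux priority 0..39: just off by 20.
int linux_pri_from_nice(int niceness) {
    return niceness + 20;
}
```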
Question: can a process be both? No, these are independent: a process is either soft real-time or a normal process. The difference is that if you're soft real-time, you're scheduled as first come first serve or round robin, and the kernel schedules you before any normal process. Soft real-time is completely different, very simple scheduling: you're always prioritized ahead of normal processes, and you just run until you essentially go to sleep. Normal processes use the normal Linux scheduler. Question: what do you start as? When you start a process, by default you're a normal process with niceness 0. If you do the nice system call, you can change your priority if you want, either at the start or right before you're about to do something you think is very important. And normal means normal scheduling, which at this point in the story is per-CPU-core scheduling, but we'll figure out what it actually is very shortly. Question: what does that look like in code? Basically just a context switch on that CPU core: the kernel knows which CPU core it's currently executing on, does the context switch, loads the registers, and starts executing that process. Question: how does a process get categorized? That's something you set yourself. If you really want a real-time process, you make the sched_setscheduler system call and say: I want round robin, and I'm very, very important, I'm real-time. Otherwise you're just a normal process.

So here's where we are so far with normal processes. We've made it to the middle of the story: the O(1) scheduler, which turned out to be fairly complicated to get right. Interactivity had some issues, it wasn't super responsive, and it had no real guarantee of fairness, since it was dealing processes out between CPU cores and could make some bad decisions. The scheduling algorithm actually used in Linux right now is called the completely fair scheduler, and as you might imagine, it's fair and typically gives good interactivity. So why did the O(1) scheduler have issues with modern processes? Foreground versus background was a good division, and it was really easy with terminals, but with GUI processes, again, it's not always evident what's a foreground process and what's a background process. The kernel tried to detect them with heuristics: if a process sleeps a lot, if it's waiting on I/O, if it's waiting on your mouse, that could mean you're interacting with it, so the kernel would guess it's a foreground process. If you haven't seen the word heuristic before, it's just a fancy word for guessing. And because it's guessing, it was fairly ad hoc, which just means made up: made-up rules that could be unfair depending on whatever rules were written at the time. So how would we introduce fairness across different priority processes? One thing we could do is use different time slice sizes depending on the priority: the higher the priority, the larger the time slice, and we could also scale the time slice depending on how fair we've been to that process in the past. A sketch of that idea follows.
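A sketch of priority-scaled time slices; all the numbers here (the base slice, the weight formula, the catch-up rule) are invented just to show the shape of the idea:

```c
// Higher priority (lower niceness) gets a bigger slice, and a
// process that has received less than its fair share lately gets
// its slice stretched to catch up. All numbers are invented.
#define BASE_SLICE_US 4000L  // hypothetical 4 ms base time slice

long time_slice_us(int niceness, long fair_share_us, long received_us) {
    // Map niceness -20..19 onto a weight from 2.0 down to 0.05.
    double weight = (40 - (niceness + 20)) / 20.0;
    long slice = (long)(BASE_SLICE_US * weight);

    // If we've shortchanged this process, let it run longer now.
    if (received_us < fair_share_us)
        slice += fair_share_us - received_us;
    return slice;
}
```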
Oh, and then there was a question: what would happen if someone set basically all their processes to real-time? What would the kernel do? If you do that, you basically just get round robin, or if it's first come first serve, everything waits for the current process to complete. Your system is basically only going to run as many processes as you have CPU cores, and it will probably be a horrible experience. You can do whatever you want to your system, but you probably won't like the result. That said, if there's a process you really, really care about, even on Windows you can set it to real-time. You can set your game to real-time if you want; it'll get a CPU core, but it might mess something else up, especially if you don't have a lot of processing power to spare. It might break your audio or something like that, because you're basically turning off context switching on that CPU, and yeah, that's not great.

Now, if we had an ideal fair scheduler that could dole out infinitely small time slices, then with n processes, each one would run at 1/n the rate. If we had one process on a single CPU core, it would get all the CPU time. With three processes, each one executes a third of the time, switching infinitely fast. Everything is as fair as possible: you just divide the CPU usage evenly among every process. (This is all on a single CPU core, because that's the easiest to illustrate.) So what would that look like? Consider four processes with different burst times: P1 takes 8 time units, P2 takes 4, P3 takes 16, and P4 takes 4. Represent each vertical slice of the timeline as 4 time units; we'll sanity-check the arithmetic in a second. While four processes are executing and I'm completely fair, each gets a fourth of the CPU time, so in each 4-unit slice, each process gets one time unit. All four stay runnable for 16 time units, at which point each process has run for 4 time units, and P2 and P4 are done. Now I have two processes left, so in each 4-unit slice, each gets 2 units: P1 executes for 2, bringing its total to 6, and P3 executes for 2, bringing its total to 6. They're both still alive, so I'm completely fair again: another 2 units each, and P1 is done at time 24. Now P3 has the core to itself, so it gets all 4 units of each slice until it finishes at time = 32. So whenever four processes were active, each got a fourth of the CPU time; that's about the fairest thing you could do.

That's ideal fairness, but it's completely impractical: we can't switch processes infinitely fast, context switching takes time. It would give great interactivity and essentially instant response time, since with infinitely small slices a new process starts running immediately, but realistically we can't do anything like that. And context switching is pure overhead, because it isn't running processes, and so is running the scheduling algorithm itself, so the scheduling algorithm has to be fast as well.
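If you want to check that arithmetic, here's a tiny simulation that approximates the infinitely small slices with a small dt; the burst times are the ones from the example. It prints finish times of 16, 16, 24, and 32:

```c
#include <stdio.h>

// Idealized fair scheduling: at every instant, divide the CPU
// equally among whatever processes are still running. We fake
// "infinitely small time slices" with a small step dt.
int main(void) {
    double remaining[4] = {8, 4, 16, 4};  // burst times of P1..P4
    double t = 0, dt = 0.001;

    for (;;) {
        int active = 0;
        for (int i = 0; i < 4; i++)
            if (remaining[i] > 0)
                active++;
        if (active == 0)
            break;
        // Each active process gets an equal share of this instant.
        for (int i = 0; i < 4; i++) {
            if (remaining[i] > 0) {
                remaining[i] -= dt / active;
                if (remaining[i] <= 0)
                    printf("P%d finishes at t = %.1f\n", i + 1, t + dt);
            }
        }
        t += dt;
    }
    return 0;
}
```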
With a literal ideal fair scheduler, we'd have to constantly scan every single process to make sure it got its fair share, which is O(n): super, super slow. So here's what the completely fair scheduler, which is actually used now, does instead. For each runnable process, the kernel assigns a virtual runtime. We like using the word virtual, and they use it in kind of a weird way here: it just means it's not the real runtime, they're playing with the numbers and scaling them. At each scheduling point, where the process ran for some time t, its virtual runtime increases by the actual time it was running times some fudge factor, or weight, which is based on the priority. For a higher priority task (the lower number), the weight is lower and the penalty for running is less; that's why lower numbers mean higher priority, because your priority essentially is that weight factor. The nice property is that virtual runtime monotonically increases. What does that fancy word mean? It only increases, never decreases; it only gets bigger over time and never goes backwards.

Question: can the weight be less than one? Yes. If your weight is 0.5 and you run for two seconds, your virtual runtime only goes up by one second, so you're penalized less. And the scheduler tries to keep the virtual runtimes equal across all processes, so if you actually run for two seconds but are only charged one, you'll get to run twice as much as another process, if it's completely fair. And t is not a fixed quantum here; it's just the actual time a process spent executing, and that time slice is dynamic: the scheduler picks how long a process actually runs for. How the scheduler works: when it decides which process to schedule next, it picks the process with the lowest virtual runtime, and then it dynamically sets t based on the time slice the process should have gotten under ideal fair conditions. So if a process hasn't run in a long time while another process was active, it essentially gets its whole fair slice as its t. And a process is free to go to sleep early without being penalized for it: its virtual runtime only increases by however long it actually executed, and it still gets its fair share. When the time slice ends, we just repeat the process. Question: who picks t? The kernel picks t, based on the ideal fair time. If it's the very first process ever to start, there's just some hard-coded number, like run for 10 microseconds or something like that. So t constantly changes, based on how many processes there are and what the process's slice would be if things were completely fair, which evens things out against the rest of the processes; but the kernel picks it. Here's a sketch of the bookkeeping.
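A sketch of that core idea. The real kernel scales the charge with a per-niceness weight table (and keeps tasks in a red-black tree, which is the very next topic); here a plain array and an invented weight field just show the shape:

```c
#include <stddef.h>

// Sketch of the completely fair scheduler's bookkeeping: a high
// priority task has a small weight, so its vruntime grows slowly
// and it gets picked more often. Names and types are invented.
struct task {
    unsigned long long vruntime_ns;  // monotonically increasing
    double weight;                   // < 1 high priority, > 1 low
    int runnable;
};

// Charge a task for actually running ran_ns nanoseconds.
void account(struct task *tk, unsigned long long ran_ns) {
    tk->vruntime_ns += (unsigned long long)(ran_ns * tk->weight);
}

// Scheduling decision: run whichever runnable task has the smallest
// virtual runtime, i.e. the one we've been least fair to so far.
// (CFS gets this from a red-black tree; a linear scan shows the
// same decision.)
struct task *pick_next(struct task *tasks, size_t n) {
    struct task *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].runnable)
            continue;
        if (best == NULL || tasks[i].vruntime_ns < best->vruntime_ns)
            best = &tasks[i];
    }
    return best;
}
```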
So, if you've taken your algorithms class, we've finally found a use for a red-black tree: that's how the completely fair scheduler is implemented. It's a red-black tree, which is nice because it's self-balancing and all that, and the key is the virtual runtime. That's nice because inserting a process back into the tree is O(log n), and finding the minimum, which is the scheduling decision, since we always just schedule the process with the lowest virtual runtime, hey, guess what: that's constant time. So it's nice and fast. In the implementation, that virtual runtime is counted in nanoseconds, so it's really fine-grained. If you don't have a feel for how quick a nanosecond is: light only travels about 30 centimeters in a nanosecond. Think of the speed of light, really fast; a nanosecond, really small.

The other nice thing is that the completely fair scheduler favors I/O bound processes by default. What does I/O bound mean? The process is constantly doing read system calls, waiting for data from a file or something like that, so it's constantly getting blocked and putting itself to sleep. An I/O bound process puts itself to sleep and doesn't use its whole time slice, so whenever it unblocks, it has a low virtual runtime compared to the other processes, because it hasn't run, and it gets scheduled right away. You get good interactivity like that, and it keeps getting a bigger time slice to catch up to the ideal, too.

So that's it: scheduling gets even more complex, more solutions, more issues. We introduced priority and got priority inversion for our troubles. We noticed that processes aren't all the same: some need good interactivity, others not really. With multiple cores, we might need per-CPU queues. We also might have real-time tasks, which need to be predictable. And the completely fair scheduler, which is what Linux actually uses now, tries to model ideal fairness. Also remember: no lecture tomorrow, because I have to go to Silicon Valley. And your midterm is November 15th; we have the room for about two hours, maybe starting at 6:30 and running to 7:45. If you have any conflicts with that, let me know, but we do have the room finally scheduled. So just remember: I'm pulling for you, we're all in this together.