All right, we're good? Sweet. So today's kind of the last lecture we're going to do about the CPU and scheduling. And it's intended to be fun. I don't know if any lecture that I give can ever fall into that category on some platonic level. But yeah, so today we're going to talk a little bit about Linux. We're going to talk a little bit about Linux development. Hey, excuse me. When I start talking, can I talk? The murmur just sort of sets off those little crazy voices in my own brain. I need to quiet those down. So all right, so today what we're going to do is talk a little bit about, we'll finish up talking about priorities, which is stuff that sort of bled over from last lecture. And then we'll talk about Linux, and talk a little bit about a story about Linux scheduling that has some sort of interesting takeaways to it. All right, so assignment two: you've got two weeks left. There are people that are clearly making great progress on the assignment. I think there's a couple of people that are done according to the leaderboard. So that's pretty awesome. At this point, you should probably have finished up most of the file system system calls. That's kind of like where you want to be. I had people today in office hours that were working on fork, which is great. If you're still at the point where you're trying to fill in the stubs for 2.1, it's time to get cranking. You're behind schedule. Two weeks, assignment two, and then break. All right, I will unfortunately not be here very much for the next two weeks. I apologize for that. Next week, I'll be here on Wednesday. I'm just going to drop by at completely unpredictable times. And then the week after, I will be there on, no, that's not true. I'll be gone Monday and Friday next week, and Wednesday and Friday the week after. So I will be here next Wednesday and the Monday following. My office hours for the next couple of weeks won't happen either, unfortunately. So again, I apologize for that.
Normally I like to be here, but I have other things to do this semester that involve moving around from place to place. Carl will be giving those lectures. I would definitely love to be here because I really like to give the VM lectures. But Carl will be here, and hopefully he will be a reasonable substitute, if not better. At some point, at some point I'm going to show up and he's going to be here. And he's just going to kind of say, nope, sorry, this is my class now, like you've been failed for not showing up. So OK, any questions about MLFQ? That's sort of where we left off last time. That was an example of a scheduler that used some feedback. Just as a review: the goal of MLFQ is to push the CPU bound threads to the bottom queues. And this gives threads that are doing small amounts of work and then waiting for the user or other system resources a chance to rise up to the top queues. So the IO bound threads, those go up, whether they're waiting on the user or the network or something else. If I can let them run quickly, I can give them a chance to activate some other resource on the system. And then they can wait for that while the CPU bound threads are working. So last time we talked a little bit about starvation with this approach. Poorly implemented MLFQ approaches can very easily starve threads if there are enough IO bound tasks to keep the top queues full. So my ability to get down to the bottom queues depends on threads from the top queues all being blocked, all being asleep. If that doesn't happen, then the threads at the bottom will never have a chance to run. And one thing people do, I don't know if anyone really implements this for real, it's just sort of an algorithm to talk about in class, but you can imagine just periodically dragging everybody into the bottom queue or the top queue or something like that just to give everybody a chance. All right. Last thing I want to talk about with general purpose scheduling is priorities.
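The MLFQ feedback rule reviewed above can be sketched in a few lines of C. This is a toy model, not any real scheduler's code; the names, the number of queues, and the periodic boost-everyone rule are all illustrative:

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_QUEUES 4   /* queue 0 is highest priority */

struct thread {
    int queue;         /* current queue index */
};

/* Called when a thread stops running. If it used its whole time
 * slice (CPU-bound behavior), push it toward the bottom queue;
 * if it blocked or yielded early (I/O-bound behavior), let it
 * rise back toward the top. */
void mlfq_feedback(struct thread *t, bool used_full_slice)
{
    if (used_full_slice) {
        if (t->queue < NUM_QUEUES - 1)
            t->queue++;          /* demote */
    } else {
        if (t->queue > 0)
            t->queue--;          /* boost */
    }
}

/* One crude anti-starvation fix mentioned in lecture: periodically
 * drag every thread back into the top queue. */
void mlfq_boost_all(struct thread *threads, int n)
{
    for (int i = 0; i < n; i++)
        threads[i].queue = 0;
}
```

A thread that keeps burning full slices ratchets down to queue 3 and stays there until it blocks or the periodic boost rescues it, which is exactly the starvation scenario described above.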
And this is something that we'll see when we talk about the actual scheduling algorithm we're going to talk about later today. Most scheduling algorithms want a way to sort of take an external input so that they have mechanisms for users to tell them, or system administrators, or the people who package your Linux distribution, or somebody, to say: this is a particularly important process, or this is a particularly unimportant process or thread. And you can imagine sort of putting things in different categories. I have a backup task that I'm running. I don't want that to interfere with my foreground operation. Otherwise, people won't do backups, and none of you guys do backups anyway. So you guys already don't do backups. And then if the backup task made your computer really slow, then on that one golden day in the future where you decide to start taking backups, it would be slow and you would give up and go back into the state you're already in. Depending on what I'm doing, maybe if I'm YouTube, video encoding may not be a low priority task, but if I'm actually trying to do other stuff, it probably is. Video playback, on the other hand, depending on how much data it's pushing, might end up as a high priority task. Because I want to make sure that the video playback that's doing decoding has enough chance to keep buffers full so there aren't interruptions in the video stream. And then interactive stuff sits in the middle. And priority systems on some level are always relative. Has anyone ever set the priority of a process or task? Yeah, so there's a utility for this, and it's probably one of the dumbest-named utilities in Linux. It's called nice. Has anyone ever heard of nice before? Yeah, nice. I guess it means like make the process nicer to other processes. But nice is what you use to adjust the scheduling priority of a process. So you can nice it down. You can make the process less competitive when it comes to trying to get the CPU. Or if you have certain privileges, you can also nice it up.
So you can make it more competitive. You can make it more likely to run. This idea has also been applied in other places. So I think a couple of years ago, maybe longer, Linux introduced IO priorities. So I can actually now set the IO priority for different tasks, which does pretty much what you would think it does: when tasks are competing to get at the disk and other things, some can out-compete other tasks. All right, I don't really want to talk about lottery scheduling. Okay, we talked a little bit about lottery scheduling last time. There was some cool work on lottery scheduling a while ago that looked at this as a way to address starvation. Because in a lottery scheduling system, a thread that's waiting on another thread can give that thread its tickets and make it more likely to run. Okay, we're not going to talk about this. All right, it's awesome. Not the song I would have chosen, but... I'm sorry, that's mean Jeff coming out again. I'm really trying to be nicer. Just say that inside your head, okay. Any questions on scheduling algorithms before we move on? Okay, so let's talk about Linux. This is a fun story. So how many people run Linux in some form or another? Ah, that's awesome. Thank you. Okay, how many people because of this class? Okay, good, all right. Yeah, I mean, I don't have anything against other operating systems. And certainly, there's a lot of impact that people have who are working at Microsoft and other places. But Linux is fun, right? And there's a lot of kind of cool lessons that go along with Linux. I think you guys are probably young enough that, at some point in my life, Linux started to take off. I mean, I'm not that old, but there was still some astonishment about the fact that a project like this could work, and when you see some of the numbers, you might become more astonished.
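Under the hood, the nice command boils down to a couple of POSIX calls. This is a minimal sketch, not what the utility literally does; the function name `make_nicer` is made up, but `setpriority` and `getpriority` are the real calls:

```c
#include <stdio.h>
#include <sys/resource.h>

/* Adjust this process's nice value and return the result.
 * PRIO_PROCESS with pid 0 means "the calling process".
 * Raising the nice value (being nicer, less competitive) is
 * always allowed; lowering it needs privileges, which is why
 * only root can nice a process up. */
int make_nicer(int nice_value)
{
    setpriority(PRIO_PROCESS, 0, nice_value);
    return getpriority(PRIO_PROCESS, 0);
}
```

Calling `make_nicer(10)` from a process at the default nice level of 0 should leave it at nice 10, so it yields the CPU to competitors more readily. The IO-priority analogue on Linux is the `ionice` utility.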
But I think at this point, maybe you guys are sort of used to Linux: Linux already existed, it was always cool, there were always thousands of people working on it. So hopefully what I can do is sort of remystify Linux and reintroduce you to some of the things that are cool about it, and are really pretty astonishing. I mean, Linux is one of those things that's kind of magical. It's amazing that this works. I think before Linux tried this, if you'd tried to convince people that this was a development model that you could use to build software that is used by millions of people, and is frequently used in data centers and other places, nobody would have believed you. So Linux on the desktop and on the laptop has a pretty tortured and sad history, but Linux on the server is pretty widespread. Again, before this project started and really took off, if you tried to convince people that this sort of thing would work, I think it would have been hard. And it does work, it works really well. Okay, so let's talk about Linux. So I updated these slides today, and this number is a moving target. This is from a report that you can read about Linux that the Linux community itself issues. This is from 2015. So it's already a little bit out of date, but this is the most recent one I could find. 12 million lines of code in version 3.13. 12 million lines of code, non-commenting lines of code or whatever. So it's a lot, right? It's a big project. However, about 60% of the kernel code is device drivers. Specific pieces of code that are designed to operate a particular piece of hardware, whether that's a disk or a USB mouse or whatever. So a lot of the code in the Linux code base is not sort of core operating system code. It's all this device driver code. And there's actually some interesting debates about whether or not this code should be part of the kernel or not. And there's some advantages to having it be part of the kernel. I won't bore you with the details, but.
Two million lines are architecture specific. So if you look at the way Linux is organized, it's similar but not identical to your system. The layout of your OS/161 kernel is actually inspired by BSD. So if you look at mainline BSD distributions, they share a lot of the same structure, with slash kern and stuff like that. Linux is a little different, right? So Linux has slash arch, where architecture specific code lives. And that's also quite a bit of it. So that's just supporting all of the different machines that Linux runs on, because Linux runs on like everything, right? In fact, how many people have an Android phone? Okay, so you're all running Linux. Did you know that? Android runs Linux. Anyway, so Android probably is one of the biggest deployments of Linux today, but that's pretty cool. So, how many lines of actual kernel code do you think there are? Anyone wanna guess? So we've sort of explained away about nine million of these 12 million lines. How much do you think is actually in slash kernel? A hundred, a thousand? Yeah, I mean, order of magnitude, right? 10,000? It's about 100,000 lines of code. So a tiny, tiny, tiny little fraction of the overall Linux code base is actual kernel code that's gonna run on every device. So that's architecture independent kernel code: core scheduling, core system calls, core memory management, all the stuff you guys are doing in this class, right? And so it's not, you know, I think when people see this number of Linux's 12 million lines of code, they think Linux is so fundamentally, completely, a hundred percent different than anything I would ever build in a class like this, but it's not. I mean, you guys will write about 2,000 lines of code for assignment two. So by that point, you've done about 2% of the Linux kernel code. Like, you should feel good about that, right? I mean, a lot of people worked on this. This didn't just happen overnight. This is like decades later, right?
I mean, this project is not new. And when you build, has anyone ever, so I hope not too many hands go up, but I know a couple of the culprits in here that are gonna raise their hands when I ask this question: how many people have actually configured Linux and built it from source for a particular device? Yeah, I knew, okay. It's the usual suspects. Yeah, I mean, so it used to be, and again, I know it's hard to convince you guys that you're living at possibly the best moment in computing. Things have gotten so much better than they used to be. So when I was in college, I started messing around with Linux, and Linux used to be the kind of thing where it's like, I'm gonna put Linux on my laptop. And that was a three or four week long project, during which your laptop worked about 1% of the time, right? And most of the time when it worked, all you could do was use the terminal. There was no GUI environment, right? So it used to be, before Ubuntu came along and before people started really packaging Linux for real and the Linux community did a lot of work on supporting these types of devices, just getting Linux to work on a machine, particularly a laptop, was hard. I mean, I'm sure there were days when it was hard on desktops too, but when I was growing up, it was sort of hard on laptops. That was the real test: if you could get Linux to work on your laptop, it was like a golden moment for you. You were like, this is amazing. I remember I had a roommate who was really smart, and he worked on that for several weeks. And I think he gave up eventually. Because it was like, oh, okay, the screen's working, and you'd feel really good about yourself, and then you'd realize that Wi-Fi doesn't work. It's like, okay, well, that's not a particularly interesting computer. So a lot of the code, when you configure Linux, one of the things that you do, and again, this is not completely different than what you guys do when you run, you know, config assignment two.
The configuration system for Linux is a lot more complicated, but it's pretty much doing the same thing. It's selecting which pieces of the source code are going to be built as part of the kernel. And typically when you build Linux for any particular target, you don't include large amounts of this code. I don't know what the average number is, but I would suspect that most Linux kernel builds maybe use a million lines of code. You know, clearly you're gonna get the stuff in kernel, and then small chunks of the stuff from arch and from drivers, right? Because you don't need drivers for every device on Earth. You don't own that weird, you know, four-fingered mouse that somebody developed one day that Linux has support for, for some reason, because there's one guy somewhere who's maintaining it. You don't need that, right? So, you know, you pick the parts you use, and that gets built into the system. And we'll talk a little bit later about how Linux even gets a little bit smaller, because not all of the code that gets included in Linux actually gets built into the kernel itself. Linux, like all modern operating systems, has a loadable module system that allows it to load new code that it needs at runtime. That's pretty cool. All right. Linux generates kernel releases every two to three months. So I think the average is about two and a half now, right? This is pretty impressive. Think about the amount of time between Windows releases or between new versions of Mac OS. So this is fast. I mean, the people that work on this project, they do a lot of work. And it's even more impressive when you realize that there's 10,000 patches per release. So that's the amount of changes that are being made every two to three months. 7.14 patches per hour. So while we're having this class, there have been like seven. Of course this is an average, right? I don't know how many patches they get between two and three PM on Friday, right?
Maybe that's a big number. Maybe it's not. But the point is that the rate of change in Linux is really, really impressive. There's a lot of work going on. On a per release basis, there's about 1,400 contributors on average to each release. So again, how many people, like, I don't know, maybe close your eyes or don't look at your partner or whatever, how many people would maybe prefer to be working alone on these assignments? Oh, nobody, okay, that's good. Well, I don't know who your partner is. That's interesting. How many people would prefer to be working with 1,400 other people on these assignments? Really? Just the coordination overhead to get that to work is, no, no, no, you don't want that, trust me, right? By the time you guys get organized, the semester will be over. Yeah, I mean, it's really pretty impressive that they get this to work, right? And there's obviously a huge chain of command. Think of it like an army, right? There's people at the top, and then there's multiple levels, and lieutenants and sublieutenants and privates and whatever. There's just a huge chain of command involved in maintaining a project this big. The total number of people that have been involved in Linux is even more impressive: almost 12,000 developers over a 10-year span. And think about all the hurdles that you have to clear to get a patch into Linux. Has anyone ever gotten a patch into Linux? You should make that one of your life goals, right? Because this is not, you know, I really appreciate the pull requests that people have been sending us for various parts of OS/161. We've got some new tests and stuff like that. But if you sent those pull requests to Linus, he would just never talk to you again, right?
I mean, there are like huge posts that you can find online about how to prepare patches that you're gonna submit to Linux, so that one of the eight people who are gonna look at that patch before it actually gets into Linux doesn't just rant all over you on the Linux kernel mailing list about how poorly formatted this patch is, and how I'm never gonna be able to apply it, and why don't you read the manual, right? I mean, doing a project like this requires a lot of coordination and requires a lot of rules, right? So again, you can find long diatribes and long pieces of advice about how to prepare changes to Linux. So it's pretty impressive that not only have 12,000 people had enough knowledge that they can actually fix things as part of the system, but they were actually able to contribute, too. So, pretty, pretty cool. All right, so again, I've never submitted a patch to Linux. I think I should try. But, you know, all this stuff is pretty well documented. So at this point, this community is large enough that people have started to write up advice about how to do this. So part of what makes this work, again, is a high degree of structure. Every file has a maintainer. Every subsystem has a maintainer. What's, like, give me an example of a subsystem. Yeah. Yeah, yeah. So maybe the virtual file system layer that is the pass-through for all the file systems. I don't know what Linux calls that, but I'm sure it has the equivalent of that. Give me another example. What would be, what's a subsystem? Ooh, sorry. Yeah, I mean, like, all the little pieces of code you guys are writing, there's probably somebody who does scheduling, right? There is someone who does scheduling. We'll talk about him later. There's someone who does core VM. There's someone who does, like, the general file system support, and then probably every file system has at least one, if not several, maintainers.
So ext4 has a maintainer, if not a couple of them. And so, you know, the changes to Linux sort of bubble up from within these communities, with the goal being that by the time you get to the big shots, you know, Linus Torvalds and Andrew Morton and some of the people that approve patches that are kind of the final step to getting patches into the main line of Linux, there's a lot of confidence that these changes are good, right? Now, I found this when I was Googling around today, and I thought it was really funny. Linus has a reputation, I think, for being a little bit, I don't know, think of me with the volume turned up, right? And with a really stable job. And famous, right? Yeah, that's not fair to me. Like, I am nicer than Linus, okay? And so, if you think you know something about me, you could sort of bound where he is on the niceness spectrum. But he's also, obviously, extremely talented. I mean, if you use either Git or Linux, which a lot of you do, you use projects that were created by Linus Torvalds. So to some degree, he's given us these two incredible pieces of software that are really, really useful. Git may actually turn out to be a bigger contribution than Linux, we'll see. But, you know, these guys have been working together for years, and Linus still feels free to flame his co-maintainer for buggy changes. So apparently there was some code that got into the latest version of Linux that Linus turned out to not be very happy with in retrospect, and he just sort of publicly blames this on Andrew Morton, right? And that buggy crap was marked for stable, right? And this is the problem, right? Someone you trust, that you've worked with for a long time, apparently gives you a series of patches that they've signed off on that turn out to be bad, right?
And so, I mean, there's probably better ways to handle this if this happens, but again, it's Linus, like, what are you gonna do? I'm gonna go work on another open source kernel project that has 12,000 developers? I'll see you later. Okay, so there are a few people that have access to this mainline Linux kernel tree. So there exists somewhere in the world a repository, a Git repository, that contains the mainline sources of Linux. And there is a small group of very special people who can push to that repository. That may be one of the most exclusive code bases on Earth. And that's kind of where stuff goes. Now where do these patches come from? Well, you know, some of you, I think because of how you typically have used Git, got really confused when we asked you to add a new repository to your Git configuration when you were setting up assignment zero, so you could get changes from us. But the point is, this isn't like a weird thing to do in Git. This is exactly what Git was built to do. So a lot of the teams, the subsystem maintainers from Linux, have their own Git repositories that they push and pull from. Those Git repositories may contain changes from other developers that are working on the same part of the project, and that's where these things get staged. So if you want, for example, certain types of bleeding edge changes, rather than building code from the mainline Git repository, which has stuff that's really stable, you could clone, like, there's an mm repository that has some experimental memory management features, and you get stuff that that team is working on. Some of those things may never make it to the mainline. You know, the mainline developers just never believe that they work or think that they're important, and so they stay marooned in these private repositories for years. Okay, so, on to scheduling. The scheduler Linux had before 2.6 took time proportional to the number of runnable tasks. And when there are a lot of tasks, well, what does there being a lot of tasks mean? The system is busy.
So as the system gets busier, the scheduling algorithm wastes more valuable time, right? I mean, if you wanna write, like, an O(n) scheduler, that's fine, right? If there's only one task, waste as much time as you want, right? It's just not gonna make a difference. But when the system gets busier and busier, this scheduling algorithm gets worse and worse, right? So it's slowing down the system further. A system that's heavily loaded does not want the scheduling algorithm in the way. That's the time when we want it to vanish. So in 2.6, Linux rewrote the scheduling algorithm. Like every other part of Linux, the scheduler has a maintainer. The maintainer is a guy named Ingo Molnár, and he has worked on this for decades, as far as I know. He has been maintaining a Linux subsystem for a long time. And this is not unusual. So when I worked at Microsoft, there were really three core people who were responsible for Microsoft kernel development at the time. There was one guy who did VM, there was one guy who did scheduling, and there was one guy who kinda did everything else, right, worked on NTFS and some other things. And those people had incredible power. So I remember being in a meeting once and people were talking about, like, oh, should we do this thing or should we not do this thing? And it involved the memory management system. And about 20 minutes into the meeting, the guy who was maintaining the memory management system, a guy named Landy Wang, walked in. He was like, we're not gonna do that. And they were like, okay, right? Like, that was just the end of the discussion, because if he said they weren't gonna do it, it was just never, ever going to happen, right? And then a few minutes later, he left, right? And that was the useful part of the meeting, and the rest of it was just time wasting, right?
So it's not unusual for a large product, even though there are 12,000 developers, to have a small number of people that are really, really important because they're the institutional memory of a system like this. So Ingo's been doing this for a long time. He's written several iterations of the scheduler. He came in and said, let's write an O(1) scheduler. This is nice. This is a scheduler that takes constant time. And the O(1) scheduler works as follows. The O(1) scheduler combines two priorities to make scheduling decisions. There's one priority that comes in from the outside. That's a static priority. That's the thing that you would set using nice or whatever. And the other thing is this idea of a dynamic priority. So there's always been a goal here in Linux to accomplish what we were trying to do with MLFQ, which is to give a boost to interactive threads. And so there was code in this scheduler that was designed to try to recognize interactive threads and boost their priority so that they would get to the CPU a little faster. Now, the problem with this is that the code that was required to implement this interactivity boost got really, really hard to understand. Nobody really understood how it worked. It was very hard to make arguments about the kernel because the interactivity estimation was so weird. There were these magic numbers built into it. And so it was hard to reason about how a particular set of tasks would run on the system. And again, if you can't model a system, if you can't say, here's what should happen when I do this, then it's very hard to tell if the system is working properly. Imagine you're writing a test and you want to assert that a particular set of tasks was scheduled properly. If I have this black magic that's part of it, it's very, very hard for me to reason about whether or not things are correct. And this is something to keep in mind when you're building systems.
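The constant-time part of the O(1) scheduler is worth seeing concretely. The real 2.6 scheduler kept an array of run queues, one per priority level, plus a bitmap of which levels were non-empty; picking the next task is then a find-first-set on the bitmap rather than a scan of all runnable tasks. The sketch below is a simplified illustration of that idea, not the kernel's actual code (the struct and function names are made up):

```c
#include <assert.h>
#include <stdint.h>

#define NUM_PRIOS 140   /* the 2.6 O(1) scheduler used 140 levels */

/* One bit per priority level: bit set means that level's run queue
 * is non-empty. Lower numbers are higher priority. */
struct runqueue {
    uint64_t bitmap[3];          /* ceil(140 / 64) = 3 words */
};

void rq_mark_nonempty(struct runqueue *rq, int prio)
{
    rq->bitmap[prio / 64] |= 1ULL << (prio % 64);
}

void rq_mark_empty(struct runqueue *rq, int prio)
{
    rq->bitmap[prio / 64] &= ~(1ULL << (prio % 64));
}

/* Highest-priority non-empty level, or -1 if nothing is runnable.
 * This is O(1): at most three word tests plus one
 * count-trailing-zeros, no matter how many tasks are runnable. */
int rq_best_prio(const struct runqueue *rq)
{
    for (int w = 0; w < 3; w++)
        if (rq->bitmap[w])
            return w * 64 + __builtin_ctzll(rq->bitmap[w]);
    return -1;
}
```

Contrast this with an O(n) scheduler, which would loop over every runnable task to find the best one: that loop gets slower exactly when the system is busiest. (The dynamic-priority "black magic" discussed above decided which of the 140 levels a task sat at; the lookup itself stayed constant time.)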
Building things in a way that you can reason about, even if that might be a little bit suboptimal, is frequently a really good thing to do. Even if you give up a little bit of performance, it's much easier to test the system and to think about how it works. All right, so now enters the hero of our story. And I don't know either of these people at all. I mean, maybe at some point they'll, like, start thumbing down the video and flame me in the comments or something like that, but I don't know them. So Con Kolivas is an Australian anaesthetist. So apparently when you stand there all day over sleeping people who are being operated on, you have a lot of time to think about kernel programming. Hopefully he wasn't doing this during the operation, right? Like, hold on a sec. Whatever, he's sleeping, it doesn't matter. Let me, I've got this great idea, I'm gonna go hack a little bit. But this is his day job. And there are people in the Linux community that fall into this category. I don't know how many there are. I suspect a lot of people that contribute to Linux are full-time software developers, where being part of Linux is either something that they're kind of paid to do, or sort of goes along with their job, or something that they do on their off hours. But this person is not a software developer. At least not at the time. Maybe he quit this job. But when he started doing this, he knew nothing about it. And as he learned more about Linux, he became interested in scheduling, and particularly he became interested in a hard problem, which is this idea of interactive scheduling. Scheduling for interactive systems, user-facing systems. And remember, the user-facing Linux community is not kind of the strong dominant part of the Linux community. These are people who are sort of sometimes more ignored by Linux than they should be. These are the people that run Linux on laptops and things like that. This may have changed because of Android.
I don't know about that. So, Android came along a little bit later. Here's Con Kolivas. Okay, he looks like a nice guy. And one of the things that he did when he got started, which I think was really valuable, is he started trying to think about what does interactivity mean? How do we define interactivity? And so he started to propose these formal benchmarks. So, responsiveness: the rate at which your workloads can proceed under different load conditions. Interactivity: the scheduling latency and jitter present in tasks where the user would notice a palpable deterioration. So, if the user can tell that things are slowing down because there's high background load for some reason, then this is a measure of interactivity. So, one of the things that Con Kolivas did was he started building schedulers for Linux. And in 2004, he released something that's called the Rotating Staircase Scheduler. And the goal of the scheduler was to maintain good interactive performance while significantly simplifying the scheduling algorithm so that people could understand it. So, in his patch announcement on the kernel mailing list, he said it removes 498 lines of black magic aimed at improving interactivity and replaces them with 200 lines of new code implementing a simple rank-based approach. There are some similarities to MLFQ here, right? This is how he described it: starvation-free, strict fairness with good interactivity, at least as far as the above restrictions can provide. There's no interactivity estimator, no sleep-run measurements, and only simple fixed accounting. The task behavior can be modeled, and maximum scheduling latency can be predicted by the virtual deadline mechanism that manages run queues. The primary concern in this design is to maintain fairness at all costs, determined by nice level, and then to maintain as good interactivity as can be allowed within the constraints of strict fairness.
So, this is a nice mission statement for a scheduler, and it gives you a really, really good idea of what this is trying to accomplish. There are descriptions of this online. We have time to go through one in class, happily, okay? There's one parameter that comes into the scheduler, which is a round-robin interval, and the round-robin interval determines how long the complete round of the scheduler runs before I reset things and start over, and that'll make sense in a minute. There's one input that comes in from the outside, and that's this existing nice-level priority. So, the external priorities, the static priorities that were part of the O(1) scheduler, those are maintained in the rotating staircase. Now, in contrast with MLFQ, the priority defines not just one level that a task can run at, but a number of different levels. So, high priority tasks have, by definition, more chances to run during one round of the scheduler. With MLFQ, it's like: I'm in that queue, and if I'm ready to run in that queue, I'm good, otherwise I'm gonna go down to the next queue. But in the rotating staircase, you'll see that high-priority tasks not only have a chance to run early, they have multiple chances to run throughout the round, and I'll show you how this works in a minute. Each task gets a fixed amount of time at any particular level, and each level as a whole can also only run for a fixed amount of time. One of the goals of this algorithm is to ensure that every task runs at least once within a fixed interval. Remember, that was part of the mission statement. I want bounded latency for every task in this system, and it's very, very easy to bound the latency, the maximum amount of time that can occur between two runs of the same task, in the rotating staircase. All right, so here we go. So, when I start the scheduling epoch, I put every thread in a queue that's determined by its priority. Then I run threads from the top queue, round robin.
If a thread blocks or yields, it stays at that level. If a thread runs out of its individual quota, it moves down a level. And I keep doing this until I run out of the original quota that I had at that level, or until none of the threads at that level are runnable anymore. So, if I'm out of threads, and don't worry, I have a diagram. And then I continue this until I've moved down through all the quotas, or there are no threads that are runnable, at which point I start another epoch. Epic, epic, sorry, another word that Scott's been helping me with. Anyway, I really think that epoch is the right way to say that word, but apparently it's not. Okay, so here's an example. When this round of the scheduler starts, I have three threads at the highest priority, four threads at priority one, and two threads at priority zero. And one of the things that the rotating staircase does is assign these quotas to the levels initially. And these quotas bound how long each level can run. So, before I start, I know that I'm gonna run for at most 15 time units at priority two, at most 20 time units at priority one, and at most 10 time units at priority zero. Now, it may not be obvious how I could ever exceed these, but you'll see in a minute how I could if the algorithm didn't stop me. So, every thread will have a chance to run, but what's the maximum amount of time before I would restart the scheduling algorithm for this setup? Remember, this is something I wanna be able to model. I wanna be able to make guarantees about how long it's gonna be before a thread runs again. Let's see, yeah, 45 time units. Nine threads times five, right? 15 at the top, 20 in the middle, 10 at the bottom. Okay, so how does this work? So, I start running threads, and let's say that this guy runs out of its quota. So, if it runs out of its quota, it moves to the next level, and it gets more quota at that level.
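To make that arithmetic concrete, here's a tiny Python sketch of the bound. The thread counts and the 5-unit round-robin interval are just the numbers from the example above; the code is an illustration, not anything from the actual scheduler.

```python
# Per-level quotas for the example: each level's quota is the number of
# threads that start there times the round-robin interval.
RR_INTERVAL = 5
threads_per_level = {2: 3, 1: 4, 0: 2}  # priority -> thread count

level_quota = {p: n * RR_INTERVAL for p, n in threads_per_level.items()}
print(level_quota)  # {2: 15, 1: 20, 0: 10}

# No level can run past its quota, so an epoch lasts at most the sum of
# the quotas -- this bounds how long any thread waits to run again.
max_epoch = sum(level_quota.values())
print(max_epoch)  # 45
```

That sum is exactly the "nine times five" in the lecture: nine threads, each contributing one 5-unit slice to its starting level's quota.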
So, this is what's different from MLFQ. So, if this ran for five units without blocking, I move it down to priority one, and I keep going. And there's a bug in the slide, which is that priority one does not get any more quota at this point. Priority one still has a maximum of 20, sorry. At some point, I need to go fix these. Then I run the next thread from priority two, and let's say that thread blocked, or said yield, so it just goes right back on the queue. This guy runs, and let's say this thread blocks, so this thread is blocked, and now I'm just gonna keep running things round robin until everything is blocked, or until I've run 15 time units out of priority two. So, at this point, everything from priority two is blocked or ran out of quota. Okay, so what do I do? Go down to priority one. So, this is why it's called a staircase. The threads that finish their quota at any particular level get pushed down the staircase. That sounds mean, right, pushed down the staircase and killed? No, they just get pushed down one stair, so it's like a gentle push, and they get another chance to run at a lower level. So, on some level, being high priority not only means that I get to run early in the scheduling round, it means I get more chances to run. So now I would be at priority one, and again, I'm sorry, bug on the slide, it'd be 20 total units. So I start running threads here. Let's say that this guy blocks, this guy blocks, this guy exhausts its quota, so it moves down to priority zero. Let's say this one blocks, and so now what you see here is that this original thread that started at the top queue, that's a high-priority thread, has another chance to run. But the reason it has another chance to run is because a bunch of threads from this queue blocked. If every thread in front of it had used its whole quota, it would not have had a chance to run, and everything would just get pushed down to another level, okay?
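The rules walked through so far can be sketched as a toy simulation of one epoch. Everything here, the thread names, the behavior lists, and the simplification that a blocked thread sleeps until the next epoch, is made up for illustration; the real scheduler is kernel C code that handles wakeups, yields, and partial slices properly.

```python
from collections import deque

RR_INTERVAL = 5  # time units in one round-robin slice

def run_epoch(priorities, behaviors):
    """Toy model of one rotating-staircase epoch.

    priorities: thread name -> starting priority (higher runs first).
    behaviors:  thread name -> list of run lengths, one per turn; a
                value < RR_INTERVAL means the thread blocked early.
    Returns [(thread, level), ...] in the order threads were run.
    """
    top = max(priorities.values())
    # Fixed per-level quotas, set before the epoch starts.
    quota = {p: RR_INTERVAL * sum(1 for q in priorities.values() if q == p)
             for p in range(top + 1)}
    queues = {p: deque(t for t, q in priorities.items() if q == p)
              for p in range(top + 1)}
    order = []
    for level in range(top, -1, -1):
        while queues[level] and quota[level] > 0:
            t = queues[level].popleft()
            used = behaviors[t].pop(0) if behaviors[t] else RR_INTERVAL
            blocked = used < RR_INTERVAL
            quota[level] -= min(used, quota[level])
            order.append((t, level))
            # Exhausting the slice pushes the thread one stair down,
            # where it gets a fresh chance; a blocked thread sleeps
            # until the next epoch in this simplified model.
            if not blocked and level > 0:
                queues[level - 1].append(t)
        # Level quota gone: threads that never ran slide down a stair too.
        if level > 0:
            queues[level - 1].extend(queues[level])
        queues[level].clear()
    return order

# The 3/4/2 setup from the example; A is high priority but CPU bound.
priorities = {'A': 2, 'B': 2, 'C': 2,
              'D': 1, 'E': 1, 'F': 1, 'G': 1,
              'H': 0, 'I': 0}
behaviors = {'A': [5, 5], 'B': [2], 'C': [3], 'D': [2], 'E': [1],
             'F': [5], 'G': [3], 'H': [5], 'I': [4]}
order = run_epoch(priorities, behaviors)
# A runs at level 2, exhausts its slice, and runs again at level 1
# only because the threads ahead of it blocked.
```

In this run, A never gets a turn at level zero: that level's quota runs out first, so A just sits there until the epoch ends and it goes back to priority two, which is the strict per-level bound at work.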
And so let's say this guy, this is a high-priority thread, but it's also CPU intensive, so it just keeps falling. So now it's down to priority zero, sorry, I'm gonna use the wrong numbers, and now I just repeat this process. And at some point I'm out of threads to run, or I've finished my quotas at every level, and I start over. And this is the point of this discussion. Any questions about this? Yeah. So threads never move up. When I get to the bottom, I put every thread back where it started, and I start again, right? So when a round of the scheduler finishes, every thread goes back to its original priority queue, and then I start up again at the top. Yeah, static priorities are a part of the system, right? Yeah, I would have a default, but I want inputs from the outside, right? Yeah. It remains the same, that's the bug on the slide. I'm really sorry. The goal here is that if I move a thread down and increase the lower level's quota, I can't model how long things are gonna take. So let me back up here. Yeah, I'm sorry. Okay, it is 20, so this is right. So the point is that when I get here, right, I have four threads, I'm almost done. I have four threads that were originally in priority one, and I have this thread that came down from priority two. If all these threads run their entire quota, I stop, I don't run thread five in priority one. I will push it down to the next level, right? And it will have another chance at priority zero, but you're not guaranteed to be able to run at other priority levels. The ability of threads to continue running at lower levels relies on the threads in front of them blocking. Yeah, Blake. Yeah, that's a good question, I don't know. Yeah, I'm not sure. I mean, it clearly matters, right? Yeah, it probably does, right? So the question is, when I start moving threads down, what order are they put in? Because it matters, right?
Because I'm running them FIFO, and if I run out of quota for that level, I'm not gonna run the ones at the end, right? And it probably is at least ordered by priority. How you order within those priorities, I don't know. That's a good question. All right, I think I'm out of time. You guys can start packing up. I'll answer one more question. They can be, right? If that level still has quota, right? So if the level still has quota, they can come back to the level they were at. Otherwise, I think they can run again at lower levels, yep? I think one of them is going to come back up again. They can run again. I think so, yeah. I'll have to review that. All right, there's more to this. I'll send a link to the video from last year where the rest of this is explained, and I will see you guys on Wednesday.