separation in scheduling policy and mechanism, right? This is a nice example of a clean separation between policy and mechanism, and this is kind of a design pattern or an example of a way to build systems that we'll see examples of later in the class as well, when we talk about virtual memory and file systems. So I'm gonna give you a little bit of an overview of some of the tensions that drive scheduling, right? So what do we expect from computers, and how does that line up or not line up with the most efficient way to allocate resources, right? Because we talked about that being one of the kernel's goals, right? That it's doing resource allocation and trying to do that efficiently, but when it comes to scheduling, there are some interesting trade-offs, right? And then finally, hopefully today, we'll get through some of the simple canonical scheduling algorithms. So my goal today is, to try to do this, I'm not gonna do any review this morning to save us a little bit of time, which I just lost by being late. But hopefully what will happen is on Friday we'll do kind of a fun lecture and we'll talk about a real Linux scheduler, right? A real modern, fairly interesting Linux scheduler, and there's kind of a fun story about the Linux community that comes out of it, okay? All right, so, yeah, so we actually have some videos online now, I'll link them off the website. And in every one, I seem to be wearing the same thing. So I'm gonna work on that actually. I have some new shirts in the mail, as it turns out, a sweet gift from my wife. And yeah, so last year, I don't know why we did this, but last year the camera followed me and I walked around a lot. So people who have been watching those have probably been frustrated by that feature of the videos, and sorry, there's not much we can do about it now. This year we're just looking at the slides, right? 
Which is probably the better way to do it, right? So the slides are clearly visible in this year's videos. All right, so remember we were going through scheduling. We said, you know, what is scheduling? You know, when do we get a chance to schedule? And then how do we schedule, right? And we can clearly divide scheduling into kind of two parts, right? So the first part is what's the mechanism for switching threads? And this should not be a new idea to you, right? Because this is what we've been talking about for the last week, which is context switching and saving thread state, et cetera, et cetera. So that's our mechanism, right? But now we've gotten high enough up in the sort of kernel stack of abstractions on top of the CPU, if you will, that we can actually start to talk about some of the policies, right? So now we have this mechanism that says I can start and stop threads safely. I can save their state, and then when they run again, things look the same. I have some tools in the kernel, these synchronization primitives, that allow threads to coordinate and make sure that state is modified safely. And so now we can talk about, you know, an interesting thing, which is what should be the thread that's running, right? How do I choose the next thread, and how do I allocate the CPU or multiple cores, right? So again, we talked about this: context switches and blocking operations move threads between the running, ready, and waiting queues, right? And yeah, I don't know, that's one of those slides that's a little too cheeky, even for me. Okay, so again, scheduling is a nice example of this nice separation between policy and mechanism, right? And I mean, I think we've mentioned this in the past, but who wants to speculate about why it makes sense, when you design a piece of software or a software system, to try to separate your policy from your mechanism, right? 
Why not just, you know, integrate them together? You know, why not? That sounds nice. Integrate is a good word, right? We should just integrate lots of features into our new smartphone app, right? Okay, so I'll take the first three words of that answer, right? And then go with it. Policies might change, right? So for example, we've built up these nice mechanisms for how to switch threads on and off the CPU, right? And now what that means is that I can use those mechanisms to support a large number of interesting scheduling policies, right? So, you know, the reason is that my policies are gonna use these mechanisms, right? But if I bake these all together, then what happens is, when I want to write new policies, I have to bake in the same mechanisms, right? And then those mechanisms, you know, end up very intertwined with the policies, and if the mechanisms change, I have to change every policy, et cetera, et cetera, right? So it makes sense, you know, to create these interfaces, you know, context switching and other tools, and then just allow the scheduler to use them, right? Okay, so let's go through this example and look at each one of these, and kind of tell me whether you think it's a policy or a mechanism. So let's see, deciding which thread to run. One of the victims, Simon, policy or mechanism? That's a policy, right? Performing a context switch, Sarah, policy or mechanism? Mm-hmm. So maintaining the running, ready, and waiting queues. Seshia, policy or mechanism? Satish? What's your name? What's that? Shri Ragh, oh, sorry, okay. That's who I wanted to answer the question. So that's a mechanism, right? Giving preference to interactive tasks. Someone just said it, wow. They were so excited, was that you? Someone just said it, maybe it was Bart. That's a policy, right? Or maybe I just, I didn't get much sleep last night, so I'm gonna be hearing things today. You guys must be spacing on those. 
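To make the separation concrete, here's a minimal sketch in Python (not OS 161's C, and with made-up names like `Scheduler` and `fifo_policy` that are my own, not the course's): the mechanism is the queue bookkeeping and the (stubbed-out) context switch, while the policy is just a swappable function that picks the next thread.

```python
from collections import deque

class Scheduler:
    """Mechanism: maintains the ready queue and dispatches threads.
    Policy: a pluggable function that picks which ready thread runs next."""

    def __init__(self, policy):
        self.ready = deque()
        self.policy = policy  # the policy can be swapped; the mechanism stays put

    def make_ready(self, thread):
        self.ready.append(thread)

    def schedule(self):
        if not self.ready:
            return None
        nxt = self.policy(self.ready)  # POLICY: which thread should run?
        self.ready.remove(nxt)         # MECHANISM: queue bookkeeping
        return nxt                     # (a real kernel would context-switch here)

# Two different policies sharing the exact same mechanism:
fifo_policy = lambda ready: ready[0]
priority_policy = lambda ready: min(ready, key=lambda t: t["prio"])
```

The point of the sketch is that writing `priority_policy` didn't require touching any of the queue or dispatch code, which is exactly the argument for keeping the two separate.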
Oh, I know, it gives it away. Stop it, you guys are so clever. I know, I should use a fixed-width font. Okay, well, using timer interrupts to stop running threads, maybe somebody who doesn't have such an eagle eye can answer this one. Do you know? Mm-hmm, yes, the wide letter. And if I decided to choose threads to run at random, Andrew, that would be? Okay. Somebody should be taking notes of things to fix on the slides, not like me, for example, but I'm giving a lecture, so it's difficult to do. Okay, so the other thing I want you guys to keep in mind, right, is that how I schedule the CPU matters, right? We're gonna talk about the operating system's role in dividing up other kinds of resources on the system, like memory and disk I/O and stuff like that, right? That's the operating system's job, right? Like it's entrusted with multiplexing resources. But the CPU is particularly important, right? Why do you think that would be? You need a victim? Tim? Yeah, okay, yeah, I was worried about your answer for the first sentence, but yeah, I mean, in order to use anything else on the system, you normally have to get to the CPU first, right? If I wanna initiate some I/O, I've gotta run, right? If I wanna use memory, I've gotta be running, right? So the CPU is kind of a gatekeeper, right? It's not to say that I'm gonna initiate some I/O and then use the CPU the whole time, but frequently, in order to use any other part of the system or initiate an operation, you have to start with the CPU, right? So this is kind of an important part of scheduling the system, right? And this is why we talk about CPU scheduling, right? We'll talk about dividing other kinds of resources, right? But the CPU matters quite a bit, right? And this is the truth, right? So if you schedule resources effectively, you can make even a modestly powered system seem fast, right? And if you don't schedule resources effectively, you can make a powerful system seem really slow, right? 
And this is the truth, and you guys aren't exposed to this very much, because most of the schedulers you guys run on your machines hopefully fall into this category. And so they make your powerful, even more powerful systems seem even more powerful, right? But I guarantee, if you ran a bad one, I don't know if this would be a worthwhile experiment to perform, right, but if you implement some sort of dumb scheduling policy, you will notice, right? All right, so let's talk a little bit about, especially in preparation for Friday's lecture, which is gonna be about interactive scheduling, something I thought was kind of important to think about, right? So, typical usage patterns for interactive systems, right? Server systems are quite different. And again, we'll get to that a little bit on Friday, right? So when I thought about this, I said, okay, well, what do we expect the system to be able to do, right? So one of the things is respond, right? Responsiveness. So when I click on something, when I provide some input, I expect the system to do something, right? And usually the system indicates that it's doing something by running a task that paints to the screen or whatever, right? But when you go and you start typing into the address bar on Firefox, or you click on some part of the screen, you guys will notice, right? So this is kind of the stuff that you guys will notice about your machine if it doesn't work, right? So you guys will notice if there's a lag when you start to type, right? You'll notice if there's a lag before the menu draws, right, if you click in a particular part of the screen. This is just part of interactive use, right? Another thing I think that's increasingly important, right, is this idea of continuing to run a process at kind of a fixed rate, right? So what's an example of this? 
What's something that many of you guys probably do with your computers that requires this kind of like fixed interval keeping up with something, you know, where you will notice if it doesn't work well, Jeremy? Games, okay, games are a good example. I was thinking of something else, yeah, Feroon. Yeah, watching video, right? Like watching video online, right? I mean, that's when you're watching a YouTube movie, right? You will notice if it starts to stall or lag or whatever, if the frame rate doesn't keep up, but that doesn't require the full throughput of your machine, right? But it does require that your machine kind of periodically keep up with something, which is the frame rate required by the video that you're watching, right? And then there's also this, and so I thought of that as kind of active waiting, right? Like you're sitting there, you're watching it, right? You're not necessarily waiting for it, but there's this continuity associated with it, right? And then finally, there are certain things on your machine that you expect to complete, right? You expect them to happen. A lot of times these things, you're not necessarily even aware of them happening, right? These are frequently things that are to be done in the background, right? But there are features on your machine that then require this, right? So what's an example of something like this? This might be a little bit more subtle because again, this isn't something you guys necessarily always notice. Give me one. Yeah, okay, that's a reasonable example, right? When you guys launch your, when you open up a window manager or something, there's a bunch of things that get launched and Dropbox starts up and starts trying to synchronize stuff or whatever. So that's a good example. What else, Frank? That's a good example. Yeah, that's a good example. I'm laughing for another reason, but I like this fragmentation. It's really mysterious. 
Greg, yeah, I mean, that shouldn't be like this, but it frequently is, right? Yeah, waiting for Java, waiting for a Java applet to finish loading, right? That can seem like it takes years. Yeah, but then there's all these things that your computer is doing frequently in the background. So, for example, one of the things I was thinking of is building indexes, right? So if you guys use things like Spotlight or the search features on Windows, frequently those components are periodically scanning your system and building indexes, so that when you start typing into that little Spotlight box, you get answers quickly, right? It's not building that on demand, right? That would be slow, right? It does some of these things in the background, but you would notice if those processes didn't finish, because search results would start to be incomplete, certain things wouldn't work, or whatever, you know? So there's all these things that are kind of going on, and they need to kind of finish eventually, right? But they're not necessarily associated with interactivity, right? Okay. Right, so again, responsiveness is that idea that when you tell the computer what to do, it kind of perks up and notices. So, for example, I just clicked on my caffeine icon, right, to keep the screen from going off, and the caffeine icon is now showing the little brewing thing, right? So I know it's working, right? If it didn't do that, I wouldn't know that, right? And we just did some examples of this, right? And, you know, web browsing, editing, chatting, you know, whatever, things where there's this feedback loop, right? Where you do something and you guys, you know, you don't think about this, but you're expecting a response from the machine, right? And sometimes, again, that response isn't necessarily I'm done, it's I understood what you told me to do, oh, master, and I am working on it, right? But these are some examples of these responsive type tasks, right? 
So continuity is this idea of, again, sort of active waiting, but you're expecting to continue to do something at a fixed rate, right? And I think, especially with media, this has become really, really important, right? And we just talked about some of these things. I mean, you know, even blinking cursors, right? Which I hate, which is always the first thing, I did you the favor of disabling the blinking cursor in your, you know, your VM for this class, because it drives me nuts, right? And the webpage that I go to to figure out how to do that every time has this whole ramble about Chinese water torture and how blinking cursors are like that, right? So I hate them, but if you do like blinking cursors, then this is an example of something. So if you're sitting there watching a machine, the cursor is like blink, blink, blink, blink, blink. You know, like, what is going on? So, you know, music or movies, and then, you know, stupid web animations, whatever, right? Things that require some, you know, it's essentially asking the machine, I need you to do these things at this time, right? Playing audio is another example of this. I mean, your MP3 player is not burning a lot of cycles, but if it doesn't get to run periodically, you notice, right? And then there are these sort of completion type tasks, right? So we start things in the background, and a lot of times these aren't user initiated, right? These are things that are going on, so doing backups, right? So if you guys use things like Time Machine, you know those things can take forever, but you expect them to complete at some point, right? So the idea with this exercise is just to get you to think about some of the different things that you expect your computer to do and the different types of expectations you have regarding those events, right? And frequently, you know, it's not like we classify applications into these categories, right? 
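That MP3-player point can be made with some back-of-the-envelope arithmetic. The numbers below are my own assumptions, not from the lecture: CD-quality audio (44.1 kHz, 16-bit stereo) and a hypothetical 64 KiB device buffer.

```python
# How often must an audio player's thread get scheduled?
# Assumed numbers: CD-quality audio, hypothetical 64 KiB device buffer.
sample_rate = 44100        # samples per second
bytes_per_sample = 2 * 2   # 16-bit samples, 2 channels
bytes_per_second = sample_rate * bytes_per_sample  # 176,400 B/s

buffer_bytes = 64 * 1024
# The buffer drains in this many seconds; the player thread must run
# at least this often to refill it, or playback glitches audibly:
deadline_s = buffer_bytes / bytes_per_second
print(round(deadline_s * 1000, 1), "ms")  # roughly 371.5 ms
```

So the player barely uses the CPU, but it carries a hard periodic deadline of a few hundred milliseconds; miss it even once and the user hears it.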
So if we're a music player, it's like, when I click on it, I want it to change tracks, I want some feedback indicating that it understood what I told it to do. You know, watching is this idea of playing the selected track, and then finishing might be, you know, updating some album artwork in the background or something, something that happens periodically that, again, if the album art doesn't get updated, then I start to get annoyed over time, right? But I don't necessarily know that this is happening, right? So with the web browser, the idea of click is sort of follow a link, you know, watching could be playing web video, and then finishing is indexing search history, syncing, if you use things like old-fashioned web browsers like Firefox. Chrome probably does this too, but I don't know how to use Chrome. So these are some examples of how, you know, single applications combine several of these features, right? So hopefully, you know, if you think about some of those, you'll realize that this is what makes CPU scheduling interesting and challenging, right? And the reason why people are still working on it, right? You know, as recently as probably 10 minutes ago, right? It's because it requires balancing these different goals, right? And this is particularly interesting on interactive systems, right? So imagine that your computer could predict the future, and we'll talk about some scheduling algorithms that take that as a given, right, and they're useful to think about just from a conceptual perspective, right? But if you imagine that that was true, then it's not always the case, I would argue, that you want to do even what you could argue would be a provably optimal resource allocation, right? You know, keep all the resources on the machine busy, right? Maximize throughput, right? Because maximizing throughput conflicts almost directly, in frequent cases, with this idea of meeting deadlines, right? 
So I've got this process, I've got it started up, it's using all the parts of the machine, and suddenly I need to redraw the blinking cursor, right? So I need to stop that, move it off the CPU, and start up the thread that's gonna draw the blinking cursor, right? So that's not the most efficient way to do things, right? But your expectations for how your computer works dictate that that's what happens, right? So balancing these is very tricky. And, you know, responsiveness and continuity both require this idea of meeting deadlines, right? So responsiveness is interesting because it's this idea of, you know, the computer is kind of sitting there constantly, you know, waiting for you to mess up its whole life, right? It's like, I just started this big defragmentation and everything's going good, and then suddenly, like, you wanna browse the web? You know, no, you know, come back later, you know, I'm working on a big defragmentation, right? That's gonna take like 20 minutes, right? Like, come back and bother me in 20 minutes, right? That's not how your computer works. It would be interesting if that happened, actually. I mean, people already have this tendency to personify their machines, right? But that would be an extension of that, maybe. And then continuity, I think, is this idea of predictable deadlines, right? So periodically, the music player needs to write data to the audio buffer, right? Or the Flash player needs to repaint the screen, or whatever, right? And so continuity is these predictable deadlines, whereas responsiveness is unpredictable, right? I don't know what's gonna happen. I need to respond to whatever you suddenly decide to do with your computer. And then again, you balance this against throughput, right? So throughput is this idea of, you know, again, I'm trying to put all the resources of the machine to work, you know, to the greatest degree possible, right? 
I try to get as many CPU cycles used and as much of the memory allocated as I can. I want the disk, you know, going, assuming I have work to do, right? And throughput, again, kind of directly pushes against these other goals, right? Does this make sense to people? Stop me if you're confused or ask a question. I know we kind of rocketed off today. All right, so, with interactivity, right? Humans are sensitive to interactivity, right? I would argue, at this moment, those of you who have your laptops out have no idea whether or not your system is allocating resources optimally for throughput. You have no clue. And I would argue we almost never have a clue, right? You've just used the machine and assumed that things will happen in the background and eventually they'll get done, right? But you are very sensitive to responsiveness, right? Responsiveness and continuity. Those things you will notice, right? If the machine suddenly slows down, the cursor blinks a little bit weirdly, you'd be like, oh, what's going on, right? But you have no clue whether or not the machine is doing a good job of optimally allocating resources for throughput, right? So, what inverts this, right? In what case would we have a system where some of this might be flipped around a little, where we might say that interactivity is not as important and throughput is more important? Yeah. Yeah, a lot of times servers, right? So, and this gets a little tricky too, because certain types of servers, you know, web servers, are serving interactive workloads, right? So, with the web server, it's like, gotta get the page back to the user, right? A web server might be like, I'm collecting these 5,000 requests together so I can serve them all very efficiently, but in the meantime, those 5,000 users are sitting there watching the browser wheels spin around, right? But certain types of server workloads, right? 
Such as processing workloads, or a lot of the stuff that Google farms out, these massive, you know, MapReduce jobs. Some of those are really just throughput-driven, right? And optimal resource allocation for throughput, in those cases, can make the difference between something taking, I don't know, you know, like 24 days to run, and four, right? And four is nicer, you know? So, it's funny, a lot of times as computer scientists we think about powers of 10, but when things get really slow, you don't really care about powers of 10, right? You care about, like, powers of two, right? Is this gonna take a week or three days, right? That feels like a lot less time. Yeah, so again, you guys have no clue whether or not your computer is doing a good job at allocating some of these resources, right? And again, I mean, one reason for this, right, is that we don't like our time to be wasted, right? We're sensitive to our time with the machine, and we don't really care if the backup takes another hour, right? Because we just assume it's gonna finish at some point, right? All right, so let's jump back up to the top a little bit and talk about how to evaluate schedulers, right? So one way to evaluate schedulers is: how good is the scheduler at meeting deadlines, right? How responsive is it, right? Whether those deadlines are predictable deadlines caused by continuous tasks or unpredictable deadlines caused by user interaction, right? And then the second thing, and again, these are in direct conflict with each other, right, is: how efficiently does it allocate system resources, right? Because there's no point idling parts of the machine that could be in use, right? So getting something that works well on both these axes is essentially our goal, okay? Okay, I just talked about this, okay, great. 
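The tension between those two axes shows up even in a toy example. The job names and burst lengths below are made up for illustration; the point is that two orderings of the same work have identical throughput (same total work, same finish time) but very different responsiveness, measured here as average time before a job first runs.

```python
# Three jobs with toy burst lengths, in arbitrary time units:
bursts = {"long": 10, "short1": 1, "short2": 1}

def avg_wait(order):
    """Average time each job waits before it first gets the CPU."""
    clock, total = 0, 0
    for name in order:
        total += clock        # this job waited 'clock' units to start
        clock += bursts[name] # then runs its full burst
    return total / len(order)

# Same jobs, same total work, same throughput -- different responsiveness:
print(avg_wait(["long", "short1", "short2"]))   # 7.0
print(avg_wait(["short1", "short2", "long"]))   # 1.0
```

Running the short jobs first makes the system feel seven times more responsive without costing any throughput at all; the hard cases are the ones where you can't have both.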
So, I mean, I just wanna point out one thing. We're not gonna talk about real-time scheduling in this class; real-time scheduling could probably be the focus of an entire separate course all by itself, right? But, so, we know about deadlines for things like video, right? So if I don't paint the next frame of the video within a certain amount of time, the user will start to notice that the video is jumpy or laggy or whatever, the sound will get off or something weird, okay? But there are systems where meeting these deadlines is actually really important, right? So what's an example of this, right? Who can give me an example of a system where, if I didn't meet a deadline, like, I might not have a computer system anymore? Spencer? Anybody wanna help them out? Sirach? Yeah, oh yeah, okay, that's a good one. Like, well yeah, withdrawing the cooling rods, it's like, well, I decided to run the defrag at that moment and the cooling rod didn't get withdrawn, you know. But, I mean, so yeah, what kind of device might be like that? So, like, robots, right? Or a great example of this, right, like the Mars rover. It's like, oh yeah, well, a few milliseconds here and a few milliseconds there, right? It's like, nope, you're gonna have a jumble of computer equipment at the bottom of some Martian, you know, valley, right? So it's just not a good thing, right? So in certain cases, with, like, surgical robots, for example, you know, like, whoops, you know, I was doing that eye surgery, and again, I ran the defrag and things slowed down a bit, and now I had to paint the blinking cursor that was really important for the doctor, and whoops, you know, you don't have an eyeball anymore, right? Or you have, you know, 20/2000 vision. 
So anyway, there are cases where this is actually pretty important. There are whole approaches to doing deadline-driven scheduling that are really important for real-time systems, where, again, missing these goals could hurt somebody or damage something, and your desktop computer just isn't in that category, okay? All right, so before we talk about a few scheduling algorithms, any questions about scheduling principles? Guru? Oh yeah, so does anyone understand what the default OS 161 scheduler does? Has anyone looked at it? Yeah, it does nothing, right? But what does that result in? What's that? Yeah, it actually turns out to result in essentially round robin, right? So the idea is, when threads run, they run for a scheduling quantum and then they get put back at the back of the ready queue, right? And so by not reordering the ready queue, the behavior that results is round robin, right? And you can run some tests to confirm that this is true, right? You guys will have a chance to play with that a little bit in assignment 2, but the idea is that, right, you are given a chance periodically to reorder the ready queue, right? If you don't, then the system will just keep doing what it does by default, which is putting things back on the end and taking them off the top, right? And that will result, roughly, if you have a series of threads that are continually ready to run, in a round robin schedule, right? Which we'll talk about in just two minutes from now. That was a good question. Any other questions about scheduling principles and sort of scheduling goals? 
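The "do nothing and round robin falls out" behavior described above can be sketched in a few lines. This is a Python toy model, not OS 161's actual C code: take the thread at the front of the ready queue, "run" it for one quantum, and put it back on the end, with no reordering anywhere.

```python
from collections import deque

def run_quanta(threads, n_quanta):
    """Simulate a scheduler that never reorders the ready queue:
    pop from the front, run one quantum, append to the back."""
    ready = deque(threads)
    order = []
    for _ in range(n_quanta):
        t = ready.popleft()  # MECHANISM: take from the top
        order.append(t)      # "run" it for one quantum
        ready.append(t)      # quantum expires: back of the queue
    return order

print(run_quanta(["T1", "T2", "T3"], 7))
# ['T1', 'T2', 'T3', 'T1', 'T2', 'T3', 'T1'] -- round robin, for free
```

The interesting bit is that there is no round-robin "policy" anywhere in the code: the FIFO ready queue plus quantum expiry already produces it, which is exactly why the default OS 161 scheduler can get away with doing nothing.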
Goals, okay. So we're gonna talk about, you know, schedulers that use certain types of information, right? So when you start to think about making a scheduling decision, right, you're the scheduler, you, you know, the operating system has asked you to pick the next thread to run, okay? What kind of things might you want to know or think about? What might be something you'd like to know? Greg? Okay, so I might wanna know something about priorities, right? So essentially I might have some artificial way of establishing some importance between different threads, right? That's pretty common. What else might I want to know? Sure, okay, yeah, that's a good thing too. If I knew how long the thread would run without blocking, for example, that might be a useful thing to know. Alyssa, what else would I wanna know? Yeah, yeah, these are great answers, so what resources a thread will use. So, okay, let's continue this exercise, it's kind of fun, but now when you answer the question you have to tell me: do you think that you'd be able to determine this or not, right? So priorities, do you think you'd be able to determine that? Yeah, those are just artificial numbers we're gonna attach to a thread, right? Do you think you would be able to guess how long a thread would run without blocking? Probably not, right? You're probably gonna have to run the thread to find that out. What about the resources that threads are going to use? 
Probably not, right? Now these are, again, these are good things to think about, like, what would you want to know, right, if you were the omniscient, you know, oracle scheduler, right? Yeah, okay, yeah, so you might wanna know if the thread is interactive, doing some kind of interaction, right? So maybe the thread that needs to paint the screen or draw the blinking cursor needs to be given a little bit of extra priority. That's a good point. What else? AJ, give me one. Yeah, same, yeah, like other threads that are on the ready queue, right? I'm gonna look around and see what's out there. Okay, let's see what I actually had up on the slide. So, right, so some of these are varieties of what's about to happen, right? What will happen next, right? If I do this, what is the next thing that will happen? And when we talk about access to this kind of information, we're talking about oracular, I like that word, or oracle schedulers, right? Schedulers that have access to information about the future that real schedulers don't, right? And clearly, you know, if you can implement one of these, that would be cool, and it'd be neat if you took this class and built one, but if you can do that, then you're probably gonna do well in life. Most of us can't, so we're gonna use them as a point of comparison, right? So we're gonna say, compared with this oracle scheduler, here's how well ours does, right? Frequently, right, and this is kind of a big mantra we'll come back to in the next few slides, right, usually what systems do as a substitute for knowing what's about to happen next is they use information about what just happened, right? So, you know, the mantra is: use the past to predict the future, right? Everybody say that along with me: use the past to predict the future. Try that again, a little louder: use the past to predict the future. I'm not being weird about it 
because we'll talk about this many times in this class, right? This is a system design principle, which is kind of, you know, the best we can do, right? We can't predict the future, but we know what just happened, right? And it's better than nothing, right? And sometimes it's not very good, right? Because what might've just happened is the machine might've just been sitting there for 10 minutes, and what might be about to happen is you're gonna come back and start to browse the web. But, you know, again, it's better than nothing, right? It's better than just being like, I know nothing about anything. Okay, so let's see, oh, Firefox has an update ready to install, that's helpful. Okay, aha, here we go, great animations for these. Okay, so the first type of schedulers we'll talk about are schedulers that kind of take that approach, right? Sean, sorry. What's that? Oh, whoops, oh, sorry, what does the user want, right? So that was the priorities part that Greg got in there, right? And there's usually some way of layering, on top of the information about the machine that the scheduler will be collecting on the fly, again, some kind of information about what users want, right? And on Friday we'll talk a little bit about how this may or may not mean anything. Yeah, Remo. 
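One classic, textbook instance of "use the past to predict the future" (this is the standard exponential-average trick for approximating shortest-job-first, not something specific to this lecture) is estimating a thread's next CPU burst from its previous ones. The parameter names and starting guess below are illustrative.

```python
def predict_bursts(bursts, alpha=0.5, initial_guess=10.0):
    """Exponential average of past CPU bursts:
        prediction_next = alpha * actual + (1 - alpha) * prediction
    Returns the prediction the scheduler held *before* each burst ran."""
    prediction = initial_guess
    history = []
    for actual in bursts:
        history.append(prediction)                       # what we predicted
        prediction = alpha * actual + (1 - alpha) * prediction  # fold in the past
    return history

# A thread whose bursts settle down: the predictions chase the recent past.
print(predict_bursts([6, 4, 6, 4], alpha=0.5, initial_guess=10.0))
# [10.0, 8.0, 6.0, 6.0]
```

With `alpha` near 1 the scheduler trusts only the most recent burst; near 0 it trusts its long-running average. Either way it never actually knows the future, it just bets that the future looks like the past, which is exactly the mantra.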
Yeah, I mean, yes, right? I mean, because normally what happens is the user doesn't really know what they want to happen, right? Like, you could tell the scheduler, I want this to have a higher priority, right? But remember, the scheduler has to make decisions on a continuous basis about exactly when each thread is going to run, right? And you don't want to tell it that, right? Like, you could have a system that, every time it needs to schedule a thread, popped up a little box and said, hey, which one do you want to pick, right? But then you'd probably spend most of your time clicking on that box, right, rather than getting any useful work done, right? So, right, priorities are, it's a great question, priorities are a good way of layering information on top of the system, but the scheduler is still in charge, right? Because you don't want it to be otherwise, right? Because the decisions the scheduler is making, you don't want to have to make, right? Okay, all right, so, imagine this is my system, these are my little thread guys, right, all color coded and nicely named, and this is my ready queue, okay? And at the start of time, right, all my threads are ready to run, okay? There's nobody on the waiting queue. So what would be the simplest thing to do? Let's say, again, I have no priorities, I have no ability to examine the state of the machine, I have no information about the future, the past, the present. What can I do? If all that falls away, what's left? What are some things that I can still do in order to produce some kind of ordering? I'm gonna ignore you for a few minutes, great. Gina? Yeah, I could randomly pick a thread, right, and I could just repeat that process ad infinitum, right? What's another thing I could do? What does your system currently do? Well, that's what I'm gonna do, right? I'm gonna allow each of them to run for a period of time, then I'm gonna stop it, right? But, Masakazu, what can I do? I could pick randomly, or I could just 
do what, wanna help him out? Should you just run them in the order that we're picking? Yeah, I could just run them in some sort of arbitrary order, right, so your system is kinda, yeah, it's first in, first out, and as long as I'm on the ready queue, I'm just processing the ready queue in order, right, so yeah, so here's my random scheduler, I start T3, I let it run until it, so, yeah, remember, this is a nice example, so in this case, I set up T3 on the CPU, I allow it to run, and at some point it finishes, right, or its time quantum expires, right, this is, remember, this is preemptive scheduling, so I am not gonna let T3 run forever, like somebody said, I'm gonna give it a period of time it's allowed to run, and at some point, so what happens here, right, I set up T3, I let it run, and then what happens, what does that dotted line represent? Yeah, but what else does it represent, Mocta, right, so I'm gonna perform a context switch, but how did I get a chance to do this in the first place? Yeah, so either the thread called yield, which Gino pointed out, which in this case it didn't, and you don't know that, but I do, because I know T3 very well, but in this case what happened is that there's a timer interrupt that fired on the system, right, so a timer interrupt fired, the scheduler ran, it said T3's quantum is over, right, T3 is still ready to run, so where does it go back to? It goes back on the ready queue, okay, so now what do I do, what's the next thing I do, let's say I'm doing a random scheduler, who can predict the next thread to run? Hopefully nobody, right, so okay, I pick T5, right, just arbitrarily, I let T5 run, now okay, so that's the end of T5's quantum, what happened here? Thornton, why didn't T5 finish its quantum? It's got some time left, why did it stop, Tom? So it could have felt like stopping, what do we call that? Wembley, it could have called yield, it could have said, hey, I do feel like stopping, right, what else could have happened?
Tom got it, well, I'm gonna perform a context switch, but if it didn't finish its quantum, what did it probably do? Yeah, it performed a system call, or it blocked, right, so this thread is no longer able to run, right, if it called yield, it could go back on the ready queue, I think, if I remember what happened, it's X on the slide, it did not, right, it blocked, it performed a system call, now it's waiting for something, so it goes on to the waiting queue, okay, so okay, we can, I think you guys understand how this works, T2 runs till its quantum is finished, where does T2 go? On the ready queue, now T4 runs, T4 also is going to block, and now I'm back to T3, right, T3, it looks like it's going to continue to run, and I don't know how long I dragged this example out for, but, oh, look, okay, there's another interesting thing, so okay, so what happened here, right, so T1 ran, didn't finish its quantum, so it means that it did what? Kevin, it blocked, right, so where is T1 going to go? It's going to go to the waiting queue, but what just happened? There were two things that happened, right, yeah, so what happened to T5, right, they don't have to be competing for resources, right, at some point, yeah, assume, yeah, whatever T5 was waiting for happened, right, so T5 was sitting there waiting for some I/O to complete or a network packet to arrive or whatever, and it happened, right, so now T5 is ready to run, okay, and, but I don't have to, but again, this is the point, I don't have to run T5, so it turned out I ran T2, T2 blocked, okay, so you guys, who understands this example?
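The walkthrough above can be sketched in a few lines of Python. This is just an illustrative toy, not any real kernel's code: the thread names and the particular sequence of events are taken from the example, and the "ready pile" is deliberately an unordered set, since random scheduling implies no ordering.

```python
import random

# Toy model of the example above: threads move between a ready pile
# and a waiting pile. Names T1..T5 and the events are illustrative.

ready = {"T1", "T2", "T3", "T4", "T5"}   # ready pile (no ordering: random pick)
waiting = set()                           # threads blocked on I/O etc.

def schedule():
    """Pick the next thread to run at random from the ready pile."""
    return random.choice(sorted(ready))

def quantum_expired(t):
    """Timer interrupt fired: the thread is still runnable, back to ready."""
    ready.add(t)

def block(t):
    """The thread made a blocking system call before its quantum ended."""
    waiting.add(t)

def wakeup(t):
    """Whatever t was waiting for happened; it becomes ready, but it does
    not have to run next -- it just rejoins the ready pile."""
    waiting.discard(t)
    ready.add(t)

# Part of the walkthrough: one thread runs out its quantum, one blocks.
t = schedule(); ready.discard(t)
quantum_expired(t)          # back on the ready pile
t = schedule(); ready.discard(t)
block(t)                    # blocked: onto the waiting pile
wakeup(t)                   # later, its I/O completes: ready again
```

The key point the sketch encodes is in `wakeup`: leaving the waiting state only makes a thread *eligible* to run; the scheduler still decides who actually runs next.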
Okay, good, okay, so here's an example of, you know, again, what we would call random scheduling, right, I choose a scheduling quantum, right, that's the maximum time I allow a thread to run, and I choose a thread to run at random from the ready pile, right, I don't really have a queue if I do random scheduling, I just have a pile of threads lying there, a queue implies some kind of ordering, and I run it until it blocks or its scheduling quantum expires, or what else? What else could happen here? A cruel omission, yeah, it could complete, okay, that's good too, what else could happen? There's several things that are missing here, Navya, it might yield, is that what you said? Good, yeah, I heard yield or leave, yeah, so it could yield, right, and there's a few things missing here, right, and when a thread leaves the waiting state, I just put it back on the ready pile, right, and that's normally what happens, so it's just an important thing to point out, just because I finished waiting, doesn't mean I have to run next, right, I just end up in the ready queue or ready pile, and the system then gets to make a decision about who gets to run, right? So round-robin scheduling is very simple, right, in this case I establish an ordered queue, I put a thread on the tail when it's ready to run, when it gets to the head, I pop it off, I run it, and if its scheduling quantum expires, I put it back on the tail of the ready queue, again I have a queue here because I have ordering, and if it blocks, I just put it into the waiting area until it's ready to run again, and then it could, so if it leaves the waiting state, what could I do?
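The round-robin policy just described can be sketched the same way; here the ready structure is a true queue, so the ordering matters. Everything in this sketch is illustrative (names, outcomes), and the choice of where a woken-up thread re-enters the queue is just one of the possible answers to the question above.

```python
from collections import deque

# Toy round-robin: a real FIFO queue this time. A thread that exhausts
# its quantum (or yields) goes back on the tail; one that blocks goes
# to the waiting set.

ready = deque(["T1", "T2", "T3"])
waiting = set()

def run_next(outcome):
    """Pop the head, 'run' it, then dispose of it based on what happened:
    'expired' or 'yielded' -> tail of the ready queue; 'blocked' -> waiting."""
    t = ready.popleft()
    if outcome in ("expired", "yielded"):
        ready.append(t)
    elif outcome == "blocked":
        waiting.add(t)
    return t

def wakeup(t):
    # One possible policy: append the woken thread to the tail.
    # Putting it at the head (or back where it was) is equally valid.
    waiting.discard(t)
    ready.append(t)

run_next("expired")   # T1 -> back on the tail
run_next("blocked")   # T2 -> waiting
run_next("expired")   # T3 -> back on the tail
wakeup("T2")          # T2 rejoins the tail of the ready queue
```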
There's a couple of different things, Robert, yeah, or I could put it at the head or whatever, I just have to find somewhere in the queue to put it, right, so yeah, there's a couple, I could put it at the head or at the tail, or I could, you're right, I could also put it back to where it was, right, if I had some arbitrary ordering, right. All right, so again, these are what we'll think of as kind of the know-nothing scheduling algorithms, right, these require no information, and they also accept no user input, right, so there's no priorities, there's no information about the past, the present, or the future, there's no prediction, and these are not necessarily very interesting scheduling algorithms, usually we just use these as kind of strawmen to compare other things to, right. Okay, wow, man. Sometimes I know it's me who wrote these slides, sometimes. So one of the things that happens here, right, so when we start talking about interactivity, okay, one trick, I shouldn't say it's a trick, right, one thing that scheduling algorithms can look at, so somebody pointed this out before and it was a great observation, right, one thing we might wanna know is whether a particular thread is involved in an interactive task, right, at that point in time, right, threads may go from being involved in an interactive task to being involved in a more CPU- or resource-intensive task, right, but at a particular moment in time, I might wanna say, is a thread involved in an interactive task or not, right. One thing I can use to try to determine whether or not that's true is to see if the thread blocks before its scheduling quantum expires, right, why would that give me some clue about interactivity, okay. Yeah, so my argument is that an interactive thread or a thread that's involved in an interactive task is more likely to not use its entire scheduling quantum, it's more likely to block before its scheduling quantum expires, why is that?
Yeah, so this is important, right, so let's say I have, for example, a thread that is watching the mouse, right, or that is watching a particular menu or something, so, or let's say, actually even better than that, I have a thread that's reading keystrokes from the keyboard, right, you're typing and that thread is drawing the screen, okay. What does that thread spend its time doing? Like, what's the three or four things that thread does? Yeah, Greg, give me the first one. Well, no, no, so this is just this thread, right, so don't worry about context switches, it will be context switching with other threads, but what does this thread do? It waits for you to press a key, okay, so it's blocked, and then when you press the key, what does it do? It just paints the screen and then waits again, right, so let's say, yeah, let's ignore painting to the screen as a blocking operation, but it would be, right, but basically it sits there, it waits for you to press something, it does a little bit of work, and then waits again, right, so a thread that does that will normally run, do a little bit of work, not exhaust its quantum, and then block again, so this is one of the tricks that old Linux schedulers used to try to identify interactive threads, right, they look for threads that don't use their full scheduling quantum, right, so if your thread finishes before it uses its scheduling quantum, the scheduler assumes that it's an interactive thread, so in contrast to that, and we'll finish with this today, what does a, let's say you have a thread that's computing the digits of pi, right, does it ever not use its quantum? If it finishes computing the digits of pi, right, it'll stop using its scheduling quantum, right, but in general no, right, like a CPU-bound or a resource-bound task will just run through its entire quantum, and the operating system will always have to yank it off the CPU, right, every time, you know, it's like, oh man, that stupid pi thread, it ran out of CPU time again, right, whereas
the, you know, the keyboard input thread will not do that very often, right, so this is an important thing, and this is something we'll talk about when we talk about Linux scheduling on Friday, and we'll finish these slides as well, so I'll see you guys then.
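The heuristic described above, a thread that keeps blocking before its quantum expires is probably interactive, while one that always burns its full quantum is probably CPU-bound, can be sketched like this. The quantum value and the "more than half" decision rule are illustrative assumptions, not the actual old-Linux policy.

```python
# Hedged sketch of the interactivity heuristic: classify a thread from
# how long it ran each time it was scheduled, relative to the quantum.
# QUANTUM_MS and the majority rule are made-up illustrative values.

QUANTUM_MS = 100

def classify(cpu_times_ms):
    """Guess whether a thread is interactive or CPU-bound from its
    recent per-dispatch CPU times (in milliseconds)."""
    used_full = sum(1 for t in cpu_times_ms if t >= QUANTUM_MS)
    # If it rarely exhausts its quantum, assume it's interactive.
    return "interactive" if used_full < len(cpu_times_ms) / 2 else "cpu-bound"

# The keyboard-echo thread: wakes, does a little work, blocks again.
print(classify([3, 5, 2, 4]))        # interactive
# The digits-of-pi thread: yanked off the CPU every time.
print(classify([100, 100, 100]))     # cpu-bound
```

Real schedulers refine this considerably (a thread can shift between interactive and CPU-bound phases), but the core signal, voluntary blocking versus involuntary preemption, is exactly the one described in the lecture.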