All right. Good morning, everybody. It's Wednesday. Why don't we all stand up and, yeah. Everybody. There we go. A little bit of a stretch this morning. So Friday is the exam. Today we're going to finish segmentation, we just have a few slides left, and then we will go over the practice exam from last year, or at least the portions of it that are relevant. We'll talk a little bit about the solutions and how to think about these things. First of all, I want to finish talking about segmentation, because there probably will be a question about segmentation on the exam. On Monday, we talked about different ways of efficiently translating virtual addresses to physical addresses. We had goals for this. The goal was that the operating system would tell the hardware periodically, not every time, because that's too slow, how to translate virtual to physical addresses, and the hardware would do that translation on the fly very efficiently. This is important because we are translating every address: every time a user program, or even, frequently, the kernel, uses a memory address, the hardware has to translate it. So this needs to be fast, and there also needs to be a way for the operating system to set the policy. We're talking about mechanisms for address translation, but those mechanisms have to be flexible enough to allow the operating system to dictate the policy. We talked about base and bounds. Base and bounds is potentially the simplest scheme, and we talked about how it wasn't a good fit for our address space abstraction: base and bounds gives each process one contiguous piece of physical memory, but our address space abstraction produces a virtual address space that's very, very sparsely populated with actual data. Mapping that onto base and bounds didn't work. The way we extended this idea and made it fit the address space abstraction better was by introducing segments, that is, allowing a single user process to have multiple base-and-bounds pairs. It's a pretty simple extension of the original base and bounds idea. Let me see if I have that nice picture of the address space back here. Here we go. So we can have one segment for the code, one segment for the heap, a segment for the stack, a segment for any libraries that might be loaded, et cetera. And we can avoid all the wasted space we were going to have with base and bounds, all that internal fragmentation. There's one additional piece of information I need, because in base and bounds the implicit starting place of the one segment per process was address 0. Now, for each segment, I record: where does it start in the virtual address space? Where is it mapped to in the physical address space, the base physical address? And how large is it? So I have a virtual start address, a physical base address, and a bound. To check whether a particular virtual address is valid, I look to see whether a segment exists for this process that the virtual address is inside. If the answer is yes, the address is considered valid. If the answer is no, an invalid address is being used, and I need to decide what to do, potentially that kaboom graphic.
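To make the bookkeeping concrete, here is a minimal C sketch of the per-segment state just described. The struct and field names are hypothetical (real segment registers look different), but these are exactly the three quantities a segment needs, plus the validity check.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-segment record: the three pieces of state the
 * kernel hands to the MMU for each segment. */
struct segment {
    uint32_t vstart; /* where the segment starts in the virtual address space */
    uint32_t pbase;  /* base physical address the segment is mapped to */
    uint32_t bound;  /* length of the segment: a size, not an address */
};

/* A virtual address is valid only if some segment contains it. */
static bool seg_contains(const struct segment *s, uint32_t vaddr)
{
    return vaddr >= s->vstart && vaddr < s->vstart + s->bound;
}
```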
And then, to translate, I compute the offset into that segment. I do that by subtracting off the segment's starting virtual address, and I apply that offset to the base physical address. So here is a graphic showing how this is done. At the beginning of time, the MMU has no state; it knows nothing. It sees a process using virtual address 0x10000. What happens next, Andrew? Yeah, it's going to produce an exception. It's going to tell the kernel: hey, the process tried to translate this address, and I don't know what it is. So let's say this is a valid address. What does the kernel have to tell the MMU, Paul? Well, it needs to say that it's valid, but is that enough? What else does the MMU have to be able to do before this instruction can complete? Jeremy? Yeah, it needs to know how to translate it. Again, there's no silent failure here; I'm using memory, so either the process dies, or the address is translated transparently by the MMU. In this particular case, let's say the kernel tells the MMU: yes, in fact, this process does have permission to use this virtual address, and that virtual address is inside a segment that starts at virtual address 0x10000. The base physical address, where this data is stored in real, actual physical memory, is 0x43000. And the bound is 0x1000. So again, our check says: is this address inside a segment? Is it between the segment start and the segment end? The segment start is 0x10000, and the segment end is 0x11000 in the virtual address space. So this is a valid address. What does it translate to? Now, here's a good question. Is the bound a physical address or a virtual address? I hear physical. I hear virtual. I hear lots of answers. Who wants to challenge the premise of this question? Rumbly? No. Who wants to say it's a virtual address? Frank, you want to say it's a virtual address? No. It's neither. It's just a length, the size of the segment. It applies to both the physical and the virtual side: the segment has the same length in the virtual address space and in the physical address space. So the bound is just a length, a scalar quantity. It's not an address, and it cannot be translated. It's used to check that an address is valid and to determine the size of the segment. So Frank was partly right: this does translate to a physical address. What address does it translate to? Tal, you think it's what? 0x10000 plus the base, so 0x53000? Who thinks that's the correct answer? Anybody else want to put up a different proposal for how to translate this? Let's go back a slide. How do I translate a virtual address? I take off the segment start from the virtual address, and I apply the result to the base. Think about it: there's some piece of the virtual address space, and it maps to somewhere in the physical address space. I need to compute how far into the segment this address is, and then apply that offset to the base physical address. So back here, who wants to give this another shot? Yeah, Nick? 0x43000. How did he do that? This is the start of the segment, and this is the address I'm translating. I take the address, I subtract the start. What do I get? Zero. I add that to the base address, and I get 0x43000. And again, this is designed to express the fact that this segment in the virtual address space maps to some piece of physical memory.
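Continuing the sketch above, the translation step is just two operations. With the lecture's example segment (vstart 0x10000, pbase 0x43000, bound 0x1000), translating 0x10000 gives offset 0 and physical address 0x43000; the tempting wrong answer, 0x53000, comes from adding the entire virtual address to the base instead of just the offset.

```c
/* Translate a virtual address already known to be inside segment s:
 * compute the offset into the segment, then apply that offset to the
 * segment's base physical address. */
static uint32_t seg_translate(const struct segment *s, uint32_t vaddr)
{
    uint32_t offset = vaddr - s->vstart; /* how far into the segment */
    return s->pbase + offset;            /* NOT s->pbase + vaddr     */
}
```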
How large is this piece of physical memory? Look, how large is the segment? Swetha? Yeah, we're talking about segmentation, Sean. 0x1000, yes, 0x1000 hex, which is 4096 in decimal. That's how large the segment is. So the distance between the start and the end of the segment is 0x1000, in both the virtual and the physical address space. And there we go: 0x43000. What about this next address? What happens if the process tries to translate it? Alyssa. Yeah, so what does the MMU have to do? Well, do we know if it's valid or not? It has to ask the kernel, because this address does not fall into the segment the MMU knows about. That segment starts at virtual address 0x10000 with a bound of 0x1000, so it extends to virtual address 0x11000, and this address is not inside that. So the MMU has to ask the kernel. And now let's say the kernel says: oh, yeah, I hadn't told you this yet, but the process also has another segment. It starts at 0x100, extends for 0x500, and is currently located at 0x16000 in the physical address space. So how do I translate this address? Remo. Subtract the start; that gives me what? 0x300? OK. And then what do I do with that? I add it to the physical base. And so what do I get? 0x16300, yep. So this maps somewhere else entirely. The physical base for the first segment was 0x43000; the base for this one is 0x16000. So this segment is more towards the front of the virtual address space, and, well, actually, you're right: these two happen to map in order; they're just not drawn in order here. So I took the virtual address, I subtracted the starting address, and then I added that to the base. Once you do a couple of these, they're not too hard: subtract the beginning of the segment, add that to the base physical address. What about this one, 0xDEADBEEF? What happens here? Jeremy? Yeah, no, I mean, is this an invalid address? It just happens to spell something funny, right? What happens? Bethany? No. I mean, is it inside a segment that the MMU knows about? No. So what does the MMU have to do? Well, does the MMU know whether such a segment exists or not? Yeah, it has to ask the kernel, right? And in this case, because this is a funny address, I just killed the process. But I don't want to make you prejudiced against 0xDEADBEEF. 0xDEADBEEF could be a perfectly valid address. On your OS/161 kernel, though, 0xDEADBEEF will never be a valid user address. Why is that? Maneesh, what's that? I can't hear you. No, it's not greater than 2 to the 32: anything with 8 hex digits is at most 0xFFFFFFFF, so it fits in 32 bits. But where would this address be? Remember the MIPS memory map, Thor? It is outside the 2-gigabyte virtual address space that user processes are allowed to translate. Remember, user processes can translate virtual addresses from 0x0 up to 0x80000000, and this is bigger than that. So if a user process tried to translate it, it would always land somewhere in the kernel's portion of the 4 gigabytes, and the process would be killed. But again, don't prejudice yourself against 0xDEADBEEF: on certain systems it may be a perfectly translatable address. OK, I think that's the end of what I want to cover on segmentation.
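Putting the pieces together, here is a hedged sketch of the MMU-side lookup, including the MIPS rule that user addresses must lie below 0x80000000. The constant name and the linear scan are illustrative; real hardware has a small fixed set of segment registers rather than a loop.

```c
#define MIPS_USER_TOP 0x80000000u /* user half of the 4 GB MIPS address space */

/* Scan the segments the MMU currently knows about.  Returning NULL
 * stands in for "raise an exception": the kernel then either supplies
 * the missing segment or kills the process.  0xDEADBEEF always dies
 * on OS/161 because it sits above MIPS_USER_TOP, in the kernel half. */
static const struct segment *seg_lookup(const struct segment *segs,
                                        int nsegs, uint32_t vaddr)
{
    if (vaddr >= MIPS_USER_TOP) {
        return NULL; /* never a valid user address */
    }
    for (int i = 0; i < nsegs; i++) {
        if (seg_contains(&segs[i], vaddr)) {
            return &segs[i];
        }
    }
    return NULL; /* unknown to the MMU: ask the kernel */
}
```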
So, as a nice segue, why don't we do the segmentation-related question from the practice exam first? Let's see if I can get this as big as possible. There we go. OK, this is going to give me a hard time, isn't it? All right, everybody see that? We just did this; there's one little extension here, which I think you'll be able to make sense of. So here's a potential segmentation question from the practice exam. This is the segmentation base and bounds table that's currently loaded into the MMU for the process that's running. On exams, this is why they call me Jeffrey the Beneficent: I typically use base-10 arithmetic, so no hex addition. Hex is part of real life, but I remember sitting there on exams trying to remember what D plus A equals, and it's just painful. So hopefully this gives you an easier time: just do arithmetic the way you were taught, with your 10 fingers, and you'll be fine. OK, so here's our table, and here are four potential queries to the MMU from the process. These are addresses the process is trying to use. I've divided them into loads and stores, and you'll notice that the segments are annotated with permissions, which is something I can do: I can tell the MMU that a segment is marked read-only, or read/write, or whatever. OK? So, a load from address 1,200. Yeah, what happens? And, yeah, you also have to tell me where from. So you're saying this would result in a successful load; from what physical address does the load come? And so what's the answer? 700. How many people think that's right? OK, how did we do this? We took 1,200 and looked in our segmentation table. We found a segment that the address is inside, so we know this address is valid: it's between the base, which is 1,000, and the top of the segment, which is the base plus the bound, or 1,450. Then, to translate it, we took the offset into the segment, 1,200 minus 1,000, which is 200, and we added it to the physical offset, and we got 700. Finally, what's the last thing we have to do to make sure this instruction can proceed? Yeah, Nick? Check the permission. This segment is marked read, and this is a load, so we are all good. OK? What about a store to address 140,000, Richard? First tell me: is this a valid address? Is there a segment that contains it? Simon? Yeah, that's exactly right. So there were two things we had to check. We had to check whether it's valid, and it turns out this address is inside this segment: it's between 130,000 and 141,000, right near the top, it turns out. And we mapped it to a physical address by computing the offset, which is 10,000, and applying it to the physical offset: 10,000 plus 20,000 gives us 30,000. And this is a store, and the segment is marked read/write, so we're done. All right? Let's do maybe one more. Let me see which one. Well, OK, what about number three? Yeah, sure, your question first: the offset is computed by taking the virtual address and subtracting the base of the segment, so 140,000 minus 130,000. Yeah, that's why it's 10,000. So, what about a load from 36,788? Tom? Yeah, essentially what happens here is that the MMU raises an exception and asks the kernel whether a segment exists there. Maybe there is one, maybe there isn't, but the MMU cannot translate this address; it has to ask the kernel for help.
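As a check on the arithmetic, here is a sketch that encodes the two segments whose numbers can be fully recovered from the worked answers. The bounds and permissions are inferred from those answers, and the third segment is left out because its physical base is never stated; treat the table values as reconstructions, not the actual exam.

```c
#include <stdint.h>

enum { PERM_R = 1, PERM_W = 2 };

struct seg_entry {
    uint32_t vbase, bound, pbase; /* decimal, as on the exam */
    int perms;
};

/* Reconstructed from the worked answers above. */
static const struct seg_entry table[] = {
    { 1000,   450,   500,   PERM_R          }, /* load 1200    -> 700   */
    { 130000, 11000, 20000, PERM_R | PERM_W }, /* store 140000 -> 30000 */
};

/* One MMU access: find the segment, check the permission, translate.
 * Returns -1 to stand in for "raise an exception, ask the kernel". */
static int64_t mmu_access(uint32_t vaddr, int want)
{
    for (int i = 0; i < 2; i++) {
        const struct seg_entry *s = &table[i];
        if (vaddr >= s->vbase && vaddr < s->vbase + s->bound) {
            if ((s->perms & want) != want) {
                return -1; /* e.g. a store to a read-only segment */
            }
            return (int64_t)s->pbase + (vaddr - s->vbase);
        }
    }
    return -1; /* no segment contains this address */
}
```

Under this reconstruction, mmu_access(1200, PERM_R) returns 700, mmu_access(140000, PERM_W) returns 30000, and mmu_access(36788, PERM_R) faults because no segment contains the address.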
What about a load from 84,100? Robert, you can't really see it from back there. Yeah, well, but what was your answer, based on your limited visibility? He thinks it fits into the third segment. Is he right? Yeah, it does. Wow, no visibility. I'll give you a blurry version of the exam tomorrow; maybe you'll do better. OK, so 84,100 fits into this segment: it starts at 82,000 and ends at 85,000. We compute the offset, 2,100, and we apply it to the physical offset. Arithmetic. All right, finally, the last one: a store to address 1,000. What happens? Yeah. Yeah, this will cause an exception. The MMU will say: I know how to map this address, but this operation is not permitted inside this segment. And again, this is another case where the MMU will actually ask the kernel what to do, because there are cases where the kernel might mark a segment as read-only but later relax that and actually allow writes to it. So in this case it would say: hey, kernel, there's this operation on this segment; is it OK or not? Yeah. Yeah. I have a few questions. Sure. Yeah, I will make that clear, and I think on the exam that I wrote for today, I was careful to make sure it says virtual base. That's a good point. Yep. If you tried to load address 85,000, would that be a valid address? Oh, gosh, corner cases. So the question is whether the bound is inclusive or not, and I'll make sure that's specified on the exam. I think the address would actually have to be strictly below the top to be inside the segment, so that one would fail. So there's a corner case here: is it less than, or less than or equal to, the base plus the bound? It's a good question. Certainly, we won't ding you for that if we don't make it clear. Yeah, Tim. Yeah, so the question is whether it's strict. If you look at the slides, I think they said less than. So in this particular case, that would fail, because 85,000 is not less than the base plus the bound; it is equal to it. There certainly has to be some agreement about that between the kernel and the MMU. Otherwise, you could have addresses that map into two different segments, and that would not be good. All right, good questions. Let's do a different problem. Does anybody have any requests? What's that? Yeah, numbers one through seven. All right. Well, that's something I wanted to do. Let's see. We haven't talked about page tables, so there's not going to be a question on that. This question will be there; OK, we'll come back to it. I think it's a valid question. So what about question number six? I'll give you guys a minute to think about it. You've seen this practice exam before. Explain how multi-level feedback queues can starve processes, and propose a solution to this problem. Wembley, take the first part of the question. Right. So what? I mean, that is what happens, right? But how can that lead to starvation? Right, so remember, with MLFQ, if I block before the end of my quantum, I move up, and if I run to the end of my quantum, I move down. That by itself doesn't have to lead to starvation. So what happens, for example, if I have just a couple of CPU-bound threads running? Let's say I start them at the top queue. What happens to all of those threads? Where will they end up? Yeah, correct. Well, OK, they will get a chance to execute, but what queue will they end up in? I have three CPU-bound threads in an MLFQ implementation that are just never going to yield. Yeah. They're going to end up at the bottom, right? Will they starve?
No, there are no other threads in the system, right? So when the MLFQ scheduler runs, it looks at the queues and says: there's nothing in the top queue, there's nothing in any other queue, so I'm just going to keep running these threads. So that doesn't lead to starvation. What if I have three I/O-bound tasks, or three interactive tasks that are sleeping frequently? Where will they end up? They'll all end up at the top, right? Will anybody starve? No, they're all running together. So again, describe a case where I could see starvation. I have more? OK, I have more, right? Yeah, Frank. Right, so here's the thing to keep in mind. When MLFQ works, why does it work? Why does a low-priority thread ever get a chance to run under MLFQ, given that I'm always picking the high-priority threads first? Yeah, so the idea with MLFQ is that the reason a CPU-bound task or thread that drops to the bottom queues ever gets a chance to run is that the threads in the top queues are supposed to be sleeping a lot. There are supposed to be times when the scheduler looks at the queues, and despite the fact that there are threads in the top queue, they are not ready to run: they're on the waiting queue, waiting for some I/O to happen. So it says: well, I'm going to ignore that guy, ignore that guy; they're all waiting, they're all sleeping. That's when those CPU-intensive tasks get their chance: when all the I/O-intensive tasks are asleep. OK, so clearly this question implies that starvation can happen, so let's keep thinking about how, and come up with a concrete example of how this would work. Yeah, Albert. What's that? Right, so if there is always an I/O-bound thread ready to run. Right, that's exactly how starvation will happen. But what will lead to that condition? Give me an example of a case where that could happen. Nick, is your name Nick? John. Matthew. What is it? It's always one or two people. Mike, I'm not sure. So embarrassed. Still learning names. Jeremy. OK. Yeah, so again, the idea is that there's always an I/O-intensive task that's ready to run. OK, but why would this happen? What's that? No, because remember, the idea is that these tasks are always going to be at the top: an I/O-intensive task will always run for a short period of time and then block. Damn. OK, so there could be a lot of them, right? If I have many, many, many of them, then the probability that there's always one ready to run is pretty good. But what's the other important variable here, the one nobody has identified yet? What do I/O-intensive tasks do? They wake up, they compute briefly, and then they sleep. And while they're sleeping, that's when the CPU-bound threads potentially have their chance. So what's an important aspect of their behavior that we need to think about when we talk about starvation, Paul? That's not what I'm thinking of. Sean? How long they sleep, right? Even if I have a lot of interactive tasks, if they sleep for long periods of time, say my interactive tasks wake up, compute for one time unit, and sleep for 1,000, then even if there are 100 of them, there will be periods of time when they're all sleeping. And when they're all sleeping, that's when those CPU-intensive tasks get their chance.
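That scan order is the whole mechanism, so it's worth writing down. A minimal sketch, with hypothetical structures (a real implementation would keep proper ready queues rather than a flagged linked list per level):

```c
#include <stddef.h>

struct thread {
    struct thread *next;
    int ready; /* 1 if runnable, 0 if sleeping (e.g. waiting on I/O) */
};

#define NQUEUES 8
static struct thread *queues[NQUEUES]; /* queues[0] is the top queue */

/* Pick the next thread to run: scan from the top queue down and take
 * the first runnable thread.  A CPU-bound thread stuck in the bottom
 * queue runs only when everything above it is asleep, which is why
 * short-sleeping I/O-bound threads can starve it. */
static struct thread *mlfq_pick(void)
{
    for (int q = 0; q < NQUEUES; q++) {
        for (struct thread *t = queues[q]; t != NULL; t = t->next) {
            if (t->ready) {
                return t;
            }
        }
    }
    return NULL; /* everyone is sleeping: run the idle loop */
}
```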
On the other hand, even if I have a small number of I/O-bound tasks, if their sleeps are very short, I can starve the CPU-bound tasks. Let's say I have two blocking tasks, but they compute for one time unit and block for one time unit. With a strict MLFQ implementation, I'm going to run them, they're going to block before the end of their time quantum, so they stay at the top, but they're back on the ready queue before you can say whatever. So even a few tasks like that, even if I only had three or four of them, could still starve my CPU-bound threads, because there's always an I/O-bound thread that's ready to run. So the amount of time an I/O-bound thread spends sleeping is extremely important in figuring out whether or not you're going to have starvation. Any questions about that? It's an important aspect of MLFQ. Oh, yeah, well, that's what we're talking about. So the other thing we talked about with MLFQ is that I need to decide how to handle yield. If I yield, the problem is that I'm right back on the ready queue. So I probably don't want to reward a task for yielding, because then, if I'm smart, my way to game the system is just to compute, compute, compute, and yield right before I get to the end of my time quantum. I'm immediately ready to run again, so I can essentially remain in that queue forever. That's not good. So I probably need to actually look at blocking behavior: I need to reward threads that move from the ready queue onto a wait channel or a wait queue of some kind. But again, how long they spend waiting is kind of important. All right, and we talked in class about one solution to this problem that's kind of gross. Does anyone remember what it is? Nothing. Yeah, it's like that concept in the Old Testament of the Jubilee year: every seven years, everybody takes all their money, puts it in one big pile, and then it's redistributed evenly between everybody again. I don't know, today in America that kind of sounds like a nice idea. But that's kind of what we're going to do. Periodically, we just say: OK, we've rewarded you guys for long enough; we're going to throw you all back in the same queue. And then the CPU-intensive tasks will get their chance again, because now they're competing on equal terms with all the I/O-bound tasks. Any other questions about this? Sorry for my bit of a socialist reference there.
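Continuing the MLFQ sketch from above, the Jubilee fix is only a few lines: every so often, empty every lower queue back into the top one. The function name and the idea of driving it from a timer tick are assumptions.

```c
/* Periodic priority boost: move every thread back to the top queue so
 * CPU-bound threads compete on equal terms with the I/O-bound threads
 * again.  Call this every N timer ticks. */
static void mlfq_boost(void)
{
    for (int q = 1; q < NQUEUES; q++) {
        while (queues[q] != NULL) {
            struct thread *t = queues[q];
            queues[q] = t->next;
            t->next = queues[0]; /* push onto the top queue */
            queues[0] = t;
        }
    }
}
```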
OK, let's see here. I think there was one more of these I wanted to look at. How are we doing on time? 10 more minutes. Well, let's see. That one's not that interesting. What about this guy? Oh, I think that one's kind of obvious. So why don't we look at the long-answer questions for the last 10 minutes? OK, why don't we look at the priority inversion question? This is a neat question, I think. It won't be on the exam, sadly, because you've already seen it. But you might want to think about the interaction between synchronization primitives, priorities, and scheduling in general. So here's the issue. We talked about how priorities are an input into the scheduling process, designed to give the scheduler some external feedback about what to do. And some schedulers, like the rotating staircase scheduler we talked about, use priority as a way of dividing up resources but have nice ways of guaranteeing that starvation doesn't happen, because within every large time quantum, every thread in the system gets a chance to run regardless of its priority. That's one of the nice things about schedulers like the rotating staircase. But in certain types of systems, priorities are used in a much stricter fashion: as soon as a high-priority thread is ready to execute, it will always preempt a low-priority thread. And this is sometimes done for good reasons. We talked, for example, about real-time systems. If the high-priority thread is the one that's going to stop the robot's wheels from turning, and the low-priority thread is the one sending a message back to Earth, then the high-priority thread might say: you know what, that message is not going to be very useful if we're at the bottom of a ditch, so maybe you need to stop doing that, even if it causes you to fail, because I need to run so that I can steer. But this can have ugly interactions with synchronization primitives. And this question is not at all made up: this kind of problem actually happened, and there's a famous story about it. I think it was one of the Mars missions that suffered from this problem. What was that? Pathfinder. Yeah, Pathfinder. The Mars Pathfinder spacecraft had this exact issue. So again, here's our scenario. We have a system with three threads: one at high, one at medium, and one at low priority. Now let's say we have a shared resource on the system that's protected by a lock, and these locks are the kind we talked about: while one thread is holding the lock, other threads have to wait. This is a non-preemptible resource; if I have the lock, you've got to wait for me to finish. OK. So, and again, the solution to this is out there, and people may have read the story already, but can anyone describe a situation that would cause the high-priority thread to become blocked essentially indefinitely? Our goal here is to schedule the system so the high-priority thread always runs when it's ready to run. Can anyone describe a scenario on this particular system where an ugly interaction between this synchronization primitive and this strict priority scheme would cause the high-priority thread to be blocked, potentially forever, or at least for an indefinite period of time, a period completely outside of its control? Anybody want to pose a guess? Yeah? OK, right, so this is exactly right. Here's what happens. The low-priority thread acquires the lock, or holds the lock for some reason: it had a chance to run, and it grabbed the lock. The high-priority thread needs to access that shared resource, so it needs the lock as well. So now it starts waiting on the low-priority thread. Now, let's say that was our whole situation: just those two threads, high priority and low priority. The low-priority thread grabs the lock and starts doing something, and the high-priority thread has to wait.
What would happen? What would normally happen, Jen? Right, the high-priority thread can't run, because the low-priority thread is blocking it. But assuming the low-priority thread is running, it's going to finish: the critical section is bounded, it does whatever work it needs to do, it drops the lock, and the high-priority thread gets to run. What causes the problem here is the medium-priority thread, because the medium-priority thread will always preempt the low-priority thread. This is a strictly scheduled system: when the medium-priority thread is ready to run, the low-priority thread has to wait. And now what can happen is that the medium-priority thread is running and doing a bunch of random stuff, the low-priority thread is blocked waiting for the medium-priority thread, and the high-priority thread is blocked waiting for the low-priority thread. And again, this is exactly what we didn't want, because the high-priority thread is now essentially blocked by two threads with lower priority. Does everyone understand how this happens? On Pathfinder, if I remember correctly, the medium-priority thread was some sort of communication-related thread that was in charge of communicating with Earth. And it ran a lot, and when it ran, it took a while. The people who designed the system thought: well, if the high-priority thread needs to stop it, that's OK, it'll just preempt it. But the high-priority thread can't run. So the medium-priority thread sits there sending its long love letters back to Earth, talking about all the cool rocks it found, and in the meantime the high-priority thread is saying: I'd like to move so we can find some more rocks, and it can't. Does everyone understand how this happens? And again, keep in mind, the medium-priority thread is critical here. Without it, you can't have this sort of indefinite wait. Yeah? Well, remember, the low-priority thread just holds the lock, but it can't make forward progress because it's blocked by the medium-priority thread. What we want is for the low-priority thread to be able to finish whatever it's doing. You could talk about ways the high-priority thread could just whack the low-priority thread, take the lock from it, and run off, but let's assume that's not safe. So, do people also understand why this is called priority inversion? What we want is for the high-priority thread to preempt the low-priority thread; what's happening is that the high-priority thread is being prevented from making progress by two threads with lower priority. So what's a potential solution to this problem? Now all these hands go up, great. Yeah. Yeah, so most solutions to this, and there are a number of mechanisms for doing it, we could talk about a few of them, involve some form of priority inheritance. Priority inheritance is the concept that if a high-priority thread is waiting, for whatever reason, on some other thread or task, then the thread that's preventing it from making progress inherits the priority of the thread that's waiting. So let's say we have a priority-based system: what would be one way of doing this? What are some requirements of this solution? What do I need to be able to detect, first of all? It's a nice idea, the right idea, but what do I need to be able to do in order to implement it? Yeah? I love the logic. Yeah. So the first thing I need to be able to do, and this might be a little bit subtle, is detect when this is happening. The high-priority thread could be blocked for all sorts of reasons: it might be doing I/O, it might be doing other things. But in this particular case, when it goes to acquire the lock and is put, to put this in OS/161 terms, on a wait channel, I have to be able to figure out who holds the lock and what the priority of the thread holding the lock is. And if the priority of that thread is lower than that of the thread trying to acquire the lock, I need to do something to boost it.
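Here is what that might look like in OS/161-flavored C. This is a sketch under assumptions: the thread and lock structures, the base_priority field, and the lock_wait/lock_wake_all helpers are all hypothetical stand-ins for the real lock internals.

```c
struct thread {
    int priority;      /* current, possibly boosted, priority */
    int base_priority; /* priority to restore when the boost ends */
};

struct lock {
    struct thread *holder; /* NULL when the lock is free */
};

/* Hypothetical helpers: sleep until the lock is released; wake waiters. */
extern void lock_wait(struct lock *lk);
extern void lock_wake_all(struct lock *lk);

/* Acquire with priority inheritance: if the current holder has lower
 * priority than us, boost it to ours, so the medium-priority thread
 * can no longer keep preempting it while we wait. */
static void lock_acquire_pi(struct lock *lk, struct thread *self)
{
    while (lk->holder != NULL) {
        if (lk->holder->priority < self->priority) {
            lk->holder->priority = self->priority; /* inherit */
        }
        lock_wait(lk); /* e.g. sleep on the lock's wait channel */
    }
    lk->holder = self;
}

static void lock_release_pi(struct lock *lk, struct thread *self)
{
    self->priority = self->base_priority; /* drop any inherited boost */
    lk->holder = NULL;
    lock_wake_all(lk);
}
```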
So, we talked just very briefly about lottery-based schedulers, where I can avoid starvation by handing out tickets to a bunch of processes based on their priority and then holding lotteries: I pick a ticket at random and run that thread. So how could you do priority inheritance with a lottery-based scheduler? What would be the mechanism? With a priority-based scheduler, I just identify the thread I'm waiting on and give it my priority. I say: until you're not blocking me anymore, you are now a high-priority thread, because you're forcing me to wait. How would I do this with a lottery? There's kind of an elegant solution to this using lottery schedulers. Jeremy, I'm going to ignore you. Remember, in a lottery-based scheduler, everybody has some number of tickets, and the system holds a lottery to choose the thread to run. Yeah, you remember. Yeah, sometimes all of them, right? So the high-priority thread might say to the low-priority thread: here, I've got all these tickets, and I can't use them because I'm on the waiting queue, waiting for you. So here are all my tickets; go play the lottery. Now you're going to win. Now you have a much higher probability of running. Again, a lottery scheduler does not suffer from starvation, but a lottery scheduler could still have this priority inversion problem, or at least a version of it. On a lottery scheduler, the medium-priority thread would not be able to prevent the low-priority thread from running forever, but it would slow the low-priority thread down quite a bit, because if we assume the medium-priority thread has more tickets, it's going to win more often and get a chance to run more often. Meanwhile, the high-priority thread is still waiting. So lottery scheduling would make this problem a little bit nicer: it wouldn't lead to indefinite starvation, but it would still have the priority inversion problem, where a high-priority thread is waiting on a low-priority one. So with lottery schedulers, one of the nice things proposed when the idea was introduced was exactly this idea of transferable tickets: I give you my tickets, and now you can play with my chips.
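A sketch of the transferable-tickets version, with hypothetical names: the mechanism is just moving the waiter's tickets onto the holder, then taking them back when the lock is released.

```c
struct lthread {
    int tickets; /* this thread's share of the lottery */
};

/* A blocked waiter loans all its tickets to the lock holder, making
 * the holder far more likely to win upcoming lotteries and finish its
 * critical section.  Returns the loan size so it can be repaid. */
static int tickets_loan(struct lthread *waiter, struct lthread *holder)
{
    int loaned = waiter->tickets;
    holder->tickets += loaned;
    waiter->tickets = 0; /* a sleeping thread can't use them anyway */
    return loaned;
}

/* Called when the lock is released: give the loaned tickets back. */
static void tickets_return(struct lthread *waiter, struct lthread *holder,
                           int loaned)
{
    holder->tickets -= loaned;
    waiter->tickets += loaned;
}
```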
All right, any questions about this type of question? Yeah, Sean? No, not necessarily, right? I mean, any scheduler to some degree becomes more challenged as the number of threads increases. Again, we talked about this: if I have four cores and four threads, I'm in good shape; I don't really need to schedule. But as the number of threads starts to go up, the number of tasks starts to go up, dependencies between tasks start to emerge, and any scheduler starts to struggle. But for a lottery scheduler it's the same mechanism: I just have more tickets and hold a bigger lottery. That's a good question. Any other questions about this question, or any other ones on the practice exam, or anything else? All right, so: Friday. We will start at 9 a.m. sharp, just to prepare you. I don't know quite how to do this, but I think I'm going to ask people to leave coats and book bags maybe in that hallway back there, or at the front of the room, just so I don't have to worry about it. We had an incident last year at the midterm that I don't want to repeat. So please bring a pen. Please bring your UB ID card. Please be here at 9. If you miss it, sorry. If you're late, you'll have to work a little bit faster. We'll see you on Friday. Good luck.