A little bit of excitement this morning. It's kind of fun. OK, so today we are, Guru, how are we doing up there? OK, so today we're going to finish talking about scheduling and synchronization primitives. Can you give me some tape to backfill off this? It's just a rough morning. Wow, I don't think we're going to get through all that today, especially now that we've lost a bit of time. We're going to finish talking about semaphores, which is the synchronization primitive that we didn't get to on Friday. We'll have a couple of slides with advice about approaching synchronization problems and choosing the correct primitive. That's something that we're also going to cover this week in recitation, so you'll get some exposure to it. The synchronization problems on assignment one are designed to give you some practice in doing this, but they will not be the last time that you confront a synchronization problem in this class and have to choose the correct primitive.

And then we're going to start talking about scheduling. We've been working our way up from the hardware level, through context switching, and now synchronization, which is one of the consequences of concurrency. So finally we're going to get to the policies that drive how threads are actually scheduled, in what order they're scheduled, et cetera. This is kind of the end of our unit focused on the CPU. We'll finish with scheduling, and that will probably take us through the week. All right? Let's see if this is going to work today or not. It's just a discombobulating sort of day, isn't it? I don't know if this is on or not. Oh, that's better.

OK, so any questions about the stuff that we covered last week? Last week we talked largely about synchronization and some of its problems, and then we presented locks and condition variables and walked through an example. Any questions on that before we do a little bit of review and then go on? Yeah? Can the primitives you build for assignment 1 just spin? So this is a good question for either Piazza or recitation, but I'll try to give a short answer. Some of the primitives that you're going to implement for assignment 1 are built on top of spin locks. That's not always a bad thing, but most of those primitives should use waiting as their fundamental mechanism. Spin locks are required in places, but usually just to allow the thread to safely get onto a wait channel. So if your primitive spins, that's probably not a good thing. The spin locks that we give you will spin, and that's fine; they'll spin briefly in most cases, and sometimes not at all if there's no contention. OK, any other questions? While I do surgery on this.

All right, so let's do some review. Concurrency is the illusion, and this is going back a few lectures, actually, concurrency is the illusion that what? Peng? Right, concurrency is the illusion that multiple things are happening at the same time, or that more things are happening than actually could be, given the number of cores on your system. And what's the contrast to that? Atomicity is the illusion that what? I'm going to pick on people in the back whose names I don't know that well. You. Yeah? What's your name? Pal. You want to help them out? Kind of. Someone want to help expand on that answer?
I'll let you. Yep, back row. Yeah, so there's a set of things that either all happen or don't happen. From the perspective of other threads on the system, something that requires multiple instructions looks like a single operation. So atomicity allows us to build up actions that look atomic but actually perform a number of different steps that, at the lowest machine level, are not atomic: reading and writing multiple memory locations, multiple different adjustments to global state. But those things should look like they all happen at once. OK?

So remember, in your programming assignment one, what are the things you have to keep in mind about the threads on your system? What are the types of things that you can and cannot assume about those threads? See, that's why I called on Will. Back row. They may run in any order? Did I hear an answer? So: they can run in any order. You can't make any assumptions about ordering unless you do something about it, right? Unless you force a particular ordering, the threads on your system may run in any order. What's another thing you have to keep in mind, Sean? Right, they may be started and stopped at any time, and they may remain stopped for arbitrary periods of time. What's one other thing, Tim? I can't remember the last one. Nick, do you remember? They can run in any order; they can be stopped and started at any time; they can remain stopped for arbitrary lengths of time. What else? I guess there wasn't another one. Sorry, that was a trick question. They can modify global state. That is true, right? OK, so: they can run in any order, they can stop and restart at any time, and they can remain stopped. See, I knew I was going to do that one day, not necessarily by accident, but I did it anyway.

What's the interface to a lock? What are the two functions that we use when we use locks? Kevin? Acquire and release, right? Lock acquire will put me to sleep until the lock is available, and lock release will not do that. I remember this bug on the slide from last year, and I never fixed it, sorry. Lock release will release the lock and will not sleep, right? What are locks for? What's one of the primary ways that we use locks, Don? To prevent deadlocks? I would argue locks cause deadlocks; without locking, there can be no deadlocking. Yeah, to protect critical sections. Exactly, right? So we lock at the top, unlock at the bottom, and that produces a critical section.

What's the interface to condition variables, the interface that you guys are using, hopefully, right now, Dan? Yeah, and what do those functions do? Exactly, excellent answer. Wait: wait on the condition variable until I am awakened. And signal and broadcast are the two ways that I have of awakening threads. Signal wakes up one; broadcast wakes up everybody. OK, good. What do I use condition variables for? A condition variable is involved with a blank, but it is not a blank. Fill in the blanks with the two parts of its name. Sirach, a condition variable is not a, it's not a what? It's not a variable. It doesn't hold any information. Thornton, what is a condition variable? What do I use it for? Will, do you want to help them out? What is all this notification and signaling and waiting for? Why? Glad we did this review. It's been a long weekend, apparently, for not everybody. Is your name Dan, too? Yeah, what's a condition variable for? That's a good answer: to signal changes to shared state. It's almost as if he's looking at the slide. Exactly.
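Here's that whole review in one minimal sketch. It uses POSIX names so it's runnable; the lock and condition variable API you build for the assignments will look different, so treat the specifics here as illustrative rather than as the course's interface:

```c
#include <pthread.h>
#include <stdbool.h>

/* Shared state: the "condition" the condition variable is about. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;

/* Waiter: sleeps until another thread makes `ready` true. */
void *waiter(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)                        /* always re-check after waking */
        pthread_cond_wait(&cond, &lock);  /* atomically drops lock, sleeps */
    /* lock is held again here, and `ready` is known to be true */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Signaler: changes the shared state, then announces the change. */
void *signaler(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    ready = true;                 /* change the condition with the lock held */
    pthread_cond_signal(&cond);   /* wake one waiter; broadcast wakes all   */
    pthread_mutex_unlock(&lock);
    return NULL;
}
```

Note that the waiter re-checks the condition in a loop after waking: a wakeup means "something may have changed," not "the condition is now true for you."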
So condition variables are a signaling mechanism. Remember, we talked about two goals of thread synchronization: correctness and coordination. Condition variables are a communication primitive, a coordination primitive. They allow threads to indicate safely to each other that certain things have happened. So why are condition variables always associated with a lock? On your system, when you use condition variables, the condition variable interface requires that you hand it a lock. It might do things to that lock, it might not in certain cases, but it might use that lock for some checks here and there. But why? What's the lock for? Right, the lock. What state does it protect, usually? Yeah, the lock protects the variable that holds the condition, or the set of variables, or something about the condition. I think we'll cover this a bit more in recitation this week, and we'll look at examples of where it's dangerous not to pass in that lock or not to hold it. Because if you don't hold the lock, the condition might have changed by the time you call signal or broadcast. That's one of the reasons that we hold it.

OK, deadlock. What is deadlock? When does deadlock occur? Sam, you're getting close. Navya, can you help him out? OK, two threads; Sam started with one or more. You want to continue the process of clarifying the answer? No, you, one row back. What is your name? Navi, thanks. Yeah, so there's a circularity in my waiting graph. That circularity can be very small; it can be just one thread. But there is some circularity that's causing a series of threads to wait for each other to finish, and thus nobody makes progress, because everybody is waiting for somebody else. That's deadlock. That means that we are stuck.

What are the conditions for deadlock? There were four conditions that must all be met in order for deadlock to occur. Yeah, correct: there has to be access to shared state. If I don't have shared state, then I probably don't have locks, and no problem, right? Josh, what's another one? Yeah, I have to be able to request multiple resources independently. Remember the dining philosophers problem: if I had a primitive that would allow me to grab both locks at once, and fail if I couldn't do that, that's a potential solution to that problem. OK, Jen, what's another one? Yeah, circular wait, and that's the one that we most frequently try to address when we look at deadlock: breaking the circular dependency. There's one more, Sarah. Yeah, no preemption. I can't yank a lock away from a thread; I don't have a mechanism to do that. On your system, you do not have a mechanism for forcing a thread to release a lock that it has acquired. If you had that, then you could use it to break out of deadlock.

So, this is good. A single thread can or cannot deadlock? How many people think yes? How many people think no? I should have asked no first. A single thread can deadlock, and we looked at a way that this would happen. It's not necessarily always a stupid programming bug; it usually happens because there's some call graph that the thread doesn't understand that leads back into the same lock. Any questions about this stuff before we go on? I think it was good to do this today. Monday is a good day for review. Some of us, like me, have a little bit of a thick head today.
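The circular-wait condition is easiest to see with two locks. A minimal sketch with POSIX mutexes (the two-lock setup is illustrative; any pair of resources acquired in opposite orders has the same shape):

```c
#include <pthread.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1 takes a then b; thread 2 takes b then a.  If each gets
 * its first lock before the other asks for it, both block forever:
 * a circular wait, with no preemption to break the cycle. */
void *thread1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&a);
    pthread_mutex_lock(&b);     /* may wait forever on thread 2 */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

void *thread2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&b);
    pthread_mutex_lock(&a);     /* may wait forever on thread 1 */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}
```

One standard remedy, which previews the multiple-lock advice later in this lecture, is to pick a single global acquisition order (say, always a before b) so that a cycle in the waiting graph can never form.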
All right, so I briefly want to talk about semaphores. Semaphores are, I would say, the least useful of the primitives that you have and will implement. Most of the problems that you find yourself solving with synchronization primitives for assignment 2 and assignment 3 will not use semaphores. There's one case where a semaphore turns out to be an elegant solution to a synchronization problem that you'll have. Semaphores are essentially shared counters with atomic decrement and increment operations. The semantics of the shared counter are that if I try to drop the value below 0, I wait until that operation can continue without dropping the value below 0. So the value never goes below 0. That's just how semaphores were originally constructed.

There are two operations on semaphores, P and V. You can decide how you want to remember those. P and V are actually short for Dutch? I hope. Shut off the video for a minute so I can lie. Anyway, Danish or Dutch? These were invented by someone we've talked about in the past, as part of a very early multiprogramming system. Anybody remember who this fine individual was? Yeah, Robert? Dijkstra. Yes, this is a contribution from the good doctor. A smart guy, right? He gave us a lot of things, including semaphores with names that people struggle to remember. The easiest way to remember them is to think of P as, I think the word is actually proberen, and I read somewhere it's actually not quite a word; it's a word he made up. So they're made-up words in another language; good luck trying to remember them. But P is probing the semaphore, kind of trying to decrement the count, right? And then V you don't really have to remember, if you can remember P.

So again, the semantics of semaphores: if the value is greater than 0, I decrement the value and return. If the value is less than or equal to 0, I wait until it becomes greater than 0, and then I decrement it. The idea is that the value of the semaphore can never go below 0.

So remember, we did a producer-consumer buffer example on Friday. Afterwards, someone came up to me with a good question: couldn't we use a semaphore to solve this problem? Remember, we had a buffer of fixed size. We had three conditions: the buffer was full, empty, or neither. And we wanted to use those conditions to determine whether producers or consumers would be able to access the buffer or would have to wait. So who thinks you can do this with a single semaphore? Since nobody thinks that, I can call on anybody to tell me why not. Simon, you don't know? Do you want to think about it? Well, what part of it could we do with the semaphore? Imagine that I use the value of the semaphore, the count that the semaphore is holding, to represent the number of items in the buffer. When I produce into the buffer, I V it and raise the count. When I consume, I P it and drop the count. That sounds nice, but what's the problem here? What part of this will work correctly? Remember, I want two things to happen: if the buffer is empty, consumers need to wait; if the buffer is full, producers need to wait. So which half of this can I do with the semaphore as I've just described? What's that? Can I prevent it from being overfilled? So here's the question: will a V operation on a semaphore ever block? Anyone want to guess? Greg? No. The V will never block.
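A toy model of those semantics, built here from a POSIX mutex and condition variable; this is a sketch for intuition, not the implementation you'll write. It shows exactly why P can block and V never does:

```c
#include <pthread.h>

/* Toy counting semaphore: P waits rather than let the count go
 * negative; V always succeeds immediately. */
struct sem {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    unsigned        count;
};

void sem_setup(struct sem *s, unsigned n) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
    s->count = n;
}

void P(struct sem *s) {
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)                      /* would go below 0: wait */
        pthread_cond_wait(&s->cond, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}

void V(struct sem *s) {
    pthread_mutex_lock(&s->lock);
    s->count++;                                /* never blocks */
    pthread_cond_signal(&s->cond);             /* wake one waiting P */
    pthread_mutex_unlock(&s->lock);
}
```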
So Simon, you want to change your answer? Which part of it can I do? I want buffer-full and buffer-empty; which half of this can I use the semaphore for? Right: when the buffer is empty and a consumer comes in and tries to P it, the consumer will wait. So I can get half the problem with one semaphore. How could I use semaphores, plural, to solve this problem? Unlucky person. Don? Yeah, and how would I do that? You're using the words, and I'm like, why is he saying signal and wait? It's because it's on the slide. Yeah: P and V. And essentially, what do those two counts now represent? One of the counts represents the number of items in the buffer, and the other count should now represent what? Yeah? I forgot your name, I feel bad. Alyssa? Yeah, the number of not-items in the buffer: the empty slots. So I have one count that counts up from empty to full and another one that counts down. If I use those two together, both producer and consumer now P one semaphore and V the other. I would have to think about whether there's an ordering issue, but I don't think there is, because the V will never block. The P might block; the V won't. OK, cool. Any questions about that before we go on? I heard a murmur. It's just me, maybe.

OK, so a binary semaphore is a special case of a semaphore that's restricted to values between 0 and 1. We don't give you an implementation of a binary semaphore, but you could very easily build one, or you can use a semaphore in a way that's intended to be binary: a semaphore that, based on your code flow, only ever takes on the values 0 and 1. Binary semaphores can be used as a replacement for a lock. Remember, we had this example before where we were handling lots of money, and I was locking and unlocking around a critical section. I can replace the lock with P and V on a binary semaphore. This piece of code, if I initialize the semaphore to 1, will work fine. When the first thread enters the critical section, it'll P it, the value goes down to 0, and other threads will block. As threads come out, they'll V it and it'll go back to 1. So this will work.

But there's one crucial difference between binary semaphores and a lock. Does anyone want to speculate on what that might be? One critical difference that allows you to use them somewhat differently. Tam, you're very close. The semaphore doesn't need to be released by who? Exactly. So locks have an owner. A lock has an owner, and if you try to unlock a lock that you don't hold, that should be a mistake that your lock catches and screams at you about, probably by panicking or something. Semaphores have no concept of ownership: anybody can P and V a semaphore. That's just how we think about semaphores. And there are cases, and again, the one time that you might consider using a semaphore in this class, where it's an elegant solution to a problem, is one of those cases, where two different threads are going to P and V, and the Ps and Vs are not matched within each thread. In this lock-replacement example, they are matched. But there are other examples where you might use a semaphore to force another thread to wait for something. In fact, this might be a hint on one of the problems for assignment one. The semaphore can be used as a signal to another thread. Jeremy, do you have a question? Or are you going to answer the question? All right.
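Here's what that two-semaphore producer-consumer might look like as a sketch, using POSIX semaphores. One addition beyond what was worked out in lecture: a mutex protecting the buffer indices, since the two counting semaphores handle the waiting but the buffer itself still needs protection if there are multiple producers or consumers.

```c
#include <semaphore.h>
#include <pthread.h>

#define N 16

static int buf[N];
static int in = 0, out = 0;

static sem_t items;   /* counts up from 0: consumers P this     */
static sem_t slots;   /* counts down from N: producers P this   */
static pthread_mutex_t buflock = PTHREAD_MUTEX_INITIALIZER;

/* Call once before starting any threads:
 *   sem_init(&items, 0, 0);  sem_init(&slots, 0, N); */

void produce(int v) {
    sem_wait(&slots);               /* P: blocks when the buffer is full  */
    pthread_mutex_lock(&buflock);   /* protect the buffer itself          */
    buf[in] = v;
    in = (in + 1) % N;
    pthread_mutex_unlock(&buflock);
    sem_post(&items);               /* V: never blocks, wakes a consumer  */
}

int consume(void) {
    sem_wait(&items);               /* P: blocks when the buffer is empty */
    pthread_mutex_lock(&buflock);
    int v = buf[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&buflock);
    sem_post(&slots);               /* V: never blocks, wakes a producer  */
    return v;
}
```

And the unmatched-P-and-V pattern, one thread waiting for another to signal that something has happened, might look like this. Again a sketch of the general pattern, not the assignment's intended solution:

```c
#include <semaphore.h>

static sem_t done;   /* initialized to 0: sem_init(&done, 0, 0); */

void *worker(void *arg) {
    (void)arg;
    /* ... do some work ... */
    sem_post(&done);       /* V from the worker: "I'm finished" */
    return NULL;
}

void wait_for_worker(void) {
    sem_wait(&done);       /* P from a different thread: blocks until
                              the worker has posted */
}
```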
So again, I think semaphores, partly because of the lack of ownership, have slightly looser semantics than locks and condition variables. That can make them a little more challenging to use. As I said, most problems in this class don't lend themselves to semaphores, but there are several cases where you might find one that does. I'm going to skip this example because it's in your code: you can see one example of how we use semaphores in the driver code for your synchronization problems.

OK. So we've presented you with these three tools, and these tools have different semantics. One of the things we're going to touch on briefly right now, and also look at in recitation, is how you choose. When you're approaching a synchronization problem, you've identified shared state, you've realized that you have a correctness problem, or you have some coordination that you need to enable between multiple threads that are going to be accessing that resource. You start to think: OK, what primitive am I going to use? Where am I going to put that primitive in relation to the resources? How many resources will be guarded by a particular primitive? This is something that we'll help you with, and that hopefully you'll get good at. But here's one way to think about it.

One of the first things you can do is identify the constraints, and by constraints I mean things about your system that should remain true. For example, with the producer-consumer buffer problem, a constraint might be that the number of items in the buffer never goes below zero or above the size of the buffer. You might also identify the constraints that if the buffer is empty, consumers can't proceed, and if the buffer is full, producers can't proceed.

Identify the shared state. Maybe this should have been the first step: figure out what it is that you're trying to protect. In the producer-consumer example, the shared state is the buffer. That is the piece of memory, essentially, that multiple threads are going to be accessing, and your job is to make those accesses safe.

Then choose a primitive, and sometimes this is a step that you'll want to come back to. A good way of approaching these problems is to choose a primitive and think about how you would implement the solution using it. If it starts to get really gross, if it starts to feel wrong. One clue that it's getting gross and wrong is if you start having to create a lot of other shared state to hold information about the thing that you're trying to protect. Sometimes that's necessary; other times it means you're not using the right primitive.

This is also important, particularly when using condition variables. For every lock acquire, there should be a lock release. Those are usually easy to get right, because they're usually bracketing a piece of code. CVs are different. When we saw the producer-consumer example, we saw that the waits and the signals had to be balanced across different threads. In that case, both threads both signaled and waited; in other cases, one thread will wait and one thread will signal. But you have to make sure that if somebody goes to sleep, there's someone to wake them up when that condition changes.
And usually the easy way to do this is to find the condition and make sure that when it changes in ways that are significant, there's some sort of signal or broadcast. Also, be conscious of multiple resource allocations. This is not something that will usually bite people on assignment two, but once you get to assignment three, you will find that it starts to become a problem, because there will be several different pieces of potentially shared state that you need to synchronize, and in order to complete a request, you may have to acquire multiple locks. So then you have to start looking into how to avoid deadlock, and we've given you some suggestions for how to do that. And as always, it's a good idea to think about how different threads will interact before you sit down and start to write code. Don't get into the weeds before you have a plan, because no one does good planning in the weeds. It's not a good place to plan.

All right. Any questions about synchronization before we move on and shift gears to talking about scheduling? Yeah, Manny? Yeah. So, with the semaphore: if you use a counting semaphore, which is what we call a semaphore whose count can go past one, you are not creating a critical section. You may create a section that five threads can be executing in at once, but that's not how we usually think about critical sections. We usually think of a critical section as a section of code that only one thread can be inside, and that's part of what allows us to create the illusion of atomicity. So a counting semaphore is not a substitute for a lock. It can be, but I would say use a lock. If you're using a binary semaphore as a lock, I would wonder why you're not using a lock, because a lock gives you a little more help, particularly with correctness. One of the things your lock should probably do is make sure that when you release it, you actually hold it.

Yes. Yes, yeah, I know. People want to do that. You can. I hate that solution, but I'm not going to read your code, and if you get your locks to work and they pass our lock test and you can use them successfully, that's fine. In general, it wasn't our intention that people build locks on top of binary semaphores, but it can be done. It just makes me throw up in my mouth a little. No, I mean, it's not a bad idea; the point of the semaphore is to give you a model for how to do the lock. But yeah, you are right: you have identified an optimization.

No. Yeah, so, May, that's a good question. What if your system deadlocks, particularly in the kernel, where there's no recourse? A lot of you have probably taken a class on real-time and embedded systems. Think about the Mars rover, something that you're going to send far away. When the Mars rover breaks, it's not like a technician from Google walks into the data center, goes to the correct aisle, and pushes the reset button. That thing is far away. So unless you can contract with a Martian on Mars who's going to maintain it for you, you're pretty much in trouble. So does anyone know the classic technique that a lot of these systems use to avoid this problem? Yeah, Jeremy? Well, but what if that thread gets stuck? That's the question. Yeah, frequently there's this idea of what's called a watchdog timer.
And a watchdog timer on these systems is a hardware feature that will trigger a reset if you don't poke it often enough. So a watchdog timer has to be reset all the time. Imagine it as a timer on your system that you can set to trigger a reboot in, say, 10 seconds. Why would you do this? Because if your system ever gets totally hung and nothing happens, then ten seconds will go by, you won't have reset the timer, and the system will reboot. So while the system is running, somebody has to be resetting that timer every 10 seconds, or hopefully more often, telling the hardware: I'm awake, I'm awake. This is a common technique. A lot of hardware devices support some type of watchdog timer, especially embedded systems and these sorts of mission-critical things, because the idea is that it's better to reboot than to be hung. Now, of course, you might reboot and hang again immediately, depending on the kind of condition your system is in. But watchdog timers are a classic technique for getting around this. It's a good question.
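In rough outline, the watchdog pattern just described might look like the sketch below. Everything here is hypothetical: the register address, the timeout, and the helper names are invented for illustration, and real watchdog hardware differs.

```c
#include <stdint.h>

#define WDT_KICK_REG ((volatile uint32_t *)0x1000F000u)  /* made-up address */

void sleep_for_seconds(int s);   /* hypothetical helper */

/* Writing the register tells the hardware "I'm awake"; if no write
 * arrives within the timeout (say, 10 seconds), the board reboots. */
static void watchdog_kick(void) {
    *WDT_KICK_REG = 1;
}

/* Run this on a path that only makes progress when the system is
 * healthy; if the system hangs, the kicks stop and the reset fires. */
void health_loop(void) {
    for (;;) {
        watchdog_kick();
        sleep_for_seconds(5);    /* must be comfortably under the timeout */
    }
}
```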
Yeah, Brian? Yeah, OK, there we go. Very interesting. So how many people thought they'd be learning Dutch in this class? Because he's Dutch. Everything in computer science is in English, except these two things, and Dijkstra's out there. OK, that's not true; there have clearly been lots of contributions from people elsewhere. But I get the sense it's a small country, and they're proud of him.

All right, so let's get started with talking about what scheduling is and why we do it. Who wants to help with a kind of inductive introduction to scheduling? What do you guys think thread scheduling is? Go, Alyssa, front of the room. Scheduling: you know this word, you use this word. Maybe you call it calendaring now because you're Generation Z or whatever. Wembley, what do you think it is? Yeah, that's literally what it boils down to. It's determining in what order threads will run. If I have multiple threads that are able to make forward progress, who should be allowed to use the resources on the machine? When we talk about scheduling, we usually talk about the CPU, because the CPU is really a time-bounded resource, but you can schedule a lot of other things. You can schedule network cards. You can schedule memory. You can schedule disk accesses. We'll talk about disk I/O scheduling a little later, because it's more fun; there's geometry and stuff like that. CPU scheduling is a little easier: there's a line.

So scheduling is the process of choosing the next thread or threads to run on a CPU or a set of cores. There's a date reference there that I should have scrubbed off. I don't know if we're going to talk about multi-core scheduling. Maybe we will, probably a little bit; maybe we'll get there on Friday, maybe it'll be Monday. Most of what we're going to talk about in this class is really single-core scheduling, because you learn to walk before you can fly. Multi-core scheduling is still an area of very active research. It's very interesting, and we'll talk a little about what makes it more difficult, though you can probably imagine some of the things. But we're mainly going to talk about there being one CPU and choosing the next thread to run, as opposed to multiple CPUs, even though we are living in the multi-core era.

Here's an even more existential question: why does the operating system schedule threads? Why is it involved? Yeah. I mean, there are a couple of reasons. First of all, we have more threads than we have cores to run them on, so we have to schedule. And also, this is our job. You put the kernel, your system, in charge of doing this. You said, hey, it's your CPU, you figure out who to run; I'm delegating this to you. You're the boss. So we have to do a good job of this, and application performance on your system can be very heavily dependent on scheduling and the other resource allocation decisions that kernels make.

So here's another question: when does the kernel get a chance to make scheduling decisions? Yeah, Jeremy. Right, if a thread explicitly calls yield. That's one of the reasons that we've littered some of the driver and problem code with calls to yield: we're trying to force threads to give up the CPU so other threads can run, to try to expose synchronization problems. Somebody was talking on Piazza about how they had taken a yield out of one of our tests and then their code worked. Well, that's because you made the test easier; that's why your code suddenly started to work. Your code's broken. The test works. The test is trying to get in there and poke and find all your weak spots. So don't take the yields out. That's why they're there.

Yeah. So when a thread starts: when I call fork, I create a new process with a new thread in it, or I call clone or whatever, and I create a thread in the kernel to do something. What other chances do I have to schedule threads, particularly user-space threads? This is good review. Right, timer interrupts, and really any hardware interrupt will give the kernel a chance to run. The kernel may sometimes decide not to do scheduling, and I think on some modern systems it's possible that the timer fires more often than the scheduler runs. The timer fires, we enter the kernel, and the kernel just checks its watch. It's like, I'll let them run for another timer interrupt, and then we get out of the kernel as fast as possible. There is overhead to doing that, but there's more overhead to running the scheduler.

What's the last one I'm thinking of? We have threads explicitly calling yield; we have timer and other types of hardware interrupts, yeah. Yeah, but how would I do that? What would a user-space thread do, Swetha? What would I do that would cause the kernel to have a chance to potentially schedule a different thread? Jeremy: make a system call. So there's the other case. Remember, when does the kernel get control? Hardware events like timer interrupts, which are specifically created to enable scheduling, right? Other types of hardware events, like network packets arriving and stuff like that. And then software interrupts or exceptions, which are kind of a corner case. But when I call read or write, it's possible that I will not continue to run. In fact, it's quite likely, because those calls go to the disk, which is very slow, and so the kernel's going to put me to sleep until they're finished. Yep, so those are the big ones. Okay.

So, finally, I want to point out a distinction. We don't talk much about cooperatively scheduled systems in this class. But the use of timer interrupts to force scheduling decisions creates what we call preemptive scheduling, right?
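That timer-driven path is worth sketching. Here's a toy outline of a timer interrupt handler deciding whether it's worth running the scheduler; the names and the quantum are hypothetical, not any particular kernel's API:

```c
void schedule(void);          /* hypothetical: pick the next thread, switch */

#define TIME_SLICE_TICKS 4    /* made-up quantum */
static unsigned ticks_since_switch;

void timer_interrupt_handler(void) {
    ticks_since_switch++;
    /* The kernel has control here, but running the scheduler costs
     * more than just returning, so it may "check its watch" and let
     * the current thread keep going for another tick: */
    if (ticks_since_switch >= TIME_SLICE_TICKS) {
        ticks_since_switch = 0;
        schedule();
    }
    /* otherwise, return to the interrupted thread as fast as possible */
}
```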
So preemptive scheduling is the idea that the kernel can preempt, or stop, a user thread from running and decide to run something else. There have been scheduling approaches in the past that were purely cooperative: the kernel never yanked a thread, kicking and screaming, off the CPU. It would always wait for a yield or a system call or something, and those systems potentially suffer from some fairly obvious problems. All right.

So we haven't really talked about yield. When we've talked about user threads, we've largely talked about them being descheduled either because the timer fired and the kernel decided they had run long enough, or because they made a system call. But there is the ability for a thread to voluntarily say: hey, I'm good. I'm done for now. Thanks for letting me run. I could run again right now, but I really don't need to. Why would I explicitly make a call like this? What's the rationale behind it? Wouldn't every thread just always want to run? It's like, let me at it. I want the CPU, I'm calculating the digits of pi, it's very important, and I'm only up to five. Why would I do this? Greg? I don't know if it necessarily frees up RAM, though it could, right? But what does it definitely free up? What resource does yield explicitly release? Yeah, the CPU.

So yield is kind of a nice thing. Yield can be a nice way of threads telling the kernel: yeah, I'm good. I don't need your help with anything. I'm not waiting for any hardware stuff, but I ran for a little while, and now I'm kind of done. And yield, again, is an inherently cooperative mechanism. The only reason a thread would call yield is to voluntarily let the operating system know that it's finished for now. There are no points for calling yield, right? But here's another question: why would a thread call yield? It seems like a nice thing to do. Mr. Nice Guy Thread, all the other threads are going to like that guy because he calls yield a lot. But, you know, operating systems aren't popularity contests. So why would I do this? In what case could this be a good thing for me? Yeah, and where might that other thread be running, frequently, today? Jen, maybe: I'm a user-space application. What's true about most user-space applications, GUI apps, servers? They all use multiple what? Maybe; I'm thinking about something more specific. Tom, did you have a... So why could this be a good thing for a thread? Why might this be a selfish move instead of a purely selfless, trying-to-be-helpful sort of thing? Yeah, Sean. I'm thinking more about what could be true about the thread that's calling yield and the next thread that's about to run. Sean: they might be part of the same process, right? You've got Firefox, eight tabs open. If those tabs realize they're out of useful work to do, they might yield to another thread inside their process. So keep this in mind: not every thread on your system is competing with every other thread. Some of them are working together in the same process.
So getting out of the way might mean that some other thread from some other process that I don't care about gets to run, but it also might mean that another thread in my process that's trying to do something useful gets to run. So this can be a selfish move as well. What's that? Yeah, okay, good question. So remember our thread states: we had running, ready, sleeping, and dead, I think, right? So calling yield puts me into what state? I hear two answers. Well, yield takes me from what state to what state? Running to ready, right? The thread is not waiting for anything. It could run; it could continue to run. Maybe it's just sitting there in a loop waiting for mouse clicks or something, but it's saying: I don't need to run right now. I'm willing to allow another thread to run, if another thread has some useful work to do.

Gina? I don't think so, but that's interesting. That's a good question. Actually, in some user-space threading libraries, this call might just cause the user-space library to run another thread within that process. So some of these yields, in certain cases, might never be seen by the kernel. So it's inherently cooperative.

All right, I think this is a good stopping point for today. We will talk about separation of mechanism and policy on Wednesday. See you then.
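To make that running-to-ready transition concrete, here's a toy sketch of what a yield might do inside a kernel. Every name here is hypothetical, not the course's actual kernel API:

```c
enum thread_state { RUNNING, READY, SLEEPING, DEAD };

struct thread { enum thread_state state; /* ... registers, stack ... */ };

struct thread *current_thread(void);                          /* hypothetical */
void ready_queue_push(struct thread *t);                      /* hypothetical */
struct thread *ready_queue_pop(void);                         /* hypothetical */
void context_switch(struct thread *from, struct thread *to);  /* hypothetical */

void thread_yield(void) {
    struct thread *cur = current_thread();
    cur->state = READY;           /* not SLEEPING: it could run right now */
    ready_queue_push(cur);        /* go to the back of the line */
    struct thread *next = ready_queue_pop();
    if (next != cur) {
        next->state = RUNNING;
        context_switch(cur, next);
    } else {
        cur->state = RUNNING;     /* nobody else is ready: keep going */
    }
}
```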