So, for the first time in six years, somebody came in here today and complained about the music. Clearly it's time to quit. Five minutes a day, dude. Chill out. All right. No, maybe if you guys all scream really loud, I'll come in here again. All right. Anyway, we're disturbing the geology department. You know, the rocks don't make a lot of noise. Apparently they don't like rock music. The jokes just go on and on.

Okay, so let's have class today. A quick Assignment 1 checkpoint: it should be almost done at this point. The deadline is Friday at five. Remember, there's no late credit for this one; get what you can out of the 50 points and move on. The first part of Assignment 2 is due a week later, so there's just no time to realize that you should have started two weeks ago.

All right. Anyway, you've seen this before; that's where we are. We have office hours today, tomorrow, and Friday. Please come in and get help if you need it. Also: how many people have submitted something to test161, or at least run the verify step to make sure things are set up? Okay. Do that. You've got to do that. We're not going to accept late submissions because you couldn't get things set up properly. That's on you.

So today we're going to talk about locks, spin locks, and other simple synchronization primitives.
Hopefully you have some comfort level with these now, because you've been using them. We'll go through an example that's new to you, one you haven't had to solve before, and at some point, maybe on Friday, we'll discuss a little bit about choosing the right primitive. Again, this is something you're gaining experience with: both the stoplight and the whale-mating problems can be solved using several primitives. And actually, if you've already finished one of those problems and you're kind of bored and don't know what to do for the next couple of days, try solving it with a different primitive. Go back and redo it. I mean, save your code, create a branch or something in Git so you don't lose the solution you have, but try doing it with something else and see how it works. Usually what you'll find is that there's one primitive that works really well and leads to a clean solution that's very satisfying, and there's another one that just starts to get gross and makes you feel sad and unhappy. It's not always true; sometimes there are several primitives that can work well in a particular situation, but usually there's one good choice.

All right. In the last class we started to talk about critical sections. A critical section is something we use to bring some control to this wild world of concurrency that we've created. The idea of a critical section is that only one thread can be executing inside it at a time. On uniprocessor systems it was possible to implement critical sections by just making sure that the thread inside the critical section didn't stop running until it left. The reason that works is that there are no other cores on the system; nothing else can be happening, ergo:
Nobody else is running inside the critical section. But that solution is clearly broken on almost every interesting computer today, because every interesting computer today has at least two cores, if not more. Well, I shouldn't say that; I guess if you're interested in Arduinos... is there a multicore Arduino? Okay, there you go. Fine: any interesting computer has at least two cores. So this approach is just of historical interest, and I'm going to leave it behind.

In general, what we need is a mechanism so that other threads will stop before they get inside the critical section and wait in some way or another. We'll talk about two different ways that we can get threads to wait, both of which are familiar to you. Has anyone looked carefully at the code for the spin lock that we gave you? Obviously that's a working primitive. Aren't you curious? Come on, read the code; it's so interesting. I would encourage you to do this: the spin locks we gave you work, obviously, but you might want to get a sense of how they're implemented.

Achieving synchronization on multicore systems requires some type of hardware assist in most cases; usually the hardware itself is going to provide you some help. It's actually kind of interesting: the MIPS instruction set that your compiler toolchain generates, and that System/161 supports, is at this point in time a very strange amalgam of old features that were part of the MIPS R3000. I'm sorry if you guys are getting nervous up here; I'm about to walk into your lap. Okay, I'm going to move so I have more room to wander around. So at this point that instruction set is a mixture of old stuff from the R3000 era and a couple of new instructions that David added that are designed to support synchronization. Those instructions did not exist in the MIPS R3000 instruction set. Why not?
Yeah, that was the '80s. There just wasn't a problem they needed to solve, and so you have this very vintage instruction set with a couple of special instructions that have been ported in from the future in order to make the system work on multiple cores. The idea here is that these special hardware instructions give a little bit of a boost to the software that's using them and make it easier to write the low-level primitives that provide atomicity across multiple cores. Think about it: I've got one thread on one core that's running, and suddenly it needs to communicate something with the threads on the other cores. Doing this safely and efficiently can require a little bit of hardware help. So that's what we do.

There are a couple of varieties of these, and there are probably new varieties out there today. There's something called test-and-set. When I'm talking about instructions now, I'm talking about processor instructions; these are part of the architectural instruction set. These are not C things. And if you look at the code for the spin locks, at some point there's some inline assembly where David is actually using the new atomic instructions that he introduced into the MIPS R3000 variant that OS/161 supports.

Test-and-set works like this: it writes a value into a particular memory location and returns the old value. The reason these instructions are special is that they're guaranteed to work across multiple cores. So let's say there's a memory location whose value is zero when I start, and at the same time four cores hit it with a test-and-set, and the value they're setting is one. What should happen? Four cores, an existing memory location holding zero, and I try to test-and-set it to one. What should happen? Yeah: exactly one of them, and only one. Which one doesn't matter, right?
One of the four that's doing the operation at the same time gets back zero, the previous value, and sets the location to one; the other three all get back a one. So you can start to see how this is something I can use to implement other synchronization primitives. All right, this is just pseudo-C code for the operation.

There's also something called compare-and-swap. Same idea, a little bit different: compare the contents of a memory location to a value that's passed in, and if they're the same, set the location to a new value. Here's some C-like code. I give it a comparison value, so I could say: if it's zero, set it to one. And if it's one, it won't be set, because it didn't meet the first condition. Again, keep in mind that this is atomic with respect to other cores that are trying to execute the same hardware instruction. So if four of these happen at the same time, you can work out what should go on: one core will get the old value back and succeed, and for everybody else the compare-and-swap will fail.

Okay, and this is a cute variant that uses two instructions: load-linked and store-conditional. Load-linked returns the value at a memory address, and then later I have a store operation that will succeed only if that memory address hasn't been changed since I did the load-linked. Again, these are just examples, and you don't have to understand all of them perfectly. Test-and-set is by far the easiest one to understand because it's so simple, but the others lend themselves to different programming paradigms that may make it a little bit easier to implement the things we want. And remember, this one is here because it's what System/161 uses; it's what our MIPS R3000-based architecture supports. So this is code from somewhere in OS/161. Has anyone ever included inline assembly in their C code before? Ah, there we go. You can do it.
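To make the semantics concrete before we look at the assembly: here is a small C sketch of the two primitives just described. This is a hedged model using C11's stdatomic operations (which compile down to hardware instructions like these), not OS/161's actual code, and the function names `test_and_set` and `compare_and_swap` are mine, for illustration only.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Test-and-set: write `newval` into *addr and return the old value.
 * On real hardware this is a single atomic instruction. */
static int test_and_set(atomic_int *addr, int newval)
{
    return atomic_exchange(addr, newval);
}

/* Compare-and-swap: if *addr == expected, store `newval` and report
 * success; otherwise leave *addr unchanged and report failure. */
static bool compare_and_swap(atomic_int *addr, int expected, int newval)
{
    return atomic_compare_exchange_strong(addr, &expected, newval);
}
```

If four cores race to test-and-set a zeroed location with one, exactly one call returns zero and the rest return one, which is the property everything below relies on.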
Here it is; here's one of the possible syntaxes. These are the two instructions that are actually doing the work: this is the load-linked, and this is the store-conditional. All right, any questions about these underlying hardware primitives? Yeah.

You did this for 341? I have no idea. But keep in mind this instruction is from the future: at some point, three or four years ago, David said, I want to add multicore support to OS/161, because I want you guys to be able to do assignments with multiple cores, which is really cool. As far as I understand, the basic MIPS R3000 instruction set didn't really have support for these instructions, so David had free rein to modify System/161 to support the instruction he wanted to use, and this is the one he chose, or the combination of them. My point being, simply: I have no idea whether this has any basis in the official MIPS R3000 instruction set or whatever it was based on, because that instruction set was not necessarily multicore-ready.

Okay. So a lot of processors provide some variant of one of these instructions, and what we do is use that as a starting point to build simple synchronization primitives, and then we use those to build more complicated synchronization primitives. These are also, in a sense, equivalent: if you have one of them, you can implement some of the others. If I have a test-and-set, I can use it to create an atomic compare-and-swap as well.

Okay, so going back to the bank example that we had before, let's try to use a bare test-and-set on it. This is never what you would actually do, because remember, the goal of these atomic hardware instructions is to build synchronization libraries on top of them; you would never use these instructions on their own unless it was some sort of special case. But let's try to modify
our example to use a test-and-set. Remember what test-and-set does: it returns the old value of a variable and sets it to a new value.

Okay. What three lines of code here represent the critical section I'm trying to establish? It's the first three, and you can think of this as the part of the code where the balance is inconsistent: my local copy of the balance is inconsistent with the global copy. Once I've synchronized the two, I'm done, but those three lines of code are the place where I'm making a modification that can't get interrupted in the middle. So what I can do is add a test-and-set at the top: I set the flag to one so that other threads know that I'm inside the critical section, and then at the bottom I set it to zero so that other threads know that I'm finished.

Does this work? No. I'm just using the value of the flag. You can think of the value here as a boolean indicating whether or not there's a thread inside this critical section. So does this correctly mark whether there's a thread inside the critical section? Yeah: it sets it to one and then clears it. That's fine. What does it not do? Yeah: if two threads come in here, the second one is just going to blow right past the test-and-set and keep going. So I need some way to wait at the top. How would I do that using a test-and-set? It's possible. Yeah, so I can do this: if the test-and-set is equal to one... so what does it mean if the test-and-set returns one? Someone's inside the critical section. So what I need to do is wait. Do I have any useful way to wait
here? Yeah: what I essentially end up having to do is write a while loop that repeatedly tests the condition until it returns zero. And remember, the hardware is helping me here; the hardware is guaranteeing that the while loop will return as soon as somebody else sets the flag to zero. Now, when I'm busy-waiting this matters less, because even if the two interleaved a little differently I'd be okay, but in other cases it's important, particularly when I'm waiting using sleep.

So this will work. What's wrong with it? Okay, yeah, there are a couple of problems with it. What's the first thing that could happen here? What's wrong with that top loop? What is it accomplishing? Yeah: it's doing nothing, right? So what you would have here is something like this. Oh, yeah, I don't know, whatever, it's a meme. Oh, Heath Ledger; he was very good. He was good at that role. Yeah, so this is the problem: wasted cycles.

Okay, now, there are two cases here. In one case it's really bad, although it will probably still work, and in the other case it's just marginally bad. What's the really, really bad case? Remember, the operating system is swapping threads back and forth among a smaller number of processors to create the illusion of concurrency. So what's the worst thing that can happen here?

Yeah. Well, yeah, that's terrible. And that's a really great point, and it's especially true when you're working on your kernel: if a thread crashes in your kernel, it's going to panic. That's probably what's going to happen. So on some level, if the kernel is panicking and rebooting, who cares that there's a lock that's held?
It won't be held when I reboot. But you're right: whenever you use synchronization primitives in real code, you have to be very careful to handle errors inside of them properly, so that the lock doesn't get stuck. That's a good point.

What's something else? Good point. Yeah, if my biweekly deposit gets held up by all the Amazon purchases I'm making, then at some point I'm going to have a problem, right? But let's assume that this is fair; let's assume that once I'm in line and I start waiting, there's no real rhyme or reason to who gets the lock next.

Yeah: imagine I have one core. Or imagine that, for some reason, all the threads running this code are running on one core. One thread gets into that loop at the top. What is it waiting for? It's waiting for some other thread to run, drop the lock, and get out of the way so it can get in. But if it's sitting there spinning on that core, what's not happening? That other thread is not running, right? So I am actively impeding the thing that I'm waiting for; I'm waiting for something, and I'm making that thing take longer. Bad. Okay?

So that's the worst case: either I only have one core, which, again, I told you never happens, or for some reason this thread is in the way, and the longer I wait, the longer useful work doesn't get done. Now, there is one case in which this is okay. What am I hoping is going to happen? Let's say I have two cores. When I get into this while loop, what do I hope is happening? The other core is running the thread that's inside the critical section. So if I'm spinning on CPU 1 and CPU 2 is running the other thread, then he's still making progress. Sorry, I always use male pronouns when I'm referring to threads. Just think about it: they're dumb, they do one thing at a time, and they run sequentially until they die.
So that's why I use male pronouns for threads, okay? So the thread is running on the other core, and it's just going to run until it finishes, and then the other thread is going to make progress. That's what I hope is happening.

Has anyone run into this when doing their OS/161 code? You may have gotten an error message about deadlock on a spin lock. The spin lock implementation we gave you actually enforces this: if a spin lock is held by, say, core zero, and another thread on that core tries to acquire it, it'll panic. And, you know, it's not necessarily a bug in every case; I could just spin on that core until the OS decided to run something else, and then it would make progress. But the semantics of spin locks are that you're not supposed to hold them for very long, and so if I go to sleep or get descheduled while holding one, that ends up being a problem. Okay?

So, the problems with this approach: busy-waiting wastes cycles, and I can actually slow down the threads that are trying to do the thing that I'm waiting for. This is not good.

All right. We've gotten to this point... I mean, you guys have been using locks, right?
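Putting the pieces together: the flag plus the busy-wait loop amounts to a spin lock. Here is a hedged sketch, in user-level C11 with pthreads rather than kernel code, of that pattern guarding the bank-balance update. The names (`spin_acquire`, `deposit`) and the workload are invented for illustration; this is not OS/161's spinlock implementation.

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int flag = 0;    /* 0 = free, 1 = someone is inside */
static long balance = 0;       /* the shared state being guarded  */

static void spin_acquire(void)
{
    /* Busy-wait: keep doing a test-and-set until we are the thread
     * that saw the old value 0.  These spins are the wasted cycles
     * discussed above. */
    while (atomic_exchange(&flag, 1) == 1)
        ;
}

static void spin_release(void)
{
    atomic_store(&flag, 0);    /* let the next waiter through */
}

static void *deposit(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_acquire();
        balance = balance + 1; /* critical section: a read-modify-write
                                  of the shared balance */
        spin_release();
    }
    return NULL;
}
```

With two threads running `deposit`, every increment is protected, so no updates are lost; remove the acquire/release pair and the final balance comes up short.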
The real reason to use locks as a synchronization primitive is that you can associate them with one or more critical sections. The typical use of a lock is to guard some piece of shared state, so that the threads accessing that shared state can do so safely. That's pretty much the primary purpose of a lock.

Now, I don't want to confuse you: it's possible to have multiple critical sections associated with the same lock. Let's say I have some sort of shared data structure. It might have one method that adds an item to it; I might have another method that removes an item from it; I might have a method that looks up an item. Imagine the implementation of a Python dictionary, for example. That has to be implemented; there's actual code that implements it. I know, it's magic, right? But when you do dictionary-whatever, there's actual C code (or C++, probably) that has to run to implement that, and it has to work under concurrency. It probably has some locking that it does to make sure this works, because there are operations it's doing on this structure that all need to be done together, so that the structure's state doesn't get corrupted.

The way you use a lock is pretty simple, and you've started to do this: acquire it when I enter the critical section, drop it when I leave. Now, what we've described today is something called a spin lock, and the idea here is that there are two ways to wait. One is that I wait actively, by continually checking the condition that I'm waiting for. The other way is passive: I go to sleep, and I have an arrangement with the thread that's using the resource, or inside the critical section, that as soon as it's done,
it's going to wake me up. So those are my two options.

Yeah, so this is a spin lock. Normally spin locks are not used on their own to solve synchronization problems. Now, it's possible that in some later assignments you may want to use spin locks in a few places, and we'll talk about why. By the time we're done, hopefully I'll have given you a little bit of a guide about when to use spin locks and when to use a traditional sleeping lock like the one you've implemented. But they're commonly used as a way of building more complicated synchronization primitives; you have seen them used now in both the semaphores and your lock implementation.

All right, so let's go back and solve this in a different way. Now let's create this lock object. What happens if I call lock_acquire while another thread is inside the critical section? It has to wait somehow, right? So I have to wait, and if I wait actively I'm spinning; if I wait passively I'm sleeping. Okay, I think I've said a lot of this already. I can wait actively by continuing to hold the CPU and checking this condition over and over again (this is busy-waiting, or active waiting), or I can have some sort of arrangement where I go to sleep, get out of the way, let the other thread that's inside the critical section make progress, and have it wake me up when it's finished, so that I have my chance to go through the critical section myself.

Now, there are some cases where spinning is actually the right approach. So let's think about it: why would I never want to spin on a single-core system?
We just talked about this: because I'm slowing down the thing I'm waiting for. On a single-core system there is no way that spinning can ever be useful, because whatever's inside the critical section needs to run, it needs to do work, and if I'm using the CPU, it can't. So spinning is never the right thing to do on a single-core system.

Spinning can be right if the critical section is short, and if I'm completely, one hundred percent certain that I'm not going to sleep. For example, if the critical section involves doing something like I/O, where my thread is going to have to wait for some really slow hardware device like a disk to do some stuff, then by definition the critical section is not short anymore; it's really long. The critical section being short means I cannot do any sleeping.

Now, we haven't talked about context switches yet, because we swapped things around a little bit and are doing synchronization first. But when we talk about threading, you'll see that there is work required to stop a thread. Remember, the operating system is producing this illusion of concurrency by rapidly shuffling threads onto the CPU and off the CPU. That is not done magically; there is work that has to be done. I have to do some housekeeping every time I move a thread onto the CPU or off of the CPU.

So here's an example. Let's say two threads are running on two different CPUs (for some reason they're both called thread 1), and they're both waiting for a resource. Thread 1 acquired the resource first; it's inside the critical section. CPU 2 ran a thread that tried to acquire that resource and decided to sleep. So it said, okay, I'm going to go to sleep; that thread stops running, and there's all this work I have to do to save the state of that thread so that I can restart it safely. And in the meantime, the thread on CPU 1 is done.
In this case, imagine the critical section is only a couple of instructions. By the time the thread on CPU 2 wakes up, the amount of work I had to do to put it to sleep and wake it up again is actually larger than the amount of work it would have taken to just wait actively for a couple more cycles. Does this make sense? In one case I'm wasting cycles by waiting actively; in the other case I'm wasting cycles by going to sleep and being woken up again, because those are not free operations. So if the critical section is really short, then on average I actually win by waiting actively, because I waste fewer CPU cycles. I also get the resource faster: if I had waited actively, I would have had the resource right here, but by the time I go to sleep and wake up again, all this time has gone by.

Conversely, if the critical section is long, involves lots of operations, or could ever potentially involve going to sleep, waiting for I/O, or waiting for some sort of slow resource, then it's always better to use a sleeping lock. Here's an example, the converse of the other one: I have a long critical section, the thread on CPU 2 is waiting actively, and in this case at some point (and it doesn't have to be very long) it becomes better to go to sleep, yield the CPU, let somebody else do useful work, and then arrange for the first thread to wake me up later. Again, this is always true whenever there's any sleeping involved, and that's why there are those checks in your current system: if you hold a spin lock and do something that goes to sleep or calls yield, the next attempt to grab that spin lock will panic.

Questions about this? At this point, I feel like you guys understand all this stuff.
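One way to summarize the trade-off just described: spinning costs roughly the time remaining in the critical section, while sleeping costs roughly two context switches (one to deschedule the waiter, one to put it back on the CPU). Here is a toy back-of-the-envelope model; the function names and all the numbers are made up for illustration, since real costs depend on the hardware and the OS.

```c
/* Toy cost model for the spin-vs-sleep decision. */

/* Spinning burns CPU for as long as the holder stays in the section. */
static long spin_cost(long remaining_section_cycles)
{
    return remaining_section_cycles;
}

/* Sleeping pays for two context switches: deschedule plus wake-up. */
static long sleep_cost(long context_switch_cycles)
{
    return 2 * context_switch_cycles;
}

/* Spin only when the expected wait is cheaper than the switches. */
static int should_spin(long remaining_section_cycles,
                       long context_switch_cycles)
{
    return spin_cost(remaining_section_cycles)
         < sleep_cost(context_switch_cycles);
}
```

The model captures the lecture's two cases: a critical section of a couple of instructions favors spinning, while anything long (or anything that might sleep) favors the sleeping lock.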
I actually don't even have to give these lectures. I may not be physically able to continue giving them if I keep running into that thing. All right.

Okay. What you've seen, and what you've implemented, is pretty common functionality, particularly the wait channel. The notion of a wait channel is pretty powerful, and you may not realize how general-purpose it is. The wait channels you're using are not used just in the context of the locks and condition variables you're using them for; they're used all over the place in the kernel. And the idea is pretty simple: essentially, it allows a thread to wait and give the system an identifier. That identifier is typically something like a memory address that has some special significance. For example, it could be the memory address of the lock structure that I'm waiting for. If I was waiting on I/O, it could be the memory address of the I/O queue that the operating system is using internally, or something like that. It's just a name, but of course it's a machine name, so it's in hex and it has lots of ones and zeros in it. This essentially allows me to wait for something to happen, where that something is, you know... "denomified"? Is that a word?
I'm trying to use words outside my comfort zone now. Anyway: the thing I'm waiting for is named by some sort of key, and then later I can wake up one or more threads that were waiting on that name when something happens. You can imagine, again, that this is a very general-purpose primitive. It allows me to wake up and put to sleep threads that are waiting on locks, condition variables, I/O. You'll use this later, when you do Assignment 3, to wait for particular memory copies to be done to various parts of memory so that a process can restart, whatever. But this is the basic operation of a wait channel.

And I just want to point out that a lot of this stuff (actually, this is true for all of these things)... has anyone ever used these primitives in user space? No one has ever used a lock in a user program? Anyway, we need better systems courses. Yeah, so you can do all of this in user space. I just want to make sure people understand: we're teaching you this because you're going to need it in your kernel, but this is something you can use when you implement, say, a web server. This is something you use when you implement any piece of multithreaded code. Every good programming language and every good framework has these types of functions. If you program in Python, there are Python locks, mutexes, condition variables. If you program in Go, there are equivalents; sometimes the names are a little different (Go calls its locks mutexes). The terminology differs, but the functionality is essentially the same, and the same goes for waiting. All of this I can do at user level as well, and use it to manage user-level concurrency; we just happen to be using it to manage kernel-level concurrency in this particular class.

All right, questions about locks? Yeah. Yeah, so the wait channel happens to be implemented as a FIFO queue.
That's how it works in OS/161 currently, I think, but it's not guaranteed by the interface. That's very important, and this is a good example of the hazards of making assumptions about how things work. If you look at the wait channel interface, all wake-one says is that it wakes up one thread. It does not say which thread it wakes up. It doesn't say the thread that was waiting the longest, or whatever; it just says it wakes up a thread. So if I went and changed the wake-one implementation to wake up a random thread, your code had better still work.

Now, I will make a sad admission here: our solution for reader-writer locks violates that interface contract. We make assumptions about the lower-level code. I did that, and I'm okay with it; I'm a grown-up now, and I can make those sorts of trade-offs. But it would break if I changed the implementation of wake-one. That's a good point: if the interface doesn't say something specifically, users of the interface are not supposed to assume it. That's what makes it possible for the underlying implementation of the interface to change. It's a good question. Yeah.

Well, keep in mind... spin locks and sleeping locks... well, I guess that's true. I guess I could implement a lock that allows me to wait for it either through spinning or through sleeping. Whether or not I would use that to do prioritization, I don't know. Let me see where I am in time, because I want to mention... okay, we'll come back to that when we talk about the scheduler and priorities.

Other questions? Yeah. Yep. That's a good question: why is it safe to use spin locks inside the semaphores, inside your locks? Well, okay: the wait channel is going to unlock the spin lock before you go to sleep, right? But again, come back to the critical section argument.
There is a critical section inside of your lock implementation: a set of things that have to happen atomically. Specifically, there's a condition you have to check, and then you have to go to sleep, and if you don't hold a spin lock there... you can try this. If you're done with your locks, do the following: take that spin lock you're grabbing and move it inside of the check. Now, the wait channel will break if you don't hold the spin lock before you call it, but it's easy to get around that: instead of grabbing the spin lock, checking the condition, and then going to sleep, check the condition, then grab the spin lock and go to sleep. And what you'll find is that there's a race condition there. So that spin lock is creating a very small critical section inside your lock, and the fact that it's small is what's important. And obviously, if you couldn't implement a sleeping lock without needing another sleeping lock, you'd be in trouble: I'd be calling myself recursively, and it would just be bad. So this is an example of how we use a spin lock to build these higher-level primitives. Yeah, question?

Yes, that's true: there is busy-waiting on the way to getting onto the sleep queue, potentially. Yeah, absolutely, there can be a little bit of busy-waiting in the process of checking the condition that determines whether or not I can acquire the lock. Remember, I'm busy-waiting to make sure that the condition doesn't change between when I check it and when I go to sleep. So if one thread grabs a lock and then goes off and computes 100,000 digits of pi, other threads that try to acquire that lock will do a small amount of busy-waiting, and then they'll be on the wait queue. But you're right, that's a great point: it does require a little bit of busy-waiting to get onto that wait queue.

Other questions? Yeah.
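To see why the spin lock has to be held across the check-then-sleep step, here is a hedged user-level sketch of the sleeping-lock pattern. A pthread mutex stands in for the internal spinlock and a condition variable stands in for the wait channel; like OS/161's wchan_sleep, pthread_cond_wait atomically releases the mutex and sleeps, which is exactly what closes the race just described. This mirrors the structure of an OS/161 lock but is not its actual code, and the names are illustrative.

```c
#include <pthread.h>
#include <stdbool.h>

/* A sleeping lock built from a small guarded critical section plus a
 * wait channel. */
struct sleeplock {
    pthread_mutex_t guard;   /* stands in for the internal spinlock */
    pthread_cond_t  wchan;   /* stands in for the wait channel      */
    bool held;
};

static void sleeplock_acquire(struct sleeplock *lk)
{
    pthread_mutex_lock(&lk->guard);
    while (lk->held) {
        /* Atomically drop the guard and sleep; reacquire on wake.
         * Checking `held` and going to sleep form the tiny critical
         * section -- done without the guard held, there is a race
         * between the check and the sleep. */
        pthread_cond_wait(&lk->wchan, &lk->guard);
    }
    lk->held = true;
    pthread_mutex_unlock(&lk->guard);
}

static void sleeplock_release(struct sleeplock *lk)
{
    pthread_mutex_lock(&lk->guard);
    lk->held = false;
    pthread_cond_signal(&lk->wchan);   /* wake one waiter, any one */
    pthread_mutex_unlock(&lk->guard);
}
```

Note that the interface promise matches the lecture's point about wake-one: `pthread_cond_signal` wakes *some* waiter, with no ordering guarantee, so callers must not assume FIFO behavior.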
Yeah? No, okay, so let me make a distinction here, and this is important for you guys to understand as programmers, and something we should have taught you four years ago. Okay, when you use other pieces of code... so when you're implementing your synchronization primitives, you're using the wait channel. We gave you the wait channel. Now look, if you want to, it's your code; you can make the wait channel do this. You can say that part of the wait channel interface is that it is going to use a FIFO queue. But the description of the interface that you're using does not say anything about wake-one other than that it wakes up one thread. So it gives you no guarantees.

In the more normal case, you're using an external library that somebody else wrote, and so you have two options. You can take that library, make it your own, start maintaining it, and make it do exactly what you want. But that's usually not what you want to do. You want them to maintain the library, because that's what they're doing; that's how you collaborate on things. But if the library maintainer says it wakes up one thread, and you're assuming that it wakes up threads in FIFO order, then some day you may start failing all of your tests, right? Because the library maintainer decided it was more efficient to store the threads in, like, a heap or something like that, and it stopped behaving the same way. So this is not about the fact that we can't guarantee that the threads come out in FIFO order; the interface doesn't guarantee it. This is really important to understand when you guys read interface descriptions: if the interface says X, that's it. That's all you get.

Actually, just as a side anecdote: I always have to ask this question. Are you guys old enough to remember back when Microsoft was getting in all this trouble for antitrust violations?
You guys were maybe in diapers, I guess. Okay, some of you are old enough. Yeah, so at some point... and it's kind of funny now, because Microsoft seems a little behind the times now, but back then people were really scared: Microsoft's dominating the market. And one of the things people were concerned about was this idea that if you use Windows, if you use the Windows API, you don't have access to the source code. You call some part of the Windows API and you don't know how it's implemented.

So let's say you're building Internet Explorer... wait, let's say you're building Chrome. You get this big book, that's like that thick, that says "Windows API documentation." Okay, and then you read it and it says this function does this. But the problem with the Windows API is that after 20 or 30 years, it's got like 18 functions that do any one thing you want to do. Okay, so it's like: read a file. Take your pick off the menu; there are 15 different functions that do that, in slightly different ways. So if you're Chrome, you scratch your head a little bit. You're like, okay, I'm gonna pick that one, and hopefully it works. If you work on Internet Explorer, you pick up the phone and call the guy who maintains that code, and you're like: which is the fastest? Which is the best? Which is the one you're gonna keep maintaining? Which is the one that's gonna disappear in the next version of Windows? And he's like, oh yeah, I'll check that out for you. So this was one of the concerns people had about collusion between Microsoft writing software and Microsoft writing the operating system:
the software that Microsoft wrote could make assumptions about the way the operating system worked that other people couldn't. And you can imagine it going even further. The guy who's developing Internet Explorer might be like: it would be great if there was this system call that did X, right? Why don't I call up the guy who works on that team and be like, by the way, it would be really helpful to us, it would really improve our performance, if you would do this one thing. Oh, okay. Whereas if you're at Chrome: no way. You just don't have that opportunity. So this was one of the concerns back in the day about this sort of collusion: Microsoft understood the internals of their interfaces, and everybody else had to use the interfaces as written and wasn't able to make those sorts of assumptions. Interesting historical artifact.

That's another question; I don't know how to answer that one. You would have thought, with all the advantages they had... but that's the thing. Imagine how bad it could have been, right, if they couldn't have made all those assumptions. Anyway.

Yeah, well, Internet Explorer is still alive, I guess. Two people still use it, other than my father, who I've been begging for years: please stop. I will migrate all of your bookmarks to Chrome, and then you can stop calling me every day.

All right, let's forge onward. Okay, locks are designed to protect critical sections. That really is the number one reason to use a lock, and if you think about the semantics of the locks you guys have designed, you'll see this. Locks are not a communication or signaling mechanism. They are a correctness mechanism. They're designed to make sure that a bunch of things can access one piece of shared state, and do so safely. They are good at that; that's what they're for, right?
That's their core purpose. They are not good for other things. Now, to some degree lock release is kind of a signal, because what lock release is doing is signaling the other threads that are waiting that they can come in now. Okay, but this is a very, very limited form of signal.

Okay, so let's talk about other types of conditions that threads working together on a problem might want to notify each other about, and we're going to do this with a buffer example. So for example, I might have one thread that puts data into a buffer for another thread to process, and the first thread wants a way to notify the other thread that there's data waiting for it. Okay? This happens a lot, right? It happens a lot in programs that you guys use, where there are multiple threads working together to do things. You know, one thread takes the input from the mouse click, figures out where you clicked, adjusts some state, and then calls some other thread to re-render the UI or something like that. Or: your child has exited. How do I notify a process that one of the subprocesses it created has exited?

So in contrast to locks, which are really entirely a correctness primitive, condition variables are a signaling mechanism. Condition variables allow threads to signal each other when certain things happen, and to wait for things. Probably the most confusing thing about condition variables is the "condition" part. Or is it the "variable" part? Well, maybe it's both parts, actually, condition and variable, because there's no condition or variable in the condition variable, right? How many people implemented condition variables and were sort of confused? Like, that's it? That's all I had to do? Where's the condition? Where's the variable?
Outside the condition variable, yeah. So condition variables are usually used as part of some conditional expression, frequently inside a while loop, and with some shared variable that multiple threads are accessing. And the condition variable you can think of as a way for one thread to notify other threads... speaking of noise, the geology department's moving rocks around again. Okay, so you can think of the condition variable as allowing one thread to notify other threads about interesting changes to the variable. Right? There's some change to the variable that I want to alert other threads to, because it's going to cause them to do something interesting. And this condition usually represents some change to shared state: a variable has changed, and some condition that you're waiting for may have changed as well.

All right, how much time do we have left? Oh, five minutes. This is awesome. Don't start putting your stuff away; you guys are sending me these signals like it's time to stop. I'm not done.

All right, so condition variables can convey much richer information about the world. It's really any information, right? All locks allow threads to do is tell each other that they're done using a critical section. That's it. With a condition variable, the information that I convey can be a lot richer.

All right, so let's go through an example with a shared buffer. This is sort of a classic condition variable example; you can find code for this online. So here's what I want to do. I have a buffer that stores information, some sort of object. I have a producer that puts things into the buffer, and I have a consumer that takes things out of the buffer. All right, why would I want to do this? It seems dumb. Why not just let the producer thread process the data itself? Why would I ever use a queue like this? Can anyone come up with a realistic example?
Again, it does seem dumb, right? Did I concoct this just for the purposes of this example, so that we could use a condition variable? I mean, is there an example where it might make sense to do this sort of separation of work? Yeah, okay, that's not bad. Pipes, right? I want the first process to keep doing what it's doing without having to wait. What else?

Well, any sort of website, right? So frequently on websites, when you guys click something or click submit, there's all sorts of stuff that has to happen. Like when you buy things on Amazon: when you hit "buy now," it's not like it completes the whole transaction while you're sitting there, right? It could spin until the package arrived on your doorstep: "oh, it's delivered, right now, okay?" No. Clearly, I mean, obviously, there are parts of the latency I want to hide, and parts I don't care about. But the point is, I want to do just enough work so that I kind of know that things are kind of okay, and then I want to redraw the display and finish the transaction in the background. There's transaction processing going on.
There are emails that get generated, blah blah blah, right? And those can be done in the background by separate threads, so the thread that's actually rendering the UI can return and give you useful information. This is a common way of reducing the latency of an interactive application: farming stuff out to background tasks. Right, I get a background task to send the email so that I can return immediately and show you a nice confirmation page.

Okay, so you might ask why condition variables count as synchronization mechanisms; we'll talk about this, and I think we'll start with this example on Friday. One of the things that's really important when I use cv_signal, cv_wait, and cv_broadcast is making sure that the condition doesn't change between the time that I notify other threads and the time that the threads check the condition. This is very similar to the interior of the lock that we talked about before, where there can be a small race. So for example, if I check that the buffer is full, and the buffer is still full, and then I go to sleep, but I don't guard that carefully, I can end up sleeping despite the fact that there is space in the buffer. This is exactly the same as that earlier example, right? If I don't guard this buffer state carefully, the other thread can get switched in and put some things in the buffer while I'm already on my way to the sleep queue, and now I have a situation I don't want: a consumer that's sleeping despite the fact that there's data in the buffer. And for this reason, you use condition variables with a lock.

All right, so let's stop here. On Friday we'll do the producer-consumer buffer example, and we'll talk about deadlock. I'll see you then. Good luck with assignment one.