All right, so first off, an apology. Apparently I set my office hours like two weeks ago, but I never posted what room I'm in, so I've been confused that no one's been coming. I was especially confused on Thursday, the day of the quiz, when no one came. So I posted on Discord where my office is and updated the Piazza post: it's BA 5110. Again, I didn't mean to do that. Someone actually told me at the quiz that they didn't know where my office was. I said, "oh, check the post," and it wasn't there. So, yikes.

Quick quiz update: we have a meeting with the TAs tonight, so hopefully it'll be graded sooner rather than later. So far the quiz average is around 83%, so hopefully it wasn't too bad — at least better than lab one. It's funny, when you have this many students, even the freebie question — "what course is this?" — got a wrong answer. Someone wrote "control systems." Unfortunately, this course is not control systems, so if you're here, sorry, you didn't get the point for that one.

All right, so today we'll be talking about semaphores, which are a more fun synchronization primitive. We saw locks before: they ensure mutual exclusion, so between a lock and an unlock call we have a critical section, and we know only one thread can be in it at a time. If we have accesses to the same memory location in there, there can't be any data races, because a data race needs concurrent accesses. But a lock does not help with ensuring ordering between threads, which is something we might want. So how do we ensure ordering between two threads? Let's look at a silly example. I have two threads: one runs print_first, the other runs print_second. The question is: how do I get one to always print before the other?
We know some ways to do this already — can anyone think of a solution? Yes, one thing we could do is cooperative threading. Actually, that alone wouldn't work, because we'd have to make sure the first thread always runs first — I guess if you control your scheduler you could schedule it first and then yield. What's another way? A lock? Okay, so where would I put it? In main I'm just going to create two threads and then join them. You want a lock around that — like a lock and unlock around the thread creation? Well, initially, when we run this, only one thread is active, because the process is created with only one thread running main. So the lock and unlock there would do nothing, since there's only one thread executing it anyway. Then we create our two threads, and we're off to the races.

So can I just reorder things to ensure an order between them? Yes: one thing I can do, if I want to make sure they run in order, is create the first thread that runs print_first, join on it — so I wait for it to finish — and only then create the other thread. This definitely works, but it's not going to work well in general: if we wanted the first thread to print its line and then run the rest of its code in parallel with the second, this wouldn't allow that, because we're essentially serializing our program.

We could do it with a mutex too, with a clever (if not exactly recommended) trick. We create a mutex; remember that initially it is in an unlocked state. This looks kind of weird: in the first thread, after printing, we unlock it; in the second thread, we lock it and then lock it again. If the second thread happens to get scheduled first, it will call mutex lock and then try to call it again — which is normally an error — but in this case it works, because the second lock call would wait for the mutex to be unlocked again, and then the first thread could come along and unlock it, letting the second thread get past its lock call. That's a super roundabout way of doing it, and not really how you should use mutexes, but it actually works. It's an unorthodox solution, though, and we can do a lot better.

So this is our problem again, and to illustrate: printf itself is actually thread-safe, but suppose we break it up into something like this, where we split each printf into three separate write system calls — "This is" and then "first," and so on. If we compile and run safe_print, we'll see the output in a jumbled order: thread two, thread one, thread one, thread two, thread one... Every time we run it, it looks slightly different, because the threads are just racing to see who can make each system call first. In that run we luckily got "I'm going second" and "This is first" intact, but run it again and we get all sorts of different interleavings. So even though printf is thread-safe, you may still need to ensure atomicity yourself for sequences like this. If we wanted to make safe_print essentially thread-safe,
we could put a mutex around all three of those write calls, so they all happen together and can't be interrupted by the other thread in the middle. You can do that for practice if you want: just wrap a mutex lock and unlock around all the write calls.

But the thing we're going to use here is called a semaphore, and semaphores are used specifically for signaling. Unfortunately, "signaling" is another overloaded word: it does not correspond to Unix signals at all. It's more like the actual English usage of the word — I want to tell you to do something. Semaphores follow only a few rules, really just three. A semaphore has a value that's shared between threads (and optionally you can use semaphores for inter-process communication, if you want to synchronize between two different processes as well, but that's an optional feature we won't get into in this course). The value will always be zero or greater, and there are only two fundamental operations. So the only things you have to remember: a semaphore has a value, plus two fundamental operations. wait decrements the value atomically, and if the decrement would make the value negative, it sits there and waits for another thread to post. post just increments the value. So: post increments the value; wait decrements the value and does not let it go negative. wait does not return until the value is greater than zero — if you call wait when the value is zero, it sits there and waits and waits, until another thread posts the value back up to one; then it can decrement it from one to zero and actually return from the wait call. The cool thing about semaphores is that you can set the initial value to whatever you want, and you can think of that value as the number of wait calls that can get through without any calls to post. So if we want to use that — oh, sorry,
I'll show you the API first. It looks like this: there's an include for semaphore.h, and then it looks a lot like a mutex — it has an init and a destroy. sem_init takes the address of a semaphore, which you can allocate however you want: put it on the stack, malloc it, whatever. The second argument is pshared, which specifies whether or not you want this semaphore to be shared across processes; for the purposes of this course we'll always set it to zero, which says don't share it. The third argument is value, the initial value you give the semaphore. Afterwards you destroy it. Then there are our two fundamental operations, sem_wait and sem_post, and there's also a version of wait that doesn't sit there and block: sem_trywait will try to decrement the value, and it tells you whether it succeeded, so you know whether or not you actually decremented the semaphore. These are all atomic operations — that's the point of semaphores. You won't have any data-race problems between them, because post and wait are atomic.

All right, so back to our problem of making one thread always print first (and just in the slide, I removed the return statement). Could we use a semaphore to make sure one prints before the other? Somehow? All right, let's go through it. We'll include semaphore.h and declare the semaphore — let me make sure I don't do anything stupid... "defined but not used," okay. Now we want to initialize it: sem_init, I give it the address, say it's not shared, and then an initial value. Generally it's easiest to figure out where to place our wait. If we set the initial value to zero — whoops, it's called sem — that means if something calls wait, it will sit there and wait until the value goes above zero, right?
So it would just sit there and block. It sounds like we should put the wait at the top of print_second. With the code as we have it now — initial value zero, wait in print_second — if that thread happens to get scheduled first (and again, we can't guarantee what order they run in, so we have to argue about both cases), it will hit sem_wait, the value of the semaphore is zero, so it will just sit there and wait for the value to go above zero. Even if that thread gets scheduled, it sits at the wait call and can't do anything. Now if print_first goes ahead and executes, it prints and then finishes — and thread two is stuck. So what are we missing? Yes: a sem_post.

If we add a post at the end of print_first, that ensures the order. If thread two gets scheduled first, the value of the semaphore is initially zero, so it sits there and waits — like the Monopoly man, it does not pass Go, does not collect $200. Whenever thread one gets scheduled, it prints "This is first," and only after it's done printing does it increment the semaphore, so the value becomes one. Then that thread exits — or at least becomes a zombie thread, but we don't have to care about that right now. Then thread two gets scheduled, the wait actually returns because it can decrement the value from one to zero, and it prints "I'm going second."

So if we compile that, we should see the right order. Again, running it a thousand times and getting the same answer doesn't guarantee there's no problem, but we can argue about the correctness — and if we run it a bunch of times, it actually does what we thought. We have ordering between our threads. And we could have more code after each printf that runs in parallel; it's not like our join case, where we just made everything sequential. This ensures just the one ordering constraint. So that's pretty cool. Everyone gets semaphores? We can do all the hard questions now.

All right, let's do something harder. This is on the slides too, so you have it: we always have print_first execute first and then print_second, no matter which thread executes first, if the initial value is zero — that's just what we went through. But what happens if we change that initial value from zero to one? Looking at our code: if the value is initially one, then if print_second gets scheduled first, it can actually get past the wait, decrementing the value from one to zero, and it just happily goes and prints. Then the first thread runs and posts the value from zero back to one, and we end up with a value of one at the end of the program — no ordering. Alternatively, if print_first gets scheduled first, it prints and then posts the value from one to two; then print_second gets scheduled and just decrements it from two to one. So the initial value of the semaphore really matters when you want to prevent things: we didn't change our post and wait calls at all, we only changed the initial value, and suddenly the semaphore essentially doesn't do anything anymore.
Question — what about three threads? Yes, even if there were a print_third and a third thread, we could ensure ordering between them, but you'd have to use two semaphores. The easiest way to ensure ordering for three threads is two semaphores: one that acts like this one, and another that does exactly the same thing between the second and third threads. You just chain them: print_second would post to a second semaphore, and print_third would have a wait on it, so the only way for the third thread to run is for the second thread to have run, which in turn needs the first thread to have run — it chains down. We can chain them as long as we want: four threads, five threads, all in the same order, just by adding more semaphores.

All right, so can we use a semaphore as a mutex, and if so, how? They kind of look the same, right? Is the initial value zero? Careful — the initial value would have to be one, not zero, because if it's zero, the very first wait would block immediately and never be able to decrement. So: a semaphore is a mutex if the initial value is one, with wait as lock and post as unlock. That way one thread can call wait and pass (decrementing one to zero), and then post it back up from zero to one; the value just constantly ping-pongs between one and zero. Set up like that, it behaves exactly like a mutex. A semaphore is a more general thing than a mutex — a mutex is actually kind of a special case of a semaphore — but a semaphore can do more, because the initial value doesn't have to be one; it can be anything we want. So if you want, you can go ahead and rewrite that data-race example we saw before using a semaphore instead of a mutex. But again, it's kind of messier.
You can go ahead and rewrite that example if you want, but can we come up with a solution to a bigger problem? This one is producer-consumer. Assume you have a circular buffer where each slot is either empty or filled with some data — we don't really care what the data is right now. The producers should be the only ones to write to the buffer, and only if the buffer is not full; the consumers should read from the buffer as long as the buffer is not empty. All consumers share one index and all producers share another index; initially both are zero and the buffer is completely empty. So at the beginning of time, the producer threads and the consumer threads, as groups, both point at the first slot, and the buffer is empty, so the consumers essentially have to wait for the producers to start filling slots; then the consumers can come in, pull the data out of the slots, and actually use it.

Let's look at the code for that, because it is much more involved. At the basic level, this is our producer thread. I'm just going to use a semaphore to keep track of the number of elements I'm producing and consuming — that's just for some accounting. You could imagine it being more involved if you were actually doing this, or you might just loop infinitely, but in my code I sleep to simulate doing some work and then I fill a slot — initially that would be slot zero — and loop until I've produced everything. The consumer is exactly the same, but it empties a slot instead of filling it, and then simulates doing some work to consume that data.

So let's run this producer-consumer. The two arguments — what did I say? — are the number of producer threads and then the number of consumer threads. If I run it, I'm going to produce 15 items, my buffer size is 10, and each operation takes some time. Running it, I see a lot of conflicts — I print a red message any time I detect a data race. Initially I have a bunch of producers and consumers with no ordering between them, so a consumer immediately tries to empty slot zero, which doesn't have any data in it yet. That's an error: the slot is already empty and I shouldn't have emptied it — a producer fills slot zero only after that, so the consumer should have waited a bit. Then the same thing happens again: a consumer empties slot one right before a producer fills slot one, over and over. Eventually it catches up, because the producers happen to fill slots a bit faster, and then it starts emptying them and we have no issues — until the producers wrap around. A producer fills slot zero again because it wrapped around, but slot zero hasn't been emptied yet, so it just overwrote data that hadn't actually been handled. The slot was already filled and we wrote over it, a few times, and then we end. So we screwed up a lot of times there. How would we ensure some ordering between these threads? What takes longer? The producer takes only 100 milliseconds and the consumer takes 120 milliseconds. All right, so how would I fix my issue?
Okay, so what do we want? We want to make sure slots are actually filled before they're consumed, and that we never overwrite filled slots. Check if the slot is empty? Well, we just learned about semaphores, so let's try to use one. Let's keep track of the number of empty slots: I'll have a semaphore that represents the number of empty slots. One of the first questions is going to be what to initialize it to — I'll just put zero for now. So which thread needs to wait for empty slots? The producer, right. If the producer has to wait for an empty slot, the easiest thing to do is put my sem_wait on empty_slots right here, at the top of the producer loop. (By the way, the sem_trywait you see is unrelated — it's like trylock: instead of blocking, it tries to decrement the value and tells you whether it did. I just use a separate semaphore to count how many times the loop runs: I say how many elements I want to produce and decrement it each iteration until it's eventually zero. That semaphore has nothing to do with ordering.)

So the producer waits for an empty slot. What creates an empty slot? The consumer, after it empties one — so after emptying a slot, the consumer should post to empty_slots. With that I'm a bit closer, but the question is what the initial value should be, because if I run this producer-consumer with, say, four producers and four consumers, some bad things still happen. I still see "slot already empty" — which is okay for now, because I'm only trying to get rid of the error where I fill a slot that's already filled, and it seems like I did. But there's still an issue: should the initial value be zero? In this run, which thread ran first — and which will always run first with this setup?

The consumer, right. Because I have a wait in the producer and my initial value is zero, if a producer happened to get scheduled first it would just sit there and wait — even though all the slots are actually empty. I should change that, because otherwise the producer has to wait for the consumer to "empty" slots and post, every single time. It's also weird to start at zero, because: how many slots are initially empty? All of them — whatever my buffer size is. Starting at zero means the consumers would have to increment the semaphore all the way up to buffer size before the producers could really run, which would pretty much guarantee some kind of data race. So I should just set the initial value to the buffer size. Oops, and I misspelled it.

All right, that's one side of the equation. The other side is keeping track of the number of filled slots, right? So let's do that: we'll have another semaphore, for filled slots. Again, it's easiest to place our wait first. Where do we want to wait to see if any filled slots are available? Right before we empty a slot — so sem_wait on filled_slots in the consumer, to make sure there's at least one filled slot before emptying it. And what's the thing that fills a slot? The producer — so sem_post on filled_slots there, after filling. The last question we have to answer is: what should this one's initial value be?
Zero, right — because initially there are zero filled slots. If we do that, then say the producer threads get scheduled ten times in a row. They would wait on empty_slots — and initially empty_slots is the size of the buffer, so it could allow ten producers through (or however many slots the buffer has) all at once. All ten could pass the wait call and fill slots, which is fine; we just don't want to overwrite any data. We don't want eleven, because two producers would obviously have to share the same slot and would probably overwrite each other. So even if only the producers were scheduled, they could go through and fill up every single slot, posting filled_slots each time — which might go all the way up to the size of the buffer. The consumer, meanwhile, waits on filled_slots. If a consumer happens to get scheduled first, filled_slots is initially zero, so it just sits there and waits: it can't do anything until at least one slot has been filled. Then it empties the slot and posts empty_slots, so that a producer can go fill that slot again. Any questions about that?

Hopefully it works. If we compile it — no, I already compiled it — I can start any number of threads; I'll do four and four. There we go: no data races this time. You can verify what happened. It filled slot zero first, emptied slot zero, then filled three slots in a row — again, we don't get to control what gets scheduled when. Then it emptied the slots that had been filled, three of them as it happens. Then it filled slots all the way up to seven — it could only go up to seven at that point — then filled slots eight, nine, ten, and now it could actually wrap around, because slots zero and one had already been emptied, so it still had room to fill those. Then it emptied some more, then filled slots all the way up to four, then emptied all the way up to four, and we're done: no data races whatsoever. We're good.

All right, any questions about that? Yes — what if the producers and consumers started at different slots? Then this wouldn't work, because we'd have to keep track of exactly which slots are filled. This only works because they start at the same slot and go around the buffer together; otherwise we'd have to do something a bit more involved.

Okay, an additional question then. I can have multiple producer and consumer threads; the producers share an index, the consumers share an index, and those indices are shared between threads. So what is likely going to happen in fill_slot or empty_slot, which I so nicely wrote for you? Is there a potential data race in there? Yes: if all producers share the same index, then filling a slot at the very least increments that index to point at the next slot. That's a data race we already saw — it's exactly the same as the ++count example. So one thing we can guess is that the code should look like this: the easiest way to prevent the data race is to throw a mutex around the whole thing. I create a mutex — I just call it mutex — lock it, and then within fill_slot I access my buffer. I'm modifying the buffer, and not only are all the producers sharing an index, all producer and consumer threads have to share the same buffer, right?
So I have to protect all the reads and writes to it. In fill_slot I just put a mutex around the whole thing: I fill the buffer, increment the producer index, and mark the slot as filled, all inside the lock. And because we are sharing the buffer, we also have to put a lock around empty_slot as well — a lock and an unlock around the whole thing — so that I prevent data races there too.

Question — yes: I wouldn't need to get as involved if I only had one producer thread and one consumer thread, because then the only thing modifying the consumer index or the producer index would be a single thread, and if only one thread modifies a variable, there's no data race on it. So I wouldn't need to protect the indices. I would probably still have to protect accesses to the buffer itself, though.

Another question, about the start. So initially it's slot zero, and all the slots are empty, right? Let me scroll back up to fill_slot. Initially they're all empty. What happens if the producer gets scheduled first? That's fine: it can fill that slot, because it's empty. If the consumer gets scheduled first, the number of filled slots is initially zero, so it sits there and waits. For the first element, the first thing we know has to happen is that its slot gets filled, because everything is initially empty. And yes — there is the potential of a data race on that buffer, but
empty_slot and fill_slot are the only places I modify the buffer, and both of them have a mutex around the whole thing. Can two threads be in those blocks of code at the same time? No. I have one mutex, with a lock/unlock pair around fill_slot and a lock/unlock pair around empty_slot. Because of that, I know that at any specific time either one thread is in this critical section or one thread is in the other — and never both. Remember, a mutex means only one thread is allowed past the lock at a time, and here they're sharing the same lock. Either empty_slot gets called first, its lock call passes, and it's the only thread executing that section; or, if one thread is in the middle of empty_slot and another thread comes in and calls fill_slot, the first one already holds the lock, so the second just hits its lock call and sits there and waits. So I have two critical sections here, but only one thread can be in either of them at any particular time. That's another way to use a lock: you can have multiple critical sections share the same lock and know that only one thread will be in any one of them at any given time.

All right, any questions about our producer-consumer? It's kind of cool. On the slides, this is presented as solving the problem in two steps: first ensure that producers never overwrite filled slots, then that consumers never consume from empty slots — which is just what we did. So here was our final solution to ensure proper ordering for producers and consumers, and again, it really depends on the initial values to be correct. What happens if we initialize both semaphores to zero?
It's just not going to run, right? If both are initialized to zero, the first thing that both the producer and the consumer do is wait, and if the value is zero, wait doesn't return. No matter which gets scheduled first, one's going to wait and the other's going to wait, and no matter how many threads there are, nothing will ever call post. The program will essentially just sit there and think about the poor decisions it's made.

All right, so: we use semaphores to ensure some proper ordering. They contain an initial value that you choose, and they have essentially just two operations: increment, which is called post, and decrement, which is called wait — and wait blocks if the current value is zero; that's the only difference between the two. And you still need to prevent data races, because semaphores only ensure ordering. You still have to identify data races, and they're the bane of your existence.

Okay, so let's go over a question about that setcontext thing that's in lab two. Has anyone looked at this before? Hopefully it's new. This is a question from a midterm; Ashen went over it, and it's probably a good thing to go through — hopefully it helps a bit with lab two. The setup: you know that implementing thread switching is a little tricky, so to simplify matters you start by implementing thread switching between just two threads, A and B, as shown below. You have added A and B to the run queue and initialized their thread contexts so they start executing thread_a and thread_b. When they run for the first time, your scheduler happens to run thread A first. So we have two threads, initialized so that thread A starts running here and thread B starts running there, and we're supposed to circle the output we expect to see, from a whole bunch of options. So let's work out what the output should actually be.
We're going to run thread A first, so let's mark the active thread in blue. Thread A goes first. The first thing it does is declare d on its stack and set its value to zero. Then it goes into the while loop. i is a global variable up there, set to zero, and because it's a global it's shared between the threads. The condition holds, because i is less than three, so it does ++i, updating i from zero to one, and then it prints "a" and the value of i. So it prints "a 1" first — everyone agree with that?

Then it sets d to zero again and calls getcontext for thread A. The next time anyone calls setcontext on thread A's saved context, execution will resume right there — so let's put a little placeholder: this is where thread A would resume if setcontext is called on it again. Then it hits the "if d is zero": d is zero, so it takes the branch and updates d to one. That's on its own stack, so we update it from zero — that was a big eraser — to one. Then it calls setcontext for thread B, and thread B starts executing and becomes the active thread. This was the state of thread A's stack when we left it, so let's mark down the value of d: whenever something calls setcontext on A's context, we know what d is.

Thread B is going now, so change the marker to blue. It declares d on its own stack and sets its value to one. Then it comes into the while loop: i is one, which is less than three, so it enters and increments i. At this point i is not one, it is two, and it calls printf with the value of i. So what does it print? "b 2" — everyone with me so far?

All right. Then it sets d from one to one — wow — and calls getcontext for thread B. If B resumes later, it resumes at that point, so I'll put a green marker down: thread B would resume here. Now it checks the if statement: d is one, so it takes the branch and sets d to zero on its stack. Then it calls setcontext and thread A starts resuming again. We mark off what d was when B switched away: d was zero. And back in thread A, d was one.

So now we're executing thread A again. We check the if: d is not zero, because A set it to one before switching away — and remember, its stack is private, so thread B's writes didn't change it. So the if evaluates to false, and we fall through, go back up, and check the while loop again: is i less than three? Yes, it's still two — that's our global. So we do ++i, i goes from two to three, and it prints "a 3." Then it sets d to zero and calls getcontext again — we'll put down another marker, just in case: thread A would resume there. Then it goes into the if statement — we're still executing the blue one — sets d to one, and calls setcontext. So A's d is now one, and we move over to thread B.

Change the marker to blue for B. It checks if d is equal to one: it's not, it's zero — B set it to zero before switching — so it falls through and goes back up to the while loop. The current value of i is three, so the condition fails — boop — the loop exits, and it's done. And that's it: we got "a 1," "b 2," "a 3," and hopefully that's one of the answers. It's the last one — sweet. And it's because the question said the scheduler runs A first. All right, any questions about that?
That's roughly how setcontext and getcontext work. All right, sweet — we finished two minutes early. Just remember: I'm pulling for you. We're all in this together.