All righty, welcome back to Operating Systems. So today we get to talk about semaphores, and not the fun flag kind that you signal ships with. So first, we learned about locks already. Locks ensure mutual exclusion, AKA only a single thread can execute the critical section at a given time. And we had this because we wanted to prevent data races. So that critical section, that piece of code between the lock and the unlock call — well, only one thread in there at a time, that's great. But it doesn't help us much if we want to ensure some type of ordering between threads. So we might want to coordinate a little bit. How do we ensure ordering between threads? So let's look at this problem. You have one thread that just prints "This is first" and another thread that prints "This is second", and ideally you want them to always print in that order, guaranteed. So let's look at the code. Here in our main thread we create two threads: one thread that executes print_first and another thread that executes print_second, and then afterwards we join them. Above here, in print_first we just print "This is first", and in print_second we print "This is second". Our goal is to always make them print in that order. So does this work initially? Well, let's see. So hey, it worked, we're good. The first one printed first and the second one printed second. Oh no, we have broken it by running it, what, three times? So that's not good. So do we know how to fix this? Is there any way I could make this always print "This is first" then "This is second", guaranteed, with what we know already? Yep, give the first thread a lock. So I should create a lock. So I have a lock. What do you want me to do with it? Then unlock after the print. In the second thread I can join the first thread. So right now, inside the second thread, I can't access the thread handle from main. So only start after the other thread finishes. So how would I do that?
Well, first let's execute this locking solution. So: "I'm going second", first, second — didn't seem to do much. Yep, sorry. Join after the first thread create. So just do a shwoop-de-doop? Will that work? Yeah, this is the same as executing them sequentially, but it should work. It's always going to print "This is first" first because, well, our problem was we created two threads and the OS could have scheduled either one to run first — we had no say in the matter. In this case, we're just creating one thread and waiting for it to finish, then creating the other thread and waiting for it to finish, essentially making it sequential. But it's always going to work, and that's about the best we can do right now. Yeah, that's about the best we can do, and I guess it works for this case, but there might be a bunch of other stuff — like, assume there's some useful code after the prints, and we just want the prints to happen in order and then both threads to execute in parallel. So your solution works for the prints but doesn't get us quite where we want if we have a more complicated program. All right, what about hacking this lock? That was a good idea. Can I hack it a bit more? So, at this point, if I just call unlock, unlock doesn't do anything, but let's do something fun. What if, in print_second, I try to get the lock, and I have print_first eventually unlock it? So I'll just grab the key and then I'll just try to get it again. So I'll grab the key, then wait for print_first to execute, print "This is first", then it unlocks it, and then I can lock it again. Would that work? I see some confused looks. So, in the case that print_second goes first for some reason, it essentially grabs the key, locks it, and then it's trying to call lock again. It can't do it, because it itself has the key. It's not allowed to go.
Then eventually print_first will run and print "This is first" — so print_second still hasn't happened — and then unlock it, which lets the second thread pass through this line, get the lock, and then print "This is second". Yeah. Well, I need a second unlock in this case here, so I can just do that, yeah. So, if the first thread runs first, it unlocks the lock even though it's not locked — that's "fine", it just doesn't do anything. (That's not actually fine, but let's see.) Oh, that was a fluke, don't worry about that. It was just thinking really hard. There, worked. So, it technically works, because it never prints second before first. Sometimes it just doesn't print second at all, and that's your problem — you just need to restart it. See: it didn't work, it worked, yeah, fine. So, this kind of works. It was an awful idea because, well, it kind of works if the second thread goes first, but if the first thread goes first, well, it unlocks an already-unlocked lock, which does nothing. And then the second thread locks it — it acquires the key — and then it calls lock again and can't pass, because it itself holds the key. Nothing else is going to unlock it at that point, because thread one is done. So it will sit there forever waiting, and, well, that's a very lame version of something called a deadlock. So we all get that — that's kind of a bad solution. This is also full of undefined behavior: if you actually read the documentation of a mutex, what I did here — this unlock in the first thread — well, you are only allowed to unlock from the thread that acquired the lock. So technically that's not allowed, even though it kind of works sometimes. I should not be calling unlock when I don't hold the lock. That is undefined behavior; that is no good.
So, yep — if the second thread executes first and calls lock, that lock currently isn't held by any thread; the key's still available. So it grabs the lock, and then, well, it can't pass the second lock call. It can pass one but it can't pass the other. The first lock call, essentially, grabs the key so that the second lock call stops the thread from making any progress until it gets unlocked by the other thread. That's why we had two in a row here. But this is a bad idea. All right, it kind of works. Like, it technically works, which is the best kind of works: print second never happens before print first — it's just that sometimes print second never happens at all. It's like malicious compliance or something. So, the locking idea was still good, but you have to be able to differentiate between when you need mutual exclusion and when you need some ordering. For example, say print_first was implemented in this silly way, with the line built out of several separate write system calls. This would not be considered thread safe, because if we execute it and they're all different system calls, well, sometimes we get the output in a sensible order, and sometimes we get it in just weird orders. So this "I'm going second" shows up in the middle of "I'm going first". What? So, if I wanted to make this safe and get a whole line at a time, that's where I would use mutual exclusion and my locks: if we set up a mutex, we make sure that if print_first grabs the lock, it does all three of those write system calls in a row without getting interrupted, without switching back and forth, and similarly in print_second. So if we wanted to fix that with the mutex, we could create a mutex, make sure we lock it, and then only one of these is going to execute at a time, because now we have mutual exclusion.
So, if this thread gets the lock, well, it prints all of these in a row and we get the whole line, and then if the second thread gets the lock it prints all of its parts in a row, in which case we'll get "I'm going first", "I'm going second". We don't know what order the lines are in, of course, but we don't have the case where a line gets split or something weird like that. So, that's a new consideration for you whenever you call functions. In the documentation you will see that some functions declare themselves as thread safe. If they're thread safe, that means you can call them from multiple threads and they won't corrupt values or leave things in some partial state. Luckily for you, malloc is thread safe, so you can call it from multiple threads and it guarantees it doesn't have any data races. Libraries might have internal data races if you don't check, in which case they're not safe to use with threads, and yeah, that makes your life very much more difficult if that's the case. All right, so we're good on this. Mutual exclusion — save that, yep. Yeah, so we have an idea: can I just make a global variable — which I guess would have to be operated on atomically — that just holds some value I wait for and check? Well, today's lecture title is "semaphores", and you essentially just described one. So that is what we will figure out how to use today. It's already been invented and we can take advantage of it. So, semaphores are used for signaling between threads and processes. This, unfortunately, is an overloaded word. It does not mean signals in the Unix sense, the ones that are like interrupts. This is just signaling in the English sense — notifying someone of something.
So, a semaphore is basically just an integer — an unsigned integer — that is shared between all threads, and optionally you can share a semaphore between processes if you really want, so you can have some ordering between processes if they need to communicate. Think of the value of a semaphore as just an unsigned integer: it's going to be zero or greater, and it has only two fundamental operations. They might be called different things depending on what literature you read, but the names the library uses — and the names I'll use — are wait and post. All wait does is decrement that value atomically, and post increments that value atomically, and the one rule that makes this useful is: wait will not return until that value is greater than zero. So the number can never go negative. If you call wait and it wants to decrement the number, but the number is currently zero, it won't do it until some other thread increments that number — then, once the value goes from zero to one, wait can decrement it from one back to zero. Initially, we can set the value of the semaphore to whatever we want, and basically that is the number of wait calls that may occur without any post calls — so that many threads could get through wait even if no post calls ever happen. The API looks a lot like the pthread mutex API. It's its own header file, semaphore.h. There's sem_init — you have to initialize it — there's sem_destroy, and then there is sem_wait. That's the decrement, and if the value is zero it blocks until the value has been incremented by another thread. Then there is sem_trywait, the non-blocking version, which tells you whether or not it decremented successfully. And sem_post just increments — you shouldn't have to check the value for that, because it should always work. All of these functions return zero on success.
You might notice here in the init there's this pshared argument. It's a boolean: if you set it to one, the semaphore will live in shared memory and be shared with any process you fork. That's how you share it between processes. If you set this second argument to zero, then when you fork, the new process gets its own independent copy — it won't be shared between the processes. So, let's see how we solve our problem now. We're going to make sure that one thread always prints first. So, let us go back. Whoops. Let's get rid of our poor — well, not our poor idea, but the way I used it, probably not great. Whoops. Yeah, sure. So, let us declare a semaphore: sem_t sem. Let's create one as a global variable, and then in main we will initialize it. So, sem_init — it takes the semaphore, whether it's shared, and... hmm, what should I set my initial value to? Hmm. Yep, all right, zero. Let's set it to zero. So, if I want print_second to always go second, well, I should probably do something like sem_wait, right? That should decrement it. Its current value is zero. So, if the second thread gets scheduled first, what happens when it gets to the wait? It should block, right? Because the current value is zero, it's going to wait for that value to be greater than zero. So if I want this to proceed, I probably need a post. Where should I put my post? Yeah, after the print in the first thread, right? So, sem_post, something like that. So, should this always work? Let's argue about it. If the second thread executes first, well, it hits sem_wait and has to block, because the current value is zero, right? Okay, so it can't make progress — it's stuck there. Even if we gave it all the time in the world, it can't reach that print line. So that means the first thread would execute first and print "This is first".
There's nothing stopping it from doing that, and then it changes the value of that semaphore from zero to one and finishes, or does whatever, and then this thread can proceed: it changes the value from one back to zero, prints its line, and it's done. So that works if the second thread goes first. What happens if the first thread goes first? Well, it just prints immediately and then changes the value from zero to one, and then the second thread executes and doesn't really have to block — it just decrements, one to zero, and then it prints. So in both cases I get the order I want. Let's see if we fixed it. First... wow, look at that, we fixed it. And I don't want to sit here and run it ten billion times, because we argued about it: we enumerated all the possibilities that might happen — which in this case, thankfully, is two — and we can see that, hey, it has to be in that order no matter what, because of that wait call. So, we all agree that works? Yeah — no, you can only use the API to access it; it keeps track of that value internally, and you just initialize it to some value. So, you created it — yeah, yep, it could. So, what was the first question, sorry? Yeah, so this is the first tool we know that ensures ordering. There are other ways if things get more complicated — we'll see next lecture — but this is one of them. Yeah, and that's another thing: a semaphore is just a value you declare, just like mutexes, so I can create multiple of them if I want. If it gets really complicated, having separate ones for different tasks is probably a lot easier to read. All right, so this worked and we're all good on it. All right, here I will make a subtle change. Watch: I can break your code by changing just one number. Does this work now? No — why?
Yeah — the second thread can just go right past the wait, changing the value from one to zero, no problem. I broke it. And that kind of sucks, because the correctness depends on this initial value: I can't just read the wait and the post calls, because, guess what, the initial value really matters. So it can be a pain like that. So, do do do. Oh, yep. So, how is this any different from using join and being serial? Whoops. Oh yeah, and there's a question: can you order three threads like this? You sure can — we can order as many threads as we want. So yeah, the nice thing about this is, say each thread does a long calculation after its print. We still get the ordering between the prints, but those calculations can actually run in parallel. Say the first thread prints "This is first", posts, and then does a little bit of work before it gets context switched. Then the second one prints its line, and then they can work at the same time, switching back and forth over and over, or genuinely running in parallel. With just join, you have to wait until a thread is completely done — you can't wait for it to do just one little thing; it's all-or-nothing. So: for preventing data races, that's where the mutex came in, and this lets you have some ordering between the threads without one having to finish all the way before the other starts, which might be useful. Yeah, sorry? No — if it hits the wait, it blocks until the other thread posts. The only part that is serial is that the second thread waits until that one print happens, that's it. After that, they can run in parallel, you can context switch between them, they can both make progress at the same time. Just initially, if we start executing print_second, it's going to have to wait until print_first does its one print line, and that's it.
And then after that, it's full go, in parallel. Yeah — if I had a third thread waiting, it's just like mutex lock: you don't know which one wakes up first. Ideally there'd be some order, so it depends on the implementation — ideally they'd wake in the order they called wait, but I'd have to double-check the implementation; it should be fair. Then also: if you initialize it to one, can you still make this work by waiting? Yeah, let's fix it. If I initialize it to one — well, here, I fixed it: I can wait twice. In this case, yeah, this should work, right? If I set the initial value to one, it should work; I just have to argue about it in different ways. So this first wait could go from one to zero and then the second wait gets blocked until the post takes it from zero back to one — or I might have the weird situation where the post happens first, so it goes from one to two, and then one wait brings it from two to one and the other brings it from one to zero. Generally, you don't want to do that, because I just added another wait call instead of simply setting the initial value to what it should have been, which I still could. All right, so we're pretty good at semaphores — we can do something way more complicated now, right? All right. Oh — what's the reason for letting us pick an initial value? Yeah, well, let's get into it. So, here's what happened: we answered the question at the bottom, which was basically what happens if we initialize the value to one — it stops it from working unless we add another wait call, and then we fixed it. So, this kind of looks familiar. Our semaphore is kind of like a mutex. Can I use a semaphore as a mutex? Yes — how? Yeah: going from one to zero and from zero to one is kind of like locking and unlocking. So which one's which? Yeah: one to zero is locking, zero to one is unlocking, right? So that means I could use it that way: if the initial value of the semaphore is one, I can use it like that, lock and unlock.
So, I use wait for lock and post for unlock, and I always pair them. Works exactly like a mutex. A mutex is actually just a special, constrained version of a semaphore that's a lot easier to use — you can screw up a semaphore much more easily than a mutex. So, here's how to use it as a mutex — and note the initial value. If we tried to fix that counter example from before, well, I can just drop it in there, make sure the initial value is one, and then for lock I use wait and for unlock I use post. Again, this is a bit loose because, well, if I really wanted to, I could just post the semaphore 80,000 times, or read the value if I could hack into the thing or whatever — but used this way, the semaphore works as a mutex. So, let us get into the hard problem of today. Yeah — and that was one reason for letting you pick the initial value: you might want to use it as a mutex if you really want. So, let's come up with a solution for this problem. This is real fun. Assume you have a circular buffer — that's the fancy word for an array where, if you go past the end, you just restart at the beginning. So if I'm at the last element and I go to the next element, it brings me back to the beginning: if there are n slots, index n minus one is the last, and the next one wraps back to zero. And we have multiple producer threads and multiple consumer threads — that's a lot of concurrency, which will be fun. So, we have a few rules here. A producer writes to the buffer — it fills slots with data. A consumer reads from the buffer and consumes that data. And there are a few rules about this. These are all just data at indices in a big array.
So, a producer should only write to the buffer if that slot is not already filled with data, because otherwise we'd be overwriting data that hasn't been handled yet, and we'd lose information. We don't want that to happen. And a consumer should not read from a slot that's empty, because there's nothing in it — it should only read valid data. I've set this up so that in my code all the consumers share an index and all the producers share an index, and there are no data races in advancing to the next element or anything like that. So you can assume no data races among the producers or among the consumers — but those two rules above are not enforced yet. We have to enforce them. So, let's see the code. Producer consumer. All right. Here's the code. There's some space to initialize some semaphores, and just don't worry about the while conditions or anything — just look at what the producers are doing. Each producer thread simulates doing some work and then tries to fill one of those slots. I can have any number of producer threads all trying to fill slots, and right now there's no coordination between producers and consumers. All a consumer does is try to empty a slot, and then simulate doing some work on that data. So, if I just run this — producer consumer, say 10 threads of each, generating 15 pieces of data — I can see some issues. I have it printing a red line every time we do something bad. Initially, we tried to empty slot zero — but initially the array is completely empty, so we tried to empty slot zero and it's already empty. We read invalid data, or just no data at all. That's not good. Then it went on and on again. And then it looks like all my consumer threads stopped and all my producer threads started: it filled slots zero, one, two, no problem.
And then the consumer came and emptied, what, five slots, and then the producer filled up another five slots. So it kind of worked. So, when I run this, the two integer arguments are the number of producers and the number of consumers. I can change this a bit to have more producers than consumers and see some more interesting things. If I do that, I can see that initially my consumers run and each tries to empty a slot that is already empty — that's not good. Then I fill all the slots, I empty some more, so now these are filled, and now I start filling slots again. So now I'm in the situation where I'm filling a slot that has already been filled: no consumer has read the data at slot zero yet. So I get both errors now, which is not fun. If I want to fix this, this is an ordering problem, so I should probably use semaphores — especially considering I left a spot in the code to initialize semaphores. So, any ideas what I should do here? Yeah — and don't worry about the loop bounds; I'm actually using some semaphores myself just to count the number of things each thread produces, because I want it bounded, I don't want it to go on forever. So yeah, don't worry about the while loops. Yeah — two semaphores, one for reading, one for writing; yeah, same idea. So, I should wait in a consumer? Okay. With semaphores, it's typically easiest to go one step at a time, and usually placing the wait is the first step. In this case, I want to prevent reading a slot that has no data in it, so my consumer needs to wait. It needs to wait on a semaphore — let's give it a name. What's a good name for it? Sure, filled_slots. So I'll basically make sure that there is at least one filled slot.
So, I call it filled_slots, get rid of that compiler warning, and let's initialize it. Initialize filled_slots to what? Yeah — the size of the buffer, which is what I call it: buffer size, it's up there, BUFFER_SIZE. Like that, yeah. Oh, we're getting a fight — zero? Yeah, zero. Let's see why. If it were the buffer size — say 10 — and we have a wait on it, well, a consumer would wait, take it from 10 to nine, and that thread could run first, right? So we'd be reading a slot with no information in it. So, yeah, yep. And in this setup, because the two sides move in lockstep and I don't have any data races among my producers or among my consumers, it's fine — they always access slots in the same order. As long as I obey the rules, I won't have an issue, because if there's one filled slot, it's going to be slot zero first, and then I consume slot zero first, always in order, and I wrote it so it doesn't have any data races. If I hadn't given you fill-slot code that's free of data races, then yeah, you'd have more problems, but because both sides access the buffer in the same order, I don't have that issue. Okay, so, where did we get? We got our wait here, and we said the initial value should be zero, right? Zero means that if a consumer thread runs first, it hits this wait, and since the current value is zero, it doesn't do anything, right? It needs to wait for at least one producer to go. So, where should I put a post if I want this to proceed? Yep — after I fill a slot, right? You agree? Thumbs up, we're all good. So, I should post there. So, this should work: say all 10 consumer threads just get stuck at that wait.
Well, once I produce up to 10 things, I'd have posted this all the way up from zero to 10, and then 10 consumers could all go at once, which is fine. Yep. Yeah, so the question is: hey, can a producer loop around and essentially overwrite a slot? So, that check is part of fill_slot — I make sure in fill_slot that I don't silently overwrite something, as long as we're playing nice. Yeah — if I try to fill a slot that's already filled, I get an error. Well, yeah — that's a good thought. Do I also need to keep track of the number of empty slots, so producers don't overwrite slots that still hold data? Probably a good idea, right? Because even though each side is internally consistent, between the two sides, if I don't pay attention, they might overrun each other; eventually you get out of sync and start filling already-filled slots, which is bad. Because, let's see: if I run it right now, yeah, I get a problem where I'm emptying a slot that is already empty, and — let's see — I filled some slots and then my consumers weren't fast enough and the producers lapped them. So, I need another semaphore to make sure producers don't fill slots that are already filled. Let's create another semaphore to keep track of the number of empty slots. So, here: empty_slots. Is this good? Yeah — my initial value should be 10, or, more abstractly, my buffer size. And then here, before filling, I need a wait on it — I call it empty_slots. This way I can fill up to 10 slots, because I'm making sure I have an empty slot first. Otherwise, like I said before with fill_slot, it just keeps on going and would overwrite. So: make sure I have an empty slot. Initially, I can fill all 10 of them — the entire size of the buffer. And then I probably need a post here, right?
So, whenever I empty a slot, I have a new empty slot. So, does that work? We can see that even with a bunch of consumer threads waiting around: we filled a slot, boom, immediately it got consumed. Then we filled, what is that, six more slots, then emptied slot one, then filled slot seven because, well, that's still eligible; then we emptied slot three, then filled slots eight and nine, and we could fill slots zero, one, and two because they'd been emptied already. And, oh, look at that: the consumer goes empty slot three, fill slot three. As soon as it's emptied, we go in and put more data in it. Empty slot four, boom, fill some data; empty slot five — by now we've generated all of our data — then empty slots five, six, seven, eight, nine, zero, one, two, three, four, boom, we're done. That's all 15 elements. And even if I do it the other way around, with more consumers than producers, basically what happens is: as soon as a slot gets filled, since I have more consumer threads, it's pretty much immediately emptied. Fill slot zero, emptied; fill slots one and two, emptied; fill slots three and four, emptied. That works, right? Yeah. So, if we don't use empty_slots, what happens is we're essentially not constraining fill_slot. The way fill_slot is written, it's guaranteed to always go zero, one, two, three, and when it makes it to nine it wraps back to zero — but it won't check whether slot zero is still filled or anything like that, right? So fill_slot always goes in the same order, but it can leapfrog ahead of the consumer threads that are trying to empty, if I don't have this wait. So, let's see — let me run it the other way around.
So, in this case, it says slot is already empty because, well, my producer threads — I had two producer threads — were filling slots and essentially overwrote... oh, it did overwrite. Yeah: slot is already filled. It overwrote two slots and leapfrogged ahead, and those items are just gone — I lost them. So now I also get slot-already-empty errors. In this case, I started losing information when I started filling slots that hadn't been emptied yet, because without that empty_slots semaphore I just keep going as fast as I can; I don't care if I overwrite anything. All right, we're good. Yeah. So: why is the initial value of empty_slots the buffer size, and why is filled_slots zero? The easiest one to argue about is filled_slots, because — well, oops — the problem is we don't want to empty a slot until it's been filled at least once, right? And initially everything is empty. So the first thing we want is: if a consumer thread runs first, we don't want it to empty a slot, right? We put a wait in front of it, and the only way it will actually block — because, remember, wait blocks until the value is greater than zero — is if we set the initial value to zero, so that a consumer thread gets stopped there. That's why it's zero for this one: if we set it to one, a consumer thread could run immediately and empty an already-empty slot. And for empty_slots, why we set it to the buffer size: we don't want the producers to overwrite slots that already hold data. Even if we have, say, 15 producer threads, we don't want each of them to execute once, because then we'd write 10 elements and overwrite five, right? We can at most fill up that entire buffer with elements.
So, that's why, initially, at most buffer-size producer threads can get through, because each one fills in one slot. If it's size 10, we can fill in 10 things. That prevents us from overwriting information that hasn't been read yet. Okay, we're good? Because that's a pretty complicated example. All right, cool. Let's quickly wrap up then, and I'll be around. So, yeah, here's that problem in summary: first I kept track of the number of empty slots — up to the buffer size — and made sure producers never overwrite filled slots; then I made sure that consumers never consume empty slots by keeping track of the number of filled slots, which is initially zero. That's all in your slides. And what happens if we initialize both of our semaphores here to zero? Yeah — nothing will do anything. If both initial values are zero, well, if a producer runs first, it needs to wait on empty_slots, which is currently zero, so it gets blocked — doesn't matter how many I have, 10 of them or 20 of them. And for a consumer, if the initial value is zero, guess what, it just blocks there as well. So all my producer threads get blocked, all my consumer threads get blocked, and nothing happens. Your initial values are pretty important. Oh crap, we're over time. We used semaphores to ensure proper ordering. Okay, bye. All right.