All right, hello. Okay, so the first question is what will and won't be on the quiz. I posted on Piazza a list of topics. Last night I finally got to see the quiz, so I went through it and deleted all the stuff that we haven't covered yet, so there are no surprises where you'd have to go look at someone else's lecture notes or something like that. I also posted on Piazza a list of topics you don't need to know. Just to be clear: you don't need to understand how to use fork, exec, wait, or join, because we haven't learned them yet. I could ask lots of questions about those, but you don't know how to use them, so they can't be on the quiz. You don't need to know about zombie and orphan processes, because you'd have to create processes for that. You don't need to know about joinable and detached threads. You should, however, know that preemptible threads can execute in any order, and that there are also cooperative threads, like in lab 2, where you have to explicitly yield; we'll talk about that a bit more today. You don't need to know anything about that atexit stuff, any specific assembly, the MMU, or C calling conventions. So if you break everything down, quiz one is actually really, really easy. If I can't ask anything about creating or managing processes, that means lecture 4 won't be on it and lecture 5 won't be on it. We haven't done IPC, so lecture 6 isn't on it either, except the very end: the exception classes and polling versus interrupts. Now, the other section talked a little bit about the MMU and about processes and threads, but they went into more low-level detail that's covered in the lab you'll be doing anyway, so I thought it was better to just see how things actually work and combine the Unix calls with when we learn about them, so we already know them.
Yeah, was the post about the topics on Discord? I put it on Piazza, not Discord; I can put it on there too. So there's a list of topics on Piazza. The quiz is pretty easy: three short-answer questions, max three sentences each, then multiple choice, true/false, and multiple-answer questions where you select everything that applies. Honestly, it's pretty easy; you shouldn't be that worried about it, especially because half the stuff I talked about isn't even valid to ask. I think Ashton said he'd post a mini quiz of about five questions just to give you a taste of it, but honestly, it's pretty easy. I'd say worry about lab one instead; I was worried about that for you, and lab one was kind of bad. This is not lab one, and you'll see after the quiz that it's not that bad. It's just unfortunate that I think this is only the second time the course was offered like this, so there's really nothing to study from. But really, if you distill the content down to what you've actually learned, how many questions can we even ask? Even if I gave you an example of a short-answer question, it would likely just be on the quiz, so I'd be telling you what's on it. So I don't even know if I can give you an example: three short answers and then a handful of other questions. Can I give you an example of a short answer? They're only one to three sentences. You know conceptually what processes are; you should know what they encapsulate, at least their virtual CPUs and their own address space, and that threads are like virtual CPUs. With that, you can ask some questions: compare and contrast is always a popular thing to ask, or say what's different about them. Remember, we accessed the same memory address in two different processes and got different values. That was kind of cool.
You should be able to say why. Yeah, feel free to ask me stuff; I'll kind of blast through this lecture and then we can do general questions afterwards. Yeah, no, so it's just an online quiz on Quercus. A lot of students said they don't know a good place on campus to write it, so what we think we're going to do is book a room from eight to nine, and I will be there. We'll have a big room, so everyone will have internet, or if everyone's internet goes out, at least you'll have a witness to it. If the campus network goes out, I can say, hey, everyone writing in my room got screwed. So we'll offer that. It's optional; you can of course just go home and do it or go somewhere else, but there'll be a big room, so you know there'll be a spot for you, and I will be there. I of course can't answer any questions, but I will be there for moral support. Yeah, you can bring whatever you want. The quiz is open-everything, because you could write it at home; it's not like I'm going to say, hey everyone, connect to a Zoom call so I can watch you creepily write it. That's one, kind of an invasion of privacy, and two, just kind of weird. All right, so let's just blast through today's lecture, and then we can hypothesize some questions if we want. Next lecture, nothing will be on the quiz, and we might go over some review stuff. Yeah. So, lecture four was creating processes: all about fork and the parent-child relationship. They don't know anything about that; all they know is that you can create new processes somehow, through magic. That's it. Five is process management.
You have to know how to create a process to be able to manage it, so five is completely not on it, and for six, they didn't do IPC. You should maybe know read, write, and open, but not really; and then polling versus interrupts and the three exception classes on the CPU are the only things from lecture six. No, I don't think they really know about file descriptors. And luckily for you, I accidentally told the other class that the quiz won't contain anything from the lab. I think Ashton wanted to put some lab questions on there, but because I accidentally said they wouldn't be there, you don't have to answer those questions. Nope. Yeah, you're welcome; I accidentally said that, so I just deleted everything. All right, so let's quickly go through today's lecture. The first part of it may or may not help with the quiz: data races themselves would not be on the quiz, but arguing about threads and what order they execute in would be valid. This will be important later on, because data races are kind of the bane of your existence when you have to program concurrently, so it's good to know the definition. A data race is when there are two concurrent accesses to the same variable, so the same memory location, and at least one of them is a write. So if one thread is reading, reading, reading, and the other thread just writes, that can also manifest as a data race, where you get results you would not expect. Data races are bad. When we did that count example, we had count, which was a global variable.
We had multiple threads accessing it, all reading and writing it, and that's why we got some inconsistent results. We'll break it down even further today just to make it really clear why we got those weird results. First we have to talk about atomic operations. They are indivisible operations: any time you see an atomic operation, you know it either has not executed yet or it has executed to completion. There's no middle ground, right? ++count is actually a bunch of operations; it is not atomic, but it consists of atomic operations, and you could get context switched out in the middle of it. Again, any atomic instruction happens all at once: either it has happened or it hasn't, no in-between. That means you can't get preempted in the middle of the instruction; if you get preempted, it has either happened or it hasn't. Between two atomic instructions, of course, you can get preempted. That's all we need to know about that. Then there's this thing called three-address code. If you get into compilers, it's really, really fun; compilers are great, and hopefully I might teach that later. Three-address code is used internally in compilers to reason about optimizations without having to read assembly, because assembly kind of sucks. Basically, it simplifies everything for you: your program is represented by simple statements, each representing one fundamental operation, and you can assume each statement is atomic. That makes it really easy to reason about data races, and it's way easier than reading the assembly, because everything looks like this: a result, some variable name, equals one operand, an operator, and another operand, like one plus two. There may not even be an operator, or even a second operand, right?
That's as complicated as it gets: x = 1 + 2 is the most complex this will get. GIMPLE is the three-address code used by GCC, the compiler you've probably all been using so far. Again, this will never be covered on the quiz; it's more for your own interest. If you want to see the GIMPLE representation, you can use that flag to see all of the three-address code generated by the compiler, and like I said, it's easier to reason about than low-level assembly. So we'll look at the data race example we saw last lecture and see the GIMPLE code for it. Just to make it really clear that count is a global variable and it lives in memory, I turned it into a pointer: pcount is just a pointer to the global address that holds count. If you look at the GIMPLE of ++count, you can see it's actually three operations. The first operation is a load: it dereferences pcount, reading in whatever value it points to from memory, into a temporary. The abstraction here assumes a machine with an infinite number of registers, so D1 is just some temporary storage, and since it's a register and this is executing on a thread, it's private to that thread. The next statement is our actual increment, which happens entirely within that thread: it assigns a new register the value we just read from memory, D1, plus one. In the last step, it writes that value back out; that's our write. So let's see all the possibilities here. It's a good exercise to reason about all of our possibilities.
If I write them down, the only thing I really care about is accesses to the same variable, in this case whatever pcount points to. I don't really need to label the increment, because that's local to the thread; I'll just label the read and the write, because those are the important things. Then just assume we have two threads and all they do is execute those statements once. Of course, within a thread, things are just like everything we're used to: they execute sequentially. But since we have two threads, what can we say about which thread executes first? You don't know, right? So if you want to argue about data races, you have to consider all possibilities, and you can see how this gets hairy quickly. One possibility: my first operation is a read, so maybe I'll just pick thread one to go first. If thread one goes first, it would read from memory; initially pcount points to zero, so it would read zero. That's all it would do. At this point, you don't know what's going to happen next: that atomic operation is done, so now there are essentially two possibilities. It could execute the next instruction, or, because we're assuming a preemptive scheduler, it could get preempted at any time and switch over to the other thread. If it switches over to the other thread, say we get unlucky, thread two does its memory read; pcount still holds zero, so it reads zero, and at this point we're kind of screwed. It doesn't matter what happens next. Say thread two executes its increment: d2 equals whatever it just read, zero, plus one, so d2 would be one, right? And again, we don't know what happens next. It could switch back to the first thread, which would execute its version of the increment, and it read zero into d1.
So it would be zero plus one, equals one. At this point it also doesn't matter: thread one writes out pcount equal to one, and then, if we assume that thread is done, it switches back to thread two, which also writes pcount equal to one. Any questions about that? Yep, what do I mean by preempts? There are two types of thread scheduling: the cooperative one, where you have to call yield to give up the CPU, and preemptive scheduling, where the CPU can just be taken away from you at any point and you have no control over it. Remember, we talked about that when we talked about processes too: cooperative versus true multitasking, or preemptive, which basically means the CPU is just taken away from you. And that's valid for the quiz: how would an operating system be able to take the CPU away from you? Yeah, so in cooperative threading or process scheduling you have to yield the CPU to something, while with preemption it's just taken away; you have no choice. The operating system maintains control over processes because it can interrupt them: it has access to the hardware while your process doesn't, because of the clear separation of kernel space and user space. Which is all good stuff. All right, any questions about this? All right. Well, if we got slightly luckier and we just execute thread one all together, the normal execution, it would do something like this: the next step would be to write out pcount, which would be one. Now if it context switches over to thread two at this point, thread two reads pcount into its register, which is actually one in this case, so d2 would increment, one plus one equals two, and we'd get the correct answer, right?
So that's what we would expect to see, but you'd have to argue about every single interleaving. If you do all the interleavings: for the first thing that executes, assuming we have two threads, either the read from thread one happens first or the read from thread two happens first. Then there are two options for what happens after the read from thread one. We'll just assume the increment doesn't really matter and concern ourselves with the reads and writes: thread one could either execute its next instruction, write one, or get preempted, in which case thread two does its read. Going along the path to the right: if thread one does write one, thread one is done at that point, so there's only one order left, read two and then write two. But if after read one we got preempted, thread two does read two, and after that there are two options: either we keep executing thread two and do write two, or we context switch back and do write one. In either case the other write follows: after write one comes write two, and after write two comes write one. Those are all your options if you start with read one, if we're just arguing about that one memory address. Any questions about all those interleavings? Anyone see how this can get out of hand really quickly if you have, say, eight threads and you're doing multiple variables? It gets real hairy real fast.
Again, this is why bugs like this can persist for seven years. If we go through all of our orderings, we're assuming the compiler and CPU won't reorder our instructions or execute them out of order, which actually may happen and complicates your life even further, but we'll be nice and say they won't. If you can't reorder the instructions within a thread, it'll always read then write, and these are all our possible orderings, and we see that we only get the result we expect in two cases. So if it's equally likely to preempt after any atomic instruction, the odds aren't in our favor that we'll get the correct answer. If this was used to calculate your grade or something, you wouldn't want to use this code, because your grade is just going to come out lower. All right, so let's see how to combat this. This will be used later; it's definitely not on the quiz, but it's a good thing to know. To prevent data races, the simplest mechanism is called a mutex, and a lot of this is just for reference. You can either create them statically, essentially creating a global variable and initializing it like that, or you can allocate them dynamically by calling pthread_mutex_init, which internally you can think of as doing the malloc call and some initialization for you. Because we're all good programmers, we know we should free our memory, so you would then have to call pthread_mutex_destroy on it, which is essentially the free. And if you want to include attributes, which makes it look like the pthread creation stuff, you'd have to use the dynamic version, but that's just for reference; you can just use mutexes. Yeah, if it's initialized globally like this, you don't have to destroy it at the end, because it's a global variable; you can't destroy it.
It just exists as long as the program, the process, exists. Destroy pairs with init; it's similar to how variables work: if I call malloc, I should call free, but I don't have to malloc every single thing. And no, this isn't all the same block of code; it's just multiple examples. Okay, so this is the important bit: this is how you typically use locks. You have some code, and you wrap it in pthread_mutex_lock and unlock; the protected code in between is also called a critical section, which has some unique properties. Everything between the lock and unlock, only one thread can execute at a time, and that is the purpose of a mutex: it guarantees mutual exclusion, so only one thread can execute that at a time. And if only one thread can be in that critical section at a given time, we suddenly don't have data races, because we can't have two concurrent accesses. So everything within the lock and unlock is protected. You have to be careful to avoid deadlocks, especially if you use multiple mutexes, which we'll see later; for now we'll just assume a single mutex. Also, lock is a blocking call. We'll see its implementation next lecture, but if one thread makes it there and gets the lock, and another thread tries to get the lock, it will just sit there and wait until the lock is unlocked, and then it can grab it. If you don't want to block, you can use a function called trylock; you probably won't need it, but it might come up later. A lock behaves just how you'd expect a normal lock in everyday life: when you create one, it's initially unlocked, then you lock and unlock it; you can't lock it twice in a row, and you can't unlock it multiple times.
Yeah, deadlock is basically when you can't make any progress. If you forgot an unlock, so you lock something and then forget to unlock it, and another thread tries to acquire that lock, it will just sit there forever, because nothing is ever going to unlock it. Deadlock is basically getting stuck at a lock call: you can't make any progress, you can't move anymore. Yeah, so between the lock and unlock, the protected code in the critical section, only one thread can be executing at a time. If those are our only accesses to the global variable that had a data race, we've now eliminated our condition for a data race, which was concurrent accesses to the same memory location; between lock and unlock there's no concurrency, because only one thread can execute it at a time. If anything context switched over, it wouldn't make it past the lock; it can't execute it. Yeah, a mutex is just a type of lock, and pthread is the threading library; locks and threads are related, so they're in the same library, which is why the name starts with pthread. They could have been nicer and just called it mutex_lock or something, but it's in the pthread library, and C is kind of terrible with namespaces, so they have to name it something unique. So this is our data race example from before. In order to prevent the data race, we create a lock at the top of our code. It's essentially a global variable, so we can initialize it like that. We could destroy it, but you don't really need to; destroy just makes it invalid, it doesn't free any memory or anything like that. Then within the loop we have our lock call, then our ++count, then unlock. Now only one thread can execute that increment at a time, so we don't have any data races anymore. So if we go ahead and look at that, here's our code.
We have a lock and unlock. If we compile it and run it now, because there are no data races, we should get what we expect: 80,000 every single time, and indeed I can execute it a bunch of times and get 80,000 pretty much every time. With this data race stuff, just because you ran it a thousand times and it worked doesn't mean it's actually right, but we can actually argue about the code here: ++count is where our data race was, and now it's in a critical section where only one thread can be at a time, so we don't have a data race, and we can reason about that. Yep. Yeah, verifying that automatically is still an active area of research, because these bugs still exist; people are still finding new ones that have lasted for seven or eight years, stuff like that. Sometimes you have a data race, but it just never manifests because of the way the hardware works. Then suddenly some new hardware comes out with some optimization that changes something, and that's no longer true; because the code actually had a data race all along, now you see it. Yeah, and that is kind of the crux of this: because only one thread can be in the critical section at a time, I've essentially turned this code into a really convoluted serial version, since only one thread can execute at a single time. Yeah, so in this case, because essentially all of each thread's work is in a critical section, it'd be easier to just get rid of the whole thing, since the lock and unlock take time on top of that; I would just do it single-threadedly. That's what you would do in practice, but this is just to illustrate the example.
In real life, that's one of the things you have to consider: is it even worth doing this operation on multiple threads if there's some dependency and I have to make it effectively single-threaded anyway? Yeah, so that's a good point. Even if you bear with the silly example, one thing you could do to make this actually run in parallel, without the lock, is to have a local counter in the thread function, increment it 10,000 times, pass it back as the return value, and have the main thread combine, sum, all the threads' results together. Then you'd get your answer without having to deal with a lock. That's a perfectly valid technique; you just have to weigh the pros and cons of doing all that. Yep. Yeah, so the question is how does lock know which thread to serve, and the answer is that it's atomic: if they both arrive at essentially the same time, just one will pass. Even if it's truly in parallel, because the lock will be implemented in some memory location, and you'll need hardware support for that, which we'll talk about next lecture, the hardware will do all the hard stuff for you and just pick one. Yeah. So in this case, if I go back to my example, now we have a lock. If our lock works properly, our first call is a lock call, so we have lock, then our increment, then unlock. Thread one, if it gets picked to run first, would first do the lock; the lock is initially unlocked, so it would pass through, acquire the lock, and execute the next instruction. Now it's running, and it does the memory read. If we get unlucky and get preempted here, the other thread is going to try the lock call, and because this is a properly implemented mutex,
it's just going to sit there. It's either going to yield the CPU or put itself to sleep, because the other thread has the lock, so it's not going to be able to execute that memory read anymore. It'll just stay there, and eventually either it will yield, put itself to sleep, or just time out and waste a bunch of time trying to acquire the lock over and over again. At any rate, at some point it gives up, and we context switch back over to the thread that acquired the lock, which goes ahead and does its increment, zero plus one. Even if we context switch over again, thread two would still be stuck in that lock call, over and over, and it's not going to make any progress. Eventually we context switch back to thread one, which writes out pcount equal to one and finally unlocks. After it unlocks, thread two can pass through the lock call and acquire the lock, and then it goes ahead and reads that memory location, which has now been updated. So it reads one, executes what it should, one plus one equals two, and writes out pcount. Yeah, does everyone see how that works? Any time we context switch over, thread two is stuck in that lock call and can't make progress, so we don't have a data race. And I guess, to translate it to the cooperative threading case: if instead we had our code as before, but with cooperative threads, and we had something like this, do we have a data race now? We got a yes and a no. Why yes?
His comment was that it's possible for two threads to be doing ++count at the same time. Yeah, so you'd still have an issue as long as this was on multiple kernel threads that could run in parallel: they could still both be doing ++count. But if you have a single CPU, this would not have a data race, because it would do the whole ++count, and the only time it can context switch to another thread is by calling yield. So you can think of the three operations of ++count as being kind of atomic. Not really, they're not actually atomic, but either all three happen and then I yield, or I haven't executed them yet, or I'm in the middle of executing them with nobody else running. Yeah, so the question is: say I just copy-pasted this, gave it the name run2, and when I created the threads, I made half the threads use a lock and the other half not use a lock. Well, let's just see. There's still a data race then, because even though there might not be a data race among the four threads that use the lock and unlock, our definition of a data race is two concurrent accesses to the same memory location. You can guarantee that within those four threads there are no concurrent accesses, but there are other threads that will access it, so you still have a data race. I guess you can argue that it's not as bad, because four of them are cooperating and being a little bit nicer, so it's slightly better, but it's still a data race. Okay. All right, so let's quickly wrap it up. These are the properties you want in your critical section.
A critical section means only one thread executes it at a time, and if you actually implement mutexes yourself, these are the properties you want. You want safety, a.k.a. mutual exclusion: only a single thread in the critical section at one time. If two threads can call lock and both pass through, your mutex does not work. You also want liveness: if multiple threads reach the critical section, essentially they reach that lock call, you want one to proceed. Otherwise, if they all make it there and just get stuck at the lock call, they won't make any progress and your program won't work. As part of that, you don't want your critical section to depend on outside threads; they can mess up and deadlock, and we'll see deadlocks in a few lectures. You also want bounded waiting, a.k.a. starvation-freedom, where a waiting thread must eventually get the lock; it can't just stay there forever. One of the good points brought up is that we essentially turned our example into a single-threaded one: if you use critical sections, you're turning things that might be able to run in parallel into sequential operations. So if you want things to run as fast as possible, you want critical sections to have minimal overhead, to consume as few resources as possible while waiting, and to be as small as possible, otherwise you're just turning your program into something sequential. You also want locks to be fair: if threads arrive at the same time, you want each of them to wait approximately the same time; you don't want one thread to get skipped over a thousand times. And you want your locks to be simple, easy to use, and hard to misuse, which is definitely easier to say than to do. Similar to libraries, you want layers of synchronization.
You build them up on top of each other. At the bottom, there are hardware-provided low-level atomic operations you can build on, and we'll see what they are next lecture. On top of those, you can build high-level synchronization primitives; that's where mutexes and the other synchronization tools we'll see later live. And on top of that, you use a combination of mutexes and those other things to build your properly synchronized application that doesn't have any data races and works flawlessly. Here's one instance of a lock implementation: if you have a uniprocessor system, just one CPU core, and the only source of concurrency is interrupts, then you can implement a lock by just disabling interrupts. If interrupts are your only source of concurrency and you disable them, you've eliminated concurrency: you're the only thread that can run, because you can't get interrupted. The unlock call would just re-enable interrupts, and then concurrency can happen again. But this obviously isn't going to work if you have more than one CPU core, because the other core could run in parallel or concurrently. And of course, because interrupts are a hardware thing on your CPU, if you're writing a normal process, the operating system will not allow you to turn off interrupts, because that's its domain; the operating system wouldn't let you change them anyway. But if you are the kernel and you have a single CPU, you could implement locks like this. Yeah, this applies to processes or threads, any execution, so it's not even specific to threads. It's just concurrency, just executing some instructions; it doesn't have to represent a thread or a process. It could be operating system code.
It could be kernel code. And these interrupts come from hardware, so you can't control when they arrive; there's no cooperative yielding with hardware interrupts, they just happen, right?

All right, so let's try our best to implement a lock in software without any help. I'll create a lock and just represent it as an integer, and I'll choose values: zero means it's unlocked, one means it's locked. So if I want to initialize my lock, I'll just set that value to zero. And within lock, one implementation I can think of is: I'll loop infinitely while the lock is equal to one, because that means it's locked, so I should just try again. Of course I'm going to waste a bunch of time here, but I'll just try over and over again. Then if I break out of this while loop, I know the value of the lock is zero, so I can go ahead and change it to one, and that indicates it's locked. So when another thread comes in here, it should, you know, hit the while loop and keep on looping again and again. And then to unlock, I'll just change the value back to zero.

So, does anyone see an issue with this implementation, even with just two threads? Yeah? Okay, we'll get back to that. Yeah, so it's wasting time, but let's go ahead and argue about it. Let's argue about even two threads. So I have thread one and thread two, and say initially the value of the lock is equal to zero. Say both of these threads are going to call lock. You'd expect one to make it past and the other to get stuck in that loop, essentially, right? Looking at our lock code, it essentially looks like: while L equals one, I loop forever; and then when I acquire it,
I set its value to one, right? So if thread one gets the lock, let's label our reads and writes. This line is a read: I'm reading the value of the lock. And this line is a write, and it's to the same location, right? So say thread one executes first. It would read and get L equal to zero, because that's what it is initially. Then, if it's still executing, it would pass through the while loop, going from the read to the write as its next instruction. Executing that next instruction, it writes out that the value of the lock is now one, and then it just returns from lock. Now if thread two executes, it reads the value of L as one and gets stuck in that loop, which is what we want, right? We want it stuck there, because thread one gets to return from lock and the other thread doesn't. So only one thread made it through that lock call. Yeah?

Yeah, so that's the crux of the issue. If instead thread one got preempted right after that read, before the write happened, what would happen is: thread two, because that value hasn't been written yet, would go ahead and also read the value of the lock as zero. And at that point you're screwed, right? Now both thread one and thread two are at the point where they'll break out of the while loop, because each read that value into some register, so it's local; they both see it as zero. So they would both exit the while loop, and then they would both return. Two threads call lock and two threads pass through lock, in which case the lock was completely useless, right? It didn't do anything. So this of course is not a valid implementation of a lock. It's not safe: both threads can be in the critical section.
We can argue about an interleaving of threads that causes them both to return from lock. It's also not efficient: if one thread gets switched back in there, it just polls over and over and over again, when it should know it can't make any progress. So it's just wasting CPU cycles, which is called a busy wait.

All right, any last-minute things about the quiz that we can think of, or questions? Nothing? Yeah, honestly the quiz is not bad at all. Next week, let me know either "good job, you didn't lie to us" or "I hate you and I'm never coming back." So hopefully you come back. All right, that's it for today.