All right, so I went through the quiz last night, and it's actually pretty quick, so it's not that bad. I deleted essentially all the stuff that we haven't covered. I'm meeting with Ashvin after the lecture and after office hours just to confirm a few things, but here's some of what I wrote down, and I'll post the list of stuff you don't have to know on Piazza. Really, the quiz isn't that bad. You don't need to understand how to use exec and join, because we haven't covered how to do that. You don't need to know about zombie and orphan processes, because we haven't covered what those are. You don't need to know the goals of init, because we haven't covered what that is. You don't need to know joinable or detached threads. You don't need to know anything about that atexit thing. You don't need to know anything about the MMU, because we'll get to that later when we actually talk about virtual memory. You don't need to know x86 assembly or any of the details you have to deal with in lab 2. You don't need to know any C calling conventions, again, anything related to lab 2. And hopefully you don't need to know setcontext and getcontext, because that's also related to lab 2.

So what do you need to know? You should really know what a system call is. You should know the types of jobs the kernel, or operating system, is responsible for. Pretty much throughout the quiz, whenever you see "operating system" it essentially means kernel, because the kernel is definitely part of the operating system, and all the questions are pretty much about kernel stuff. So: system call interfaces. You should know a bit about processes, what they represent, what they do, and how you can use them. You don't need to know any coding details about them, so you don't need to fork bomb something, and I won't ask how many processes get created for a given piece of code or anything like that.

What we'll be doing at the beginning of today is threading, so you'll need to know cooperative scheduling and preemptive scheduling. That's just conceptually what we talked about last lecture, and we'll talk about it again in today's lecture. You don't have to know about the locks, but the threading examples, how threads work, how they might execute, and what problems you may encounter, would be completely valid questions. And anything from the introduction, like what an operating system does, you should know: address spaces and all that fun stuff we saw examples of. Yeah, preemptive scheduling, we'll see that again today.

All right, so I'll try to power through today's lecture and pretty much concentrate on stuff that's definitely on the quiz, and we'll leave the rest for later. The locks won't be on the quiz, but locks are how you actually synchronize between threads, and what threads do is definitely on the quiz, so let's go ahead and start. And yeah, I'll post the whole list on Piazza after I meet with Ashvin.

So, what we saw last lecture, when threads were battling over that global memory location:
It's called a data race. There's a formal definition for it that you don't need to know for this quiz, but for quiz two and throughout all your programming with threads, you'll have to be able to identify data races and also prevent them, because they're the source of all the nasty bugs we see. A data race is when two concurrent accesses touch the same variable and at least one of them is a write. In that case you can have inconsistent data and results you do not expect.

To talk about when data races happen, we have to know what atomic operations are. We touched on atomic operations last lecture, but just to be clear: they're indivisible operations. The operation either completes entirely, the instruction executes, or it doesn't; there's no in between. If it's a write to memory, you won't see a partial memory write or anything weird like that. It'll either be written or it won't be. So it either happens or it doesn't, and that means it can't be preempted partway through. And remember what we were talking about with processes and when they're context switched in and out: the kernel maintains control. Preemption is just taking away the CPU from something. You're basically saying: hey, I can take the CPU away from you at any given point and then maybe context switch over to a new thread or a new process.

Yep? Yeah, so the question is, the last slide just says concurrent; what about if they're in parallel? So if they're also in parallel, if you think of the global state, they still happen in some order, because it's only one physical memory location, for example. One will happen right before the other, even if it looks like they happen at the same time.

Yep? Yeah, so a move instruction would just be atomic. Either whatever you moved is in that register or it hasn't happened yet; there's no in between where only half of the register gets loaded or something like that. You can think of pretty much any assembly instruction you see as atomic: it happens all in one go, it either happens or it doesn't. But if you have to implement multiply yourself, that takes multiple assembly instructions, and if that was executing in a thread, the CPU could be taken away from the thread after any one of those instructions and another thread could start executing, and you might be in some weird state.

Yep? Yeah, so preemption just means you don't control when the CPU is taken away from you and when you get context switched out. Remember, the only other scheme we saw yesterday, which is what you're doing for lab 2, is cooperative threads, where threads have to willingly give up the CPU. So there are pretty much two strategies, and we saw them when we talked about processes too. Either you willingly give up your CPU, which is cooperative, and the problem with that, of course, is that if any thread or process doesn't give up the CPU, it can run however long it wants; it can just never yield the CPU at all. The other option is preemptive, where the operating system, or in the case of lab 3 your threading library, maintains control and takes the CPU away, so the thread doesn't get to control when it yields.
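Just to pin down the example we keep referring to, here's a minimal sketch of the kind of racy counter program from last lecture. This isn't the exact demo code, just the same shape, using pthreads; the names and the 8-threads-times-10,000 split (giving the 80,000 total we expect) are my assumptions:

    #include <pthread.h>
    #include <stdio.h>

    static int count = 0;              /* shared global: this is what races */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 10000; ++i)
            ++count;                   /* read, increment, write: NOT atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t[8];
        for (int i = 0; i < 8; ++i)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 8; ++i)
            pthread_join(t[i], NULL);
        printf("%d\n", count);         /* expect 80000; usually prints less */
        return 0;
    }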
Yep? Yeah, so the question is what a preemption actually is. When we talked about being able to context switch between processes, and how Linux works with true multitasking where it's preemptive: the kernel sets a timer interrupt and essentially interrupts itself to regain control of the CPU. It can do it with other types of interrupts too. Essentially it has access to special CPU instructions that a user process does not have access to, and it can interrupt itself and get control of the CPU back that way.

Okay, so this is kind of an aside, getting into compilers. There's a thing called three-address code in compilers that just makes code easier to analyze and read than raw assembly. It's mostly used for analysis and optimization by compilers, which you might see if you take a compilers class, and compilers are really, really great, so this is kind of a preview. Each statement essentially represents one fundamental operation, and for the purposes of this course you can assume each one is atomic. Generally they map to instructions, but who the heck wants to read assembly? We use this because it's useful for reasoning about data races: it tells you what's going on in the hardware without you actually having to read assembly. The statements all have the same form: some result equals at most one operand, an operator, and another operand. That's as complicated as they get, so they're dead simple, and you can represent an entire program using statements that small.

GIMPLE is the three-address code used by GCC. If you want to dig into it, you can use these flags; again, this isn't part of the course, it's mostly for interest's sake, but it's generally a lot easier to read than handwritten assembly.

So if we look at that data race example we saw yesterday, the ++count, and remember I was alluding to it being three operations: if you run it through GIMPLE and see what ++count is actually represented by, you get something like this. D.1 is just some temporary storage. The way this abstract machine works is that it assumes it has infinite registers, so D.-some-number is a register, and there's an infinite supply of them. The first operation it does is load from memory.
So it dereferences that count, because remember, that's just a global variable, so it has to read it from memory, and it stores it in a virtual register. The next operation is the actual increment, which happens locally, just on that thread: a new temporary register equals D.1 plus 1, so that's our increment operation. And the final operation is just to write out the result: D.2 is the result of incrementing whatever we loaded, and this is our write operation.

Yep? Yeah, but you don't really care about the stack, because the stack is local to your thread, right? Yeah, it can be interrupted, but it would be interrupted while it's extracting some value from the stack, and it will just resume and be fine, because that's independent of the other threads. Yeah, so this abstracts away all the messy details.

Okay, so let's go ahead and see. Oops, I probably shouldn't show that. So these are our three instructions at the front, and initially the value at that address is zero, and we have two threads running. If we assume preemptive scheduling, where we don't know which thread will run at any time, and they can be interrupted at any point, then we have two threads each running those three instructions. Within a thread they have to run sequentially, but between the two threads you have no idea what's going to happen. So it's now a race: you don't know which thread runs first.

Initially we can say, hey, thread 1 runs, and it executes this read instruction. If it runs that read, it dereferences whatever pcount is, which is zero right now, so that thread reads the value zero. At this point it could either continue on to the next instruction, or it could get preempted and the other thread runs. If the other thread happens to run right after that load, it would also read the value zero; remember, its D.1 is local to itself.

Now at this point it doesn't really matter who runs first. Thread 1 could run again, and it would increment whatever value it read, so D.1 plus one equals one, and then, say it executes the write instruction, it writes out pcount equal to one, so that memory location is now one. Then it switches back to the other thread so it can finish. That thread computes D.2 equals D.1 plus one, which is again one, so it again writes out pcount equal to one.

Right? Any questions about that? Because this is preemptive, the CPU can be taken away at any point. We could also have thread 2 execute first and then thread 1, and we'd get the same result when they both do their writes. And again, the order could have been slightly different, so this one could have run second and executed like that, but that wouldn't have changed the result either, right?
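Written out, the three-address form and the lost-update interleaving we just traced look roughly like this. The D.1/D.2 names are the compiler's temporaries as on the slide; the exact GIMPLE output will differ a bit:

    D.1 = *pcount;        /* read: load the global from memory */
    D.2 = D.1 + 1;        /* increment: local to this thread   */
    *pcount = D.2;        /* write: store the result back      */

    Thread 1: D.1 = *pcount;     reads 0
    Thread 2: D.1 = *pcount;     preempts thread 1; also reads 0
    Thread 1: D.2 = D.1 + 1;     computes 1
    Thread 1: *pcount = D.2;     memory is now 1
    Thread 2: D.2 = D.1 + 1;     still using its stale 0, computes 1
    Thread 2: *pcount = D.2;     writes 1 again: one increment is lost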
But if thread 1 ran to completion, say we did all of these, whoops, so if we did all of these and actually executed thread 1 all the way through, then when thread 2 reads the value of pcount here, we've already written it, so it's now equal to one. And when thread 2 executes the next instruction, the increment, it actually increments to what we expect, so it gets two, and in this case it writes out two, which is hopefully what we expect.

So there are a bunch of different orderings. The only things we care about, when arguing about data races, are accesses to the same location, so what we really care about are the reads and the writes. If we argue about all the combinations that could happen, there aren't that many. Thread 1 could read first, and at that point only two things can happen: either it writes immediately, or it gets preempted and the other thread's read executes. And then after that, the next event is either write 1 or write 2. So you can go through this and enumerate all the interleavings of the two threads we could get under preemptive scheduling, where you don't control anything: between any two atomic instructions you can get interrupted and the other thread can execute. You can see how this gets out of control fast, because you'd have to argue about every single interleaving of the instructions.

Instead, if we had cooperative multitasking, and we just had these instructions, our ++count, and after them you had a yield, you wouldn't run into this scenario. If you do all three operations, the whole ++count, and only yield the CPU afterwards, you won't have any data races. Yeah, and it would still be concurrent, right? It could do this, switch to the next thread, that one could add one, add again, and then switch back and the other thread could add a few more times.

Yes? Yeah, because remember, with cooperative multitasking you have to yield the CPU. So if you make sure you do those three operations before yielding, and the only way to lose the CPU is by yielding it, then you can avoid the data races. But you still get concurrency, because as soon as you yield it can switch to another thread.

Yep? Yeah, so for this to really illustrate concurrency, those three operations would be in a loop, and both threads would be executing them, like in our example, 10,000 times. So you can switch between which thread does the work; it's not really concurrency if each thread only increments once and then dies. Yeah, you can pause one, do some work on another, pause, and go back and forth.

Yep? Yeah, so the question is: if I just have a single CPU anyway, what's the benefit of threading? It's doing the same amount of work, just context switching all the time. But sometimes it's actually easier to write threaded programs if you want to do multiple things at once. Instead of, okay, I'll do this, save my state, restore it later, and carry on, you just use a threading library and it's all done for you.
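As a quick sketch of that cooperative version: assuming a cooperative threading library with a yield() call like the one you're building in the lab (yield() here is my hypothetical stand-in for whatever the library names it), something like this avoids the race, because nothing can take the CPU mid-increment:

    /* Cooperative sketch: yield() is a hypothetical library call that
       voluntarily gives up the CPU. Nothing else can preempt us. */
    for (int i = 0; i < 10000; ++i) {
        ++count;    /* all three operations finish before we give up the CPU */
        yield();    /* only here can another thread run */
    }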
Also, you might field multiple requests, do multiple things at once, and it just makes the programming much, much simpler. And if they're kernel threads instead of user threads, you could also have a thread do some I/O operation that blocks, and the kernel can run another thread, so it can actually interleave work while you're waiting for hardware or something like that. Whereas if you had a bunch of user threads, you'd be screwed anyway, because the whole process would be blocked. And yeah, that whole user threads versus kernel threads thing is a very good thing to know for the quiz.

So, to analyze data races in general, you have to assume all preemption possibilities. Whoops, I didn't switch slides. So, to analyze data races you have to assume all possible preemptions. I called the reads R and the writes W, with a number for which thread they came from. So read 1 (R1) and write 1 (W1) always have to happen in that order, and read 2 (R2) and write 2 (W2) always have to happen in that order, because each pair is within the same thread. We'll also assume no reordering within a thread; your compiler can actually do a bunch of weird things, but it wouldn't in this case anyway.

So these are all the possible orderings. If you start with R1, the next operation is either W1 or R2. After W1 come R2 and then W2, and at that point the order doesn't matter: the count will be two. But if we do our two reads at the beginning, it doesn't matter what order the writes are in; we're just going to get one as the answer after both threads run. Then there's the execution where thread 2 completes first and then thread 1 goes, in which case we get the count correct again. And then there's the case where the second thread reads first and then the first thread reads, in which case you just overwrite the same value with one again, and the count is going to be off.
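Spelled out, with R1/W1 from thread 1 and R2/W2 from thread 2, and each read required to come before its own thread's write, the six possible interleavings are:

    R1 W1 R2 W2   -> count = 2  (thread 1 finishes first: correct)
    R2 W2 R1 W1   -> count = 2  (thread 2 finishes first: correct)
    R1 R2 W1 W2   -> count = 1  (both read 0: lost update)
    R1 R2 W2 W1   -> count = 1  (both read 0: lost update)
    R2 R1 W1 W2   -> count = 1  (both read 0: lost update)
    R2 R1 W2 W1   -> count = 1  (both read 0: lost update)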
So if you assume equal probability of getting interrupted between any two instructions, the odds are not in your favor that you get the result you expect: four of those six orderings give the wrong answer.

Okay, so this is an introduction to mutexes, which won't be on the quiz, but it's the next topic and what you'll need to know for lab 3. Essentially, to prevent data races you can use something called a mutex. And if you want to ask questions about the quiz, we should have plenty of time; we can kind of blast through this.

You can create mutexes statically or dynamically. Statically is just like a global variable, and dynamically is on the heap, of course. You can do this to initialize it as a global variable, and if you want to pass attributes, you have to use the dynamic version. We'll just see how to use a mutex.

Yeah? Yeah, so pthread_mutex_init, and it can allocate under the hood for you; it's a bit of a different API than we're used to. Then you have to call pthread_mutex_destroy on it, and that will free things under the hood for you. So yeah, you can say it lives in global space if you want. It has a bunch of weird technical names that don't make much sense; you can just say global variable for now, or global space, or that it lives as long as the process.

Okay, so that's creating a lock. How you use a lock is kind of like a real-life lock: you can lock it and unlock it. By default, when you create it, it's in an unlocked state. So you'd have some code, and then you'd call pthread_mutex_lock, and after that point no other thread can make it past the lock call, so this code is protected. If multiple threads come and reach this lock call, what lock guarantees is that only one makes it past. So you don't have any data races, because there's only going to be one thread in that protected code at a time: only one thread will ever be between the lock and the unlock. As soon as you call pthread_mutex_unlock, that means, hey, this thread is done with the lock, and another thread can go ahead and enter that protected code. So everything between lock and unlock is protected; only one thread can be running that protected code at a time.

A topic we'll see in a later lecture is deadlocks, which are essentially a way you can brick your program; that's another complication of having to deal with all this threading stuff. There's also this: lock will just sit there and block, or wait, until it can acquire the lock. If you don't want that, there's a function called trylock that will just try the lock and tell you whether you got it or not, but we don't need to know that for now.

So if I go back to my example. Yep? So no, it's not an atomic instruction; it's just that only one thread can be executing that code at a time. Other threads can't execute it, but it wouldn't be atomic, because if another thread was doing the same thing without acquiring the lock, you'd have the same problems as before. Yeah, yeah, until it hits unlock.

So if we go back to our counter example from before: we can fix the data races by doing something like this. We declare a mutex that lives as long as the process lives, and then we protect our ++count, because we know that contains our data race. So we have a lock here and an unlock here, and that way we get the same result every single time. We can go ahead and see that too. So here it is, the exact same code except I've added a lock and an unlock, and now if I execute it, I get 80,000 like we'd expect, every single time, right?
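As a sketch, the fix is just the racy worker from earlier with the lock and unlock added around the increment. Again this assumes the same shape as the demo rather than its exact code; the statically initialized mutex is the "global variable" style of creation mentioned above:

    #include <pthread.h>

    static int count = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* static creation */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 10000; ++i) {
            pthread_mutex_lock(&lock);    /* only one thread gets past here */
            ++count;                      /* the critical section           */
            pthread_mutex_unlock(&lock);  /* let the next waiting thread in */
        }
        return NULL;
    }

    /* Creating and joining 8 of these threads, as before, now reliably
       prints 80000. */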
So now we don't have a data race, but what we've done is essentially make it kind of single threaded, because if that's the only operation we care about, and between the lock and unlock there's only one thread at a time, then essentially only one thread ever executes at once and we don't have any parallelism whatsoever, right?

Yeah? Yeah, so the question is whether there's overhead associated with that, and yes, there is.

Yep? Mutexes. Yeah, so the question is, I assume you can have multiple mutexes, what does that do? We'll see that later, because multiple mutexes gets really, well, not really really, but kind of bad.

GIMPLE? No, it's just a compiler representation.

Okay, so if we go back to our demo, on the tablet, so now if we back up, do do do, oh god. So this was our data race example, right? But now, with locks and unlocks, the first thing thread 1 does is call lock here. Now, if a context switch happens here, the first instruction the other thread executes is lock, and because thread 1 already holds that lock, that mutex, the call won't return. It essentially waits a while and then yields the CPU. So this next instruction never executes, because the thread can't get past the lock. It just sits there and waits, and eventually it context switches back over to thread 1, which executes this, then executes the increment. And even after the increment, if we switched back, the other thread would hit the lock and just sit there again, still waiting, so it wouldn't do anything if thread 1 got preempted at that point. Then thread 1 writes out the memory location and calls unlock.

Yep? Yeah, so the question is whether, if there was some code after the unlock, the other thread could go do that while it's waiting for the lock. The answer is no: within a thread, execution is sequential, right? It hits the lock and it waits until it can actually get the lock. Yeah, it depends on your lock implementation: it could eat CPU time, it could be put to sleep, it could be whatever your lock implementation does.

Yep? Yeah, so you can think of lock itself as atomic. If the threads are in parallel and both hit lock, it's atomic, and the smart people who made your CPU make sure only one of them actually gets the lock. Yeah, because generally your lock is represented by some memory location, so there has to be some ordering, even if they execute in parallel.

Yep? Yeah, so any protected code will only have one thread executing it at a time, and that way we don't have data races, because we're not fighting with any other threads over that protected code. The more general name you'll see for it is a critical section.

Yeah? Yeah, so the purpose of lock kind of sounds like it undoes the purpose of multithreading, but you're going to need locks to get the correct answer sometimes. If you have a data race, you're just going to get inconsistent results, and if you get the wrong result faster, is that actually better?
Yeah, right, so you still need to get the correct result, and locks help you get the correct result. There is a way to just make everything parallel and let it go, and you'll get the wrong result, but you'll get it fast. It won't be right, though. Like, would you want the code without locks to calculate your grade for the course? Hmm, it only gets lower. Remember, our count was supposed to be 80,000, and when we ran it a few times it was just randomly way lower. It's 80,000 out of 80,000, yeah. Yeah, sorry. Oh, I initialized count; if I didn't initialize count, your grade could be some really high number. Maybe that's like rolling the dice.

Okay. All right, so that's adding a lock to prevent data races; we'll see that again later. The code between the lock and unlock, again, is called a critical section, and that again means only one thread executes it at a time.

So a lock guarantees safety, a.k.a. mutual exclusion, which just means one at a time: there's only one thread in a critical section at once. That's what you need in order to actually implement locks. The other property is called liveness: if multiple threads reach the critical section, meaning they're all trying to call lock, then for your lock to actually be useful, one of them, and only one of them, has to proceed through that call. If none of them proceed past the call, you won't have a data race, but your program won't work either, so that's not very good. Also, the critical section shouldn't depend on outside threads; you can mess that up really easily and cause a deadlock, where threads don't make progress, and we'll see that next lecture. We don't have to worry about deadlocks today. The other property you must have is bounded waiting: if a thread reaches a lock call while another thread holds the lock, the waiting thread must eventually proceed, otherwise your program might get stuck.

There are other properties you want when you program with critical sections. That was a good point earlier about, hey, aren't we just making our program slower? The goal when writing programs like this is to make the critical sections as small as possible, because anything in a critical section can't run in parallel, which is going to be very, very slow. You want locks to have as little overhead, or wasted time, as possible. You don't want to consume resources while waiting: if you're just sitting at a lock call using a hundred percent of the CPU, you're not doing any useful work, and you want to avoid that. You also want your locks to be fair: each thread should wait approximately the same amount of time, and you shouldn't favor one thread over another, so you hopefully get a more even distribution of which thread actually does the work. And of course, like anything in programming, all your interfaces and locks should be easy to use and hard to misuse, which is easier said than done.

So, similar to libraries, whenever you build applications that use multiple threads, you'll want different layers of synchronization. Like that comment earlier: hey, shouldn't pthread_mutex_lock itself be atomic?
Well, yes, it would be atomic, but you can imagine that any implementation of a mutex we can think of, by itself, probably has issues, so you need some hardware-provided atomic operations in order to implement locks. On the next slide we'll see our own attempt at creating a lock. Above the hardware, you want high-level synchronization primitives, like mutexes, that are much easier to use, and building on those, you hopefully have your properly synchronized application.

Yep? So the question is what happens if you just never call unlock. So if I just delete this line: if I never call unlock, then a lot of bad things happen. No other thread will ever be able to get the lock, because it stays locked. Even if I have two threads and they both make it to the lock call at the same time, and thread 1 gets the lock, thread 2 is going to wait there, and if thread 1 never calls unlock, thread 2 just waits there forever, and the program never finishes. Also, if you lock a lock that's already locked, that is undefined behavior, depending on the lock.

Yeah? Yeah, so in this case, because it's in a loop, thread 1 could lock, unlock, and still be running, so it could just get the lock again, do the operation, and unlock again. You don't know how many times it goes and gets the same lock, again and again. Then it would context switch over and thread 2 might get the lock, and it could also get the lock multiple times in a row. And again, if a thread locks an already-locked mutex, that's undefined behavior and something bad will happen.

Okay, so let's see what we could do to implement a lock. If we have a uniprocessor system, just one single core, then our only source of concurrency is interrupts. So if we want to lock something, we just disable concurrency, and you can disable concurrency by disabling interrupts. This is what our implementation could look like if we have a single core and our only source of concurrency is interrupts: you just disable interrupts, and that's your lock, because now you can't get interrupted after the lock call, and your unlock is just to enable interrupts again.

Yep? Yeah, that's a good point. So the question was: can user code stop the kernel from preempting it? And the answer is no. This would be the implementation inside the kernel. This isn't a valid implementation if you're implementing locks yourself, because of course the operating system, the kernel, will not let you touch the hardware like that; as a lowly user process, you're not allowed to do it. It's also not going to work on multiprocessor systems, so this only works if you yourself are implementing locks in the kernel. You could expose a system call so users could access it if you really wanted, but that's the only way to do it.

Yep? So you need a uniprocessor, right: if you only have one core and your only source of concurrency is interrupts, then if you disable interrupts, you don't have any concurrency anymore, so you don't have any issues, right?
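A sketch of that kernel-only idea. The two primitives here are my hypothetical stand-ins for the real privileged instructions (for example, masking interrupts), which user processes cannot execute:

    /* Uniprocessor, kernel-only lock sketch. disable_interrupts() and
       enable_interrupts() are hypothetical names for privileged
       operations; this does not work in user space or on multiple cores. */
    void lock(void) {
        disable_interrupts();  /* no interrupts -> no preemption -> no concurrency */
    }

    void unlock(void) {
        enable_interrupts();   /* preemption is possible again */
    }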
Yep. Okay, so let's try to implement a lock ourselves, in software. We have an init function and a lock function, and say we assume the user allocates an integer for us; we'll represent our lock as an integer. Initially we set the value of our lock to zero to say it's unlocked, and we'll use the value one to say it's locked. So our lock function could look something like this, which sounds reasonable: the while loop executes as long as the lock value is one, so basically, while it's already locked, it just tries over and over again until it sees that the lock is unlocked. The only way to get past this while loop is if the lock value is zero. Everyone on the same page for that?

Okay, so if it makes it out of the while loop, it then writes the value one to indicate that the lock is now locked, and that's your lock. Then hopefully no other thread could call lock and get the lock, because it would go into the while loop, and now the lock value is one, so it would just spin through the loop infinitely and waste a bunch of time. And in our unlock, all we do is set the lock back to zero. So, does anyone want to tell me what the issue is here? Yep. And what's an example of the data race?

Okay, so let's argue through the lock. The lock function looks like: while the lock value equals one, spin; then set the lock equal to one. Initially, let's assume the lock is unlocked, so it's equal to zero. So what could happen? Say we have thread 1 and thread 2, and we'll assume we're on a single core, so only one operation happens at a time. What's an example of our data race? You want thread 1 to execute this read, right, the check in the while loop. So say thread 1 executes first: it does the read, and what value does it read? Zero, right, I said initially it's unlocked. So it reads L equal to zero, and since zero does not equal one, it breaks out of the loop, and the next thing thread 1 wants to run is this instruction right here: it wants to write one and change the lock value to one.

While it's sitting there, about to do that, and because this is a preemptible thread, we'll assume preemptible ones, it could context switch over to thread 2. Now thread 2 executes, comes in, and does the read. What value does it read? Zero, right. So now both thread 1 and thread 2 are at the same point, right here, and they're both going to write the value one. It doesn't matter which executes first: one writes L equals one, the other writes L equals one, and both threads make it past the lock call. So two threads called lock, two threads returned from lock, and now they'd both be executing the critical section at the same time, which means our lock implementation is completely invalid, right?

So this was our attempt at making a lock. It seemed like a good idea at the time, but we can see that threading is actually kind of hard, and it doesn't work. Any questions about this?
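Here's that broken attempt written out, a sketch matching the walkthrough rather than real lock code; the comment marks the exact race window we just traced:

    /* BROKEN software lock: do not use. 0 = unlocked, 1 = locked. */
    void lock_init(int *l) {
        *l = 0;                 /* start unlocked */
    }

    void lock(int *l) {
        while (*l == 1) { }     /* spin: this is the READ of the lock word */
        /* RACE WINDOW: we can be preempted right here, after seeing 0
           but before writing 1, so another thread can also see 0 and
           both threads end up "holding" the lock. */
        *l = 1;                 /* this is the WRITE that claims the lock */
    }

    void unlock(int *l) {
        *l = 0;
    }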
Yep? Write the one before the while, here, equals one? Yeah, so then both threads come in, they both try to write one, then they both read one, and now neither of them is ever going to return from lock. Yeah, which is kind of like a deadlock, although if the lock worked properly it would have been fine. So either we have a situation where both threads can make it past, which is not good, or neither thread can make it past, which is also not good. We want exactly one to make it past.

Okay. So in this case the implementation is not safe, since both threads can actually be in the critical section at once, so it's not a valid lock implementation. It's also not efficient, because we have that while loop just reading memory over and over again. In the next lectures we'll actually see what doing this properly looks like.

So, any questions about anything, or related to quiz one? Are we good? Yeah, yeah, I'll post the list of content you don't need to know.

Yep? Yeah, so a good way to study for the quiz is just going over the lecture notes. I'm not going to put any questions on there about things we haven't covered, or at least talked about. So anything involving processes, all the fun stuff we did: the programming details you don't need to know, but any conceptual things are very valid. Yeah, you don't need to know anything about lab 1. Lab 1 is supposed to be review, C review, which apparently is really funny, because there's no C review in it, there are hash table questions. I don't know what will be on it. Yeah, well, don't worry about the quiz; I'm pulling for you. There's also Thursday; if there's a big panic we can do review on Thursday, or we'll see. Okay. Yeah, also, I have office hours now if anyone wants to follow me back. I'll be in my office till like 4:20.