All right, welcome back to 353. So today we get to talk about how to implement threads, because there are multiple options. One could be in the kernel. The other could be fully in user space. So there's a few multi-threading models we can do: either user threads or kernel threads. User threads are completely in user space. They're part of your process, they're running in user mode, and the kernel doesn't really know anything about them. In this world, the kernel just assumes that your process has a single thread, and whatever you choose to do with that thread is up to each process. Kernel threads are implemented in kernel space, and the kernel manages everything for you. It can treat threads specially: hey, if you have multiple threads in a process, maybe I can run them in parallel, because I know about them. If they're purely in user space, the kernel doesn't really know anything about them. It just thinks there's only one thing executing at a time, so nothing would run in parallel. That's the major drawback between them. So no matter which implementation you pick, just like before with processes where we had a process table — or process control block, whatever you want to call it — thread support requires some type of thread table. This could be in user space or kernel space, depending on the implementation we want. If we had user threads, we would also need a runtime system to determine scheduling, because the kernel doesn't know about the threads; the threading library itself, since it runs entirely in user mode, has to implement its own scheduling. The kernel controls when to schedule the process to run, and then that process controls which thread to run by itself. So it needs to implement its own scheduling algorithm.
In both models, a process can contain multiple threads. The only difference is whether the kernel actually knows about them, and the major difference for you is whether or not they can run in parallel. So why would we want user threads? The idea behind them is simply speed, especially back in the day when we didn't have multiple processors in our machines — we only had one core anyway. System calls are slow, so if we can avoid them, just avoid them. For pure user-level threads with no kernel support, they're going to be very fast to create and destroy. Like I said: no system calls, no explicit context switches. But the drawback is that if one thread blocks, the entire process blocks, because the kernel can't distinguish. Even if you have two threads, and one is waiting on I/O — say it's doing a blocking read system call, so it's not going to return until the kernel has some data for it — technically the other thread could run, because it's not blocked on anything. But since the kernel doesn't know about your other user thread, it's not going to bother to schedule it. It just assumes, hey, that process is blocked, too bad, so sad, even though you could make progress doing something else. If we have kernel-level threads, they're going to be slower to create — we have to do a system call — but the nice thing is that even with a single CPU core, if one thread blocks, the kernel can just schedule another one. So if we had a web browser and some blocking call that was just waiting for a request, we could go ahead and schedule another thread that can actually do something.
So that is another big benefit: even if you have a single core, you probably want kernel threads. But no matter what you have, your threading library is going to run in user mode. The pthread library, for example, is just a normal library, and it abstracts this for you. That threading library decides how to map its threads to kernel threads — that's what handles the abstraction. Your threading library has three main options. Many-to-one means many threads in the library get mapped to one kernel thread. Essentially the kernel just sees a process; it doesn't really have a concept of threads. All my user threads get mapped to one kernel thread, or just one process, if you want to think of it that way. The other is one-to-one: each thread in my library corresponds to a kernel thread. The kernel handles the scheduling for you and everything like that; the library pretty much just gets an identifier. The pthread library is an example of one-to-one — it uses kernel threads. Basically, all the pthread library does is handle the actual system call for you, and if you dive into that pthread_t type, you'll find out it's just an integer. So it's kind of like a process ID. Yeah? So you mentioned that these all run in user mode, but they all map to the kernel? Yeah, so you're going to use some type of library, like the pthread library or something like that. But, for example, in the one-to-one situation, that's essentially creating a new kernel thread for every thread you create? Yeah — one-to-one, if I do a pthread_create, that creates a kernel thread. But someone could implement a threading library — cough cough, you — where, when you create a thread, it creates a user thread and the kernel has no knowledge of it.
So this just depends on what the threading library does. We're assuming we're using a threading library, because going at it with raw system calls is probably a bad idea — you probably want thread create, thread join, things like that. There's also a hybrid approach: many-to-many. Many user-level threads map to many kernel-level threads. Typically you do this to try to get the best of both worlds. If creating kernel threads is expensive, maybe I create only as many kernel threads as I have CPU cores, and then I create user threads on top of that and switch them between those kernel threads. If I create a thousand threads, maybe I only have eight kernel threads, and I map some number of user threads onto each kernel thread. So there are different ways to go about it. Many-to-one is the pure user-space implementation. It's going to be fast, as outlined before, and it's portable, because it doesn't depend on the system — it's just a library. You don't have to do any system calls or tell the kernel anything about your concept of threads, so if you write it, you can use it on Windows, Mac, Linux; it doesn't really matter, because it's all running in user space. The big drawback is the same thing we had before: if one thread blocks, the whole process gets blocked and can't make progress anymore, because the kernel doesn't know it could execute something else, even if we have multiple threads. Also, because the kernel only knows about that process with its one main thread, it can't execute anything in parallel. So it's kind of a non-starter in today's world, where even my watch has two CPU cores. Not good for today's world. And that is, again, because the kernel only schedules the process to run — it has no concept of your threads.
The kernel can't tell that you've created however many threads you did. One-to-one just uses the kernel thread implementation. This is basically what pthreads does: it's a thin wrapper around the system calls to make them easier to use. The underlying system call is actually clone with a bunch of weird parameters — it's a clone where you tell it not to create a copy of the page tables and all that stuff. So instead of remembering all the options you have to give to the stupid system call, you just call pthread_create and it does the system call for you. That way I get the full parallelism of my machine, I don't have to implement anything weird, and the kernel just handles the scheduling for me. It knows about the threads, so if one thread gets blocked, only that thread gets blocked; it can resume other ones. Yeah? So does the kernel schedule processes, or does it schedule threads? So the question is: what does the kernel actually schedule? Technically, it pretty much schedules threads. Threads are the execution unit, so that's what it schedules to run on a CPU. Does that mean that if you have double the threads, you get essentially double the execution time on the CPU, because of the way the priority works? Yeah — if you have double the threads, you essentially have two processes running, right? So if you have two things running, it needs to balance that: as many things as many threads as you give it. So the question is, is there a limit to the number of threads a computer can have? Same answer as: is there a limit to the number of processes a computer can run? There's going to be a limit eventually — even if you had unlimited process identifiers, you'd run out of memory at some point for page tables.
For threads, you can create a lot more, because they don't need page tables or anything. But like everything, there's a limit, because each one takes some memory. You can even run htop and look at how many threads are currently running. Some programs — I forget, some really bad ones — create like a thousand threads because they don't really know what they're doing. It just depends on the program. So with one-to-one, again, it does a system call each time I do a pthread_create, and you lose some control. Maybe you wanted to schedule your own threads, do something above and beyond what the Linux scheduler does, have more control. But this is the implementation for pthreads; we assume this for Linux. The other one is many-to-many. The idea here is that we have more user-level threads than kernel-level threads. Again, like I said, if they don't block that often, then you just have as many kernel threads as you have cores on your machine. So you can get all that parallelism, but you don't have to do slow system calls every time. The idea is to get full parallelism while doing as few system calls as you can. But this leads to a very, very, very complicated threading library. For some God-forsaken reason, Java has decided to launch this great new feature called Java virtual threads, where they say it's way faster than pthreads, and it uses this technique. But it's typically a bad idea. People have tried this before — this isn't the first time. Typically the library gets really, really complicated, and depending on your luck, you may see very bad performance. The rules still apply. Say you have four threads, all mapped to the same kernel thread, and three of them really, really want to use the CPU — they can always run and never block — and one of those threads does a blocking system call.
Well, guess what? When that one thread blocks, since the kernel doesn't know it could run any of the other three, it blocks every single thread that got mapped to that kernel thread. And guess what? Your performance sucks. Then you say Java virtual threads are the worst idea you've ever heard in your life. It's one of those things where it depends on the luck of your mapping. Sometimes you run your program and it's slow as hell, and sometimes you run it and it's fine, and then you spend a year of your life figuring out why. And the answer is: you got unlucky, because you used Java virtual threads — I don't know, people tend to do this every 20 years when they think it's a good idea. So if you have to use Java, and Java virtual threads, it uses this, so beware. Yeah, you can tell I don't really like Java, which is true. So threads do complicate the kernel a little bit, because a process can have multiple threads in it, and that has to interact with things we've seen before. For example, if we have a process with eight threads in it and it forks, what the hell should happen? Do we get a new process that also has eight threads, or does something else happen? What state are they in? Say we call fork: we pause the process, and then whatever state all those threads are in at the time of the fork, the new process gets them in that state — and then how the hell are you going to control what they're doing? You have no idea what state they're in. That could get out of hand really quickly. Also, it would make your fork bomb very effective if it also copied all the threads. So the rule on Linux is: if you fork, you will create a new process with only a single thread in it, and that single thread will be a copy of whichever thread called fork.
Because of this, your new process again only has a single thread. And if you did a fork in that thread function — in your thread run function — well, eventually when it exits, it will hit pthread_exit, because that's what happens implicitly, and since it's the only thread, that just exits the process, and the default is it exits with status zero to signify everything's all good. If you want to control what happens to other threads when you fork, there's this really complicated thing, pthread_atfork — not covered in this course. But if you had some other threads that modified memory while you were forking in a different thread, and you needed to make sure they reach some consistent state, you can use pthread_atfork to register some functions to run whenever you call fork, so that hopefully you get a consistent view of memory when you fork. In this course we're not going to bother with that, because we're not going to do anything terribly complicated. All right, the other fun thing: if I have a process with six or eight threads in it, and that process gets sent a signal, what happens? Should each thread receive the signal? Should just the main thread receive it? Well, the main thread can't be the answer, because the main thread isn't guaranteed to still exist. So the solution in Linux is: if a process gets sent a signal, it picks a random active thread, and that thread receives the signal. This makes concurrency hard, because you can't control which thread actually gets interrupted and runs the signal handler. It's anyone at random, so good luck. So if you thought signals kind of sucked for — was it lab two? — try using signals with multiple threads. Yeah? So Ctrl-C is interrupt. If you sent that, would it only kill one of the threads, or would it kill the entire process?
Yeah, so the question is what happens if you do Ctrl-C and send it a SIGINT. Say you were using the default signal handlers: I have a process with eight threads, and I send it an interrupt signal. One of those threads is going to handle the signal and run the signal handler. And the default signal handler for SIGINT just exits the process. So the process is dead; nothing else exists anymore. If I choose to ignore it, then nothing happens. But that's what will happen. Also, any thread can just call exit at any time, right? So if one thread exits the process — boom, the process is done; you can't really do anything about it. All right, any other questions? Wow, we're steaming through. Oh yeah — if the signal handler changes some memory, then because all threads are in the same address space, it changes it for everyone. Any thread can be interrupted at any time to run the signal handler, right? So you have to be prepared for any thread to switch to the signal handler at any time. Especially if you're sharing memory between threads, and the signal handler also touches that shared memory, really weird things can happen. The solution for most people using multiple threads is to just not use signals, because they're terrible. There's a way to receive signals through a file descriptor, without using a signal handler, where you can just do a read on it, like you can read anything else. Typically that's what people do, because if you ask anyone who does systems programming, signals were a mistake and everyone hates them. And for lab two, you didn't have to use signals.
Yeah — for user threads, so many-to-one, all of the user threads are mapped to a single kernel thread. If you do one-to-one, then each thread is mapped to one kernel thread. And in that case the kernel could schedule them concurrently, switching back and forth between threads, or it could run them in parallel. Yeah, even in this case. All right. So, instead of that stupid Java-virtual-threads approach — which, again, why? — instead of many-to-many, what some people do, which is a more explicit thing, is create something called a thread pool. Again, remember the goal of many-to-many is to avoid the creation costs and to share and reuse the threads. So what most sane people do is create a thread pool: create a certain number of threads, and then a queue of tasks. Maybe as many threads as CPUs in the system; maybe more if they block a lot. Then, as a request comes in, you wake up a thread and give it work to do from the queue of tasks. Whenever a thread is done and the queue is empty, it gets put back to sleep. No work? Just put them to sleep. Whenever there's something else to do, you wake them up and reuse them over and over again. Yeah? So either you implement this, or it's part of a library? And the queue of tasks is just kind of a queue of functions to run? Yeah. The idea here is that instead of having a bunch of user threads, where really you just want to run a task, you define the task in a function or something like that, and then you have a work queue and you say: okay, I want to do this function, please run it for me. It picks one of the threads in that thread pool and uses that, and whenever it's done, the thread goes back to the pool and we reuse it over and over again.
So you just create the threads once, and then they just keep on working while there's some work to do. Yep. Yeah — the question is, how slow is the thread-creation system call? And the answer is: it's not that slow. But this is usually done when your tasks are really small and you have millions of them. You might also hear about this in AWS and such — what the hell do they call it? Not microservices, it's... the serverless stuff. Yeah, that thing. Basically this. (EC2 is just virtual machines.) So, as a preview of what we have to look forward to in lab four — after the midterm, and after reading week, and all that fun stuff — like I've been alluding to, you get to implement many-to-one. You get to implement user threads. I gave them a cool name, so we have a cool prefix: I named them wacky user threads, "wut". I don't know, I just wanted it to start with "wut", because it doesn't take much to amuse me. So instead of pthread_create, you'll be doing wut_create. You're going to implement user-level threads, and they're going to follow this same chart. You create them with wut_create, and then you'll have a queue of threads that are waiting, or able, to run. Your scheduler, which will just be simple round-robin, picks one to run. They're going to be cooperative user threads, so they have to give up the CPU themselves — that's where yield comes in. If a thread calls yield, it essentially gets put back onto that queue, and the next thread in line runs. A thread runs until it either yields or calls wut_exit, which should do the same thing as pthread_exit and terminate the thread. And there's only one state where a thread can get blocked: we'll have something like pthread_join.
So we can join on another thread, which means we wait for that thread to terminate. In that case, since we're waiting for a thread to terminate, we're blocked — we can't execute anymore, because, again, we're waiting for another thread to terminate. Then, whenever the thread we're waiting for actually terminates, we go right back into the ready queue; we're ready to run again. So you get to implement this. Like I said, the scheduler is just going to be simple round-robin. You'll create a queue — I'll show you a nice queue implementation so you don't have to implement your own linked list. Although you guys probably want to implement your own linked list. So yeah, you don't have to use mine if you don't want to, but I suggest it. How it will work: you just run the thread at the front of the queue; when it yields, it gets thrown to the back. And when you do a context switch, you'll have to save the registers and all that. I'll show you a nice way to do that, so you don't have to manually save registers — which would also depend on what CPU you have. On x86 you don't have to manually save RAX, RBX, blah blah blah. I'll show you a way to do it through some library function; what it does under the hood is save all the registers for you. And they're going to be cooperative threads, so they have to be nice. You could extend this and do preemptive threads: you could just say, hey kernel, interrupt me every so often — every few milliseconds or something like that — and then you can call yield for the thread and force it to yield. It doesn't really make it much more complicated. Speaking of complicated: all right, any other questions from today? Because wow, I've gotten through this fast. I don't know if I'm just talking fast or what. All right, our next complication is going to be more fun.
So let's just have a little example, and we can go through it for the rest of the lecture. All right, so here is our fun program using threads. We're going to have a main, and we create an array of threads up to NUM_THREADS, which in this case is going to be eight. And here — why am I reusing i? I don't know why I'm doing that; I guess I want to be micro super efficient. A for loop that goes from zero all the way up to NUM_THREADS, and we just create the threads. After I create all the threads, there are eight active threads. I don't know in what order they're going to run, or whether they've already run. But then, in the main thread, I'm going to join on all of them. So that whole loop will not complete until all of the threads are done executing. And they're going to execute this run function — I don't give it any arguments. So let's see what this run function does. I have a static int counter. All right, did I go over this before? Who knows what static means there? Did I rant about this already? Any guesses? Yeah, so it's a global variable, and static here means I can only access that global variable in this C file. That kind of protects me a little bit. The compiler is also smart enough that if I stop using a static global variable, it'll tell me it's unused; otherwise, if it was just int counter, it would assume I might use it somewhere else and wouldn't warn me, and all that fun stuff. So essentially I create a global variable called counter and set it equal to zero. Then, in run, I have a for loop that goes 10,000 times and just increments that global variable. So let's think for a second: I create eight threads, and I tell all eight threads to go through and increment this counter 10,000 times each. And then I wait for them all to finish.
And then, in the main thread, I print the value of this counter. So, what should the final value of counter be for this program? 80,000, right? Yeah, 80,000? It's undefined, because the increment is not atomic? Yeah — we got "undefined", we got 80,000. What's your guess? All right, it's anyone's guess anyway. So, if I run this: let's see, 61... wait, that's weird. No — 63, 59, 63, 65, oh, 74, all right. Let's see, if I run this enough times — oh, there, 80,000. See? No bugs. No bugs. Totally works. So that is our next complication. Any guesses as to why that happened? Yeah — one thread has read the counter, and it's trying to increment it before it writes back, but maybe another thread has already read the counter. So the two increments kind of interleave, and in effect it only increments once in the end. Yeah, yeah. So basically, remember all threads share memory, right? They're all in the same address space. They'll each have their own individual stack, so each thread has its own copy of i. So they'll all only run the loop 10,000 times. But the problem comes with int counter: that's a global variable, which is somewhere in memory, and every thread is using that same memory. When I do ++counter on the global variable, it actually does a few things. It looks like it just increments a value, but if we talk about what happens on the CPU: well, counter is somewhere in memory, okay? So it has to load that value from memory to see what it currently is. Then it increments the value, and it does that in a register, which is independent for each thread. And after it's incremented it, it has to write the updated value back to that memory address, right?
So I essentially have eight threads here, all fighting to increment that counter. They're reading it, updating it independently, and then writing it back out. So there can be situations where, say, even with just two threads: one thread reads the value zero, then we switch to another thread and it also reads zero. They both independently update it to one, and then they both write out one. I would expect two, but both of them wrote one. Yeah? If that's the case, how did we end up with 80,000? That seems unlikely. True. It turns out computers are really, really fast, and we kind of just got lucky. It probably completed them sequentially: what likely happened is I created one thread, and before the main thread did the system call to make the next one, the first one was already done; then another one got created and finished before I made the next one. That's likely what happened, yeah. All right, so let's see: if we change it to 10 iterations, it's more consistent. 80, 80, 80, 80, 80, 80, 80. So yeah, it's more consistent, but the underlying problem is still there. And you look at this, right, and you're like: I am God's gift to programming. I've fixed it. Nothing's the matter, right? Which is why this is a concurrency bug: I'm using memory that's shared between two threads, and I don't know their ordering. These kinds of bugs have lived in software — especially the kernel — for years; the longest one I've seen that I remember took someone seven years to fix. These are some of the hardest things in programming that you will ever have to do, and I get to introduce them to you, so: fun. And it gets worse the more complicated you make it. Like in this version, I only had it go up to 10.
I got 80 every time. Very unlikely I see anything other than 80, right? Run it over and over again, I write this, I don't think it's a problem. Later — I don't know, I go viral, something like that — boom, I bump it up to a thousand. Then it's kind of consistent, kind of not; it gets close. And maybe that'd be okay for some things, but if I'm calculating your grades, you probably don't want me to do that. Yeah? So if you just did the increment yourself, you'd still have the same problem — the read and the increment, are those separate instructions? The instructions, yeah. So is the only way to fix this essentially to have your own local counter and then update the shared one once at the end — in which case you could still have that problem, just less often? All right, so: is there any way to fix this? There are a few ways. Can we think of a way to make it a consistent 80,000 all the time, such that each thread actually increments something 10,000 times? Yeah — so I create a local counter, initialize it to zero, increment that, and then update the shared counter at the end. Yeah, so that still has the same problem, right? Same underlying problem — but I can almost guarantee that if we run this, it will probably be okay: 80,000 all the time. And this is also what makes these bugs hard, because this has the exact same problem as before; it's now just unlikely. Great, I love depending on unlikely things. This is why we take this course: eventually I'll get old sooner than you, and if I have a pacemaker or anything and you're programming it, I don't want to die. No — in this case, whatever I change the iteration count to doesn't really matter, because each thread only does one update to the shared variable, at the end. So it doesn't matter what anything local does.
All right, so, another option: a global array, where everyone updates their own individual element. So we could do something like a static int array of NUM_THREADS elements, and then each thread has to find its ID and all that. Okay — well, we saw before how to pass arguments to threads; we do malloc and all that stuff, right? So what about a solution that takes advantage of the fact that we can return a pointer? Yeah — so I could have my local counter. Here, I'll just get rid of this because it's ugly. I have my local counter, and whenever I'm done updating it, let's say I do — I don't know, something called ret — malloc(sizeof(int)). Oops. Then I just write the value of the local counter there and return that pointer, something like that. Okay, so now I'm returning a pointer to whatever that final value was, and I have to include stdlib. Now, to get that pointer back — oh, I can actually use the return value. So in main I say: what got returned to me? It's an int pointer; let's call it ret. I'll initialize it to NULL, for some reason. And then remember, I need to give it the address of the pointer to update, so I have to give it the address of ret. It'll give me some warning, because it wants a void ** and not an int **, but I can tell it: see, I know what I'm doing — here's a cast to void ** to make you happy. And after that, I can get its return value and update the counter: counter plus-equals, something like that. Do we agree with that? It looks okay. But have I scared you enough — is this actually okay? My problem before was that I had a bunch of threads all updating the same global variable, shared in memory between all threads. Do I have that anymore? Well, we can argue about it. Here, the main thread creates eight threads and then waits for them all to complete.
In each thread, it just updates its own local counter — its own local variable — so no problems with that; it's not actually shared. Then, at the end, it just mallocs some memory and returns a pointer to that. So that should be fine. Then in main, I get that pointer value back by passing the address of ret, because that's C, that's how we have to do it. And only the main thread is adding to this counter, so I have no problem with two threads trying to access the same memory at once in this case. Each thread's result gets added, and I have no problem whatsoever. This will never fail. This is foolproof. And essentially, what we implemented — has anyone ever heard of MapReduce? It's the big distributed thing that Google came up with, was it 2008 or so, to massively run a bunch of workloads without having to deal with any of these shared-memory problems. This is essentially what we've implemented. The map is some independent piece of work that each thread — or in their case, each computer — can do, and at the end we just tally all the results and add them together. Map: I do some function; then I combine all the results at the end. So if you'd come up with this back then and applied it to computers instead of threads, you would probably still not be rich — Google would be richer, not you. All right. Any last thing I did that I should probably fix in this program? I didn't free. Exactly. Where should I free? Yeah, after I add — so right there, I should free. All right, that would be our complete program, which is good: it always works. And no — we aren't allowed to argue that the memory gets freed for us anyway. We're only allowed to argue against freeing when that memory may still be used in the future and you can't predict it.
In this case, I know that after I read the value, I don't have to use it anymore. So yeah, you can't argue that one away. If you want to use the libc-style argument, you have to be writing a library — so when you implement your thread library, you can make that argument; when you're writing a program like this, you can't. And, also on the Java rant: garbage collectors don't free everything for you either. So just saying you have a garbage collector doesn't free you from that responsibility. All right, any other questions before we end the day? All right. We're almost at reading week. So next lecture will just be review — nothing new. And then, yeah, it'll be reading week, and then you'll have an exam. Yay. All right, I have to write it too. Just a point for you: we're all in this together.