All right, so I guess people have taken Reading Week off already. This one, let's just blast through catching up on the content, and then we'll get to Lab 3 stuff, so everyone that's here can ask questions and get ahead on Lab 3. So let's do it quick. Where we ended off with virtual memory, we were talking about how slow it is to actually translate addresses, and we essentially put a cache in front of it. So if we assume a single page table, so there's only one additional memory access, this is how you calculate the effective access time of memory. There's the TLB hit time, which is just the time it takes to look up something in the cache, because that will take a little bit of time, and then the time of the original memory access. So that's the overhead of adding a cache, and that's the best case you can get. If you were just accessing physical memory, it would just be the original memory access. Because we're using virtual memory and a cache, the cache is going to have some penalty, but the TLB search time is usually quite small. And then in the case of a miss, where the entry is not in our cache, so we can't just look up what the virtual page number maps to, we have to incur the cost of searching the cache and not finding it, and then we have to do as many memory accesses as we need in the page tables, and then the original memory access. Since this is a single page table, it would be one additional memory access for the page table and then one for the original memory access. You can see quickly how this number would grow if we had multiple levels of page tables: with three levels of page tables, we'd need three memory accesses, one for each level, plus the original memory access. And then the effective access time is just alpha, which is the hit rate expressed as a fraction,
times the amount of time it takes for a hit, plus one minus alpha, the miss rate, times the miss time. Nothing terribly surprising. So if your hit rate is 80%, it takes 10 nanoseconds to search the translation lookaside buffer, and memory accesses themselves take 100 nanoseconds, you can just plug those numbers in and calculate the effective access time with one big page table. It would be 0.8, which is our hit rate, times 110 nanoseconds, plus 0.2 times 210 nanoseconds, because 20% of the time we have to search the TLB, do the page table lookup, and then do the original memory access. So our effective access time for memory would be 130 nanoseconds, which is only about a 30% overhead, as opposed to being multiple times slower. Not too bad. The other thing we have to talk about is how this relates to context switches, too. We kind of talked about it, but we'll say it again quickly: if you context switch between processes and not threads, you have to change the address space, which means you also have to change the root page table; that's how you change address spaces. Now that we have this cache, you have to do something with it, because if you change address spaces, the cache is not going to be valid for the new process: it's caching the old process's translations, and you're switching to a new process's memory. So the easiest solution is, when you context switch... Yep, translation lookaside buffer. Yeah, the TLB is basically just a page table cache. Since it's just a cache of virtual page number to physical page number mappings, if we switch address spaces, the easiest thing to do is just flush the cache, invalidate all of it, and then the new process will repopulate it. On some architectures, you have to explicitly flush the cache.
Like on RISC-V, there's a special instruction for that, which is only available in kernel mode, and that flushes the TLB. On x86, which is the older server stuff, and mostly what people use if they have a Windows laptop, if you change the base page table pointer, it will automatically flush the cache for you, so it's done for you. So let's see the effect of TLB misses. This is a little test application that was actually written by the same guy that made the Linux kernel, just to test your TLB. All it does is allocate a chunk of memory of a given size, and then the stride means it accesses an int every that many bytes. So in this case, if I run test-tlb with 4096, that's one page, so I'm only going to access things on that one page, and with a stride of 4 I access every four bytes: byte 0, byte 4, 8, 12, and so on. So if I switch over and run that: in this case I'm allocating a page, and the first time I try to access memory on that page, it has to go through the page tables to figure out the translation, because it isn't in the cache. But only the first access has to be looked up; the rest are essentially TLB hits, so I only do the translation once. And then all the other accesses, the 512 of them, are all going to be on the same page, so I don't have to redo the translation. So you can see every access is only 1.6 nanoseconds. In this next case, I allocate 16 megabytes of memory, which is 2 to the power of 24 bytes, and I access every 128 bytes. So whenever I miss on a page, I'm going to get a few hits after that. Remember, a page is 4,096 bytes, so if I stride by 128, the first access on a page will be a miss, and then I'll get a bunch of hits until I reach the next page. So this will be a little bit worse: 9 nanoseconds per memory access, which is about six times slower.
And then the biggest case, which hopefully doesn't crash my laptop, is this. This is the worst case scenario, where I allocate 512 megabytes of RAM and I access every page. So every time I access a new address, it has to do the translation again, and essentially my hit ratio is zero: I miss 100% of the time. So if I do that and wait a bit, and hope it doesn't crash... you see it's way, way, way worse. In the good case, with a lot of hits, I'm at 1.6 nanoseconds, and in the absolute worst case I'm at 35 nanoseconds, which is significantly slower. So any questions about that? Okay, cool. So there's another system call, called sbrk. Don't ask me why it's named that. That's basically what malloc uses. If you've heard the term heap, this is how you change your heap: it will grow and shrink your heap limit whenever you want. Stacks, on the other hand, generally have a fixed limit, and we saw that yesterday. For growing, the way the kernel grows the heap is it just grabs pages from the free list to fulfill the request, and then in the page table, all it does is set the entries to point to the allocated pages, set the valid bit, and set any other permissions, like read and write, because it's a heap. For memory allocators, this situation kind of sucks, because you can only resize the heap at one end. If you're a memory allocator, you'll rarely have the opportunity to shrink the heap, because everything at the top would have to be unused in order to release it. So generally, malloc just keeps the heap intact and only continually grows it as you need, and probably never shrinks it. And if memory allocators want to bring in big blocks of memory, they'll use something like mmap. In Lab 3, I mmapped a stack for you so you don't have to deal with it, and that way Valgrind won't be very angry at you.
So that's already done for you, and that's basically what mmap does: it lets you play with the memory mapping. When the kernel initializes a process's address space, it looks something like this. It will have some text region, which is just a way to say code; that's basically where the code from the ELF file goes. It'll use part of the address space for data, which is, sorry, your global variables, and part for your stack. And then one of the things it can do is put a guard page right under the stack. The stack is generally something that grows downwards, and a trick to detect when your stack actually overflows, instead of just getting a generic segfault that isn't terribly useful, is to put a guard page at the end of the stack that is invalid. Because it's a known page, if you overflow your stack and you access that guard page, the kernel can actually see, hey, you page faulted at the guard page, which means you probably overflowed your stack, and it can give you a better error message than just saying segfault. And then the heap would live above the stack, and you might notice this weird trampoline and trap frame. We'll just ignore those for now and maybe come back to them at the end of the course, but they're basically some mechanisms kernels need to actually implement switching from user space to kernel space, because it has to change address spaces, and the details get quite tricky. Yep, the heap would probably also grow downwards, I believe. Not quite: a stack frame is pretty much a snapshot of your current stack that tells you all the calling functions, all the functions you called and everything. It's related, but not the same thing.
Okay, so fun things the kernel can do with this: if you want things to be fast, and remember, system call interfaces are slow, the kernel can actually provide a fixed set of addresses which it can map to whatever it wants. So instead of doing a system call, one of the things the kernel does that you might want to be fast is something like getting the time. If you're getting the time on the order of nanoseconds, a system call is going to be way too slow to get an accurate nanosecond timestamp. So what the kernel does is map its global variable of how many nanoseconds have elapsed onto a page that lives in physical memory, and it lets your process read that page. Every process gets that page mapped, so every process can read that timer without having to do a system call, and it's nice and fast. Other things: page faults allow the operating system to handle virtual memory. Page faults, which we've been talking about, are the type of exception that happens for virtual memory accesses. They're generated if you can't translate an address or if a permission check fails, and the operating system can handle them. Because the operating system handles this, it can do some tricky stuff, like implement something called copy-on-write, or actually swap pages to disk and swap them back, which we'll go into in more detail as the course goes on. The kernel doesn't have to forward a page fault to your process as a segfault; it can go ahead and handle it if it wants to do something more interesting. And that's the job of page tables: basically translate virtual addresses to physical addresses. We saw the MMU is the hardware that does that, and then we saw some techniques for doing it: one big flat translation table, which was wasteful because it would be too big, how the kernel allocates pages, multi-level page tables, all that fun stuff.
So let's talk about the midterm. The midterm will cover topics up to and including today, so now that we've done effective access time, you can do that sample midterm. The sample midterm is a good indication of how I write exams: if you can do it now, you're good. Threads will not be on the midterm, so what I talked about Monday, Tuesday, and later today, which is related to your lab, is not on the midterm; don't worry about it. Expect a similar style to the one we saw in class, or just go to my website for the CS 111 exams. The first ones were COVID ones, which were kind of bad, but the fall one that I showed you is pretty indicative. I wouldn't worry too much about the midterm, but we probably need to worry about Lab 3. So for Lab 3, there's this weird thing called ucontext, and you can implement your own linked list or queue if you want to. I would generally recommend against it, unless you have one from another course and you really want to reuse it, which, again, probably has a bug in it, so I generally wouldn't do that. I'll show an example of how to use the built-in queues and also this ucontext thing; in the midterm wrap-up examples, I have examples of both that you should probably use. The ucontext one is a lot trickier, and that's what we'll see now. Basically, all that a ucontext is, is a representation of a whole virtual CPU. It holds all the registers, and it also gives you a few niceties: it lets you set a stack, and it will mess with the registers for you whenever we call something called makecontext, which we'll see. I also provided a stack function for you. So let's get to it, because this is so fun. Oh, sorry, there's a question: is this arrangement in virtual or physical memory? It's all in virtual memory for processes. Whoops, okay. So let's get into it here.
Okay, so you'll see a bunch of stuff at the top. These are the header files that are also in your Lab 3, which you're free to use, and they say what variables you'll probably want to use from them, or the ones that I use. Something like this one, I use in the new_stack function; you probably shouldn't need it. We have a nice little helper function called die, which will just save errno, print a message, and then exit. Hopefully that will only be for debugging, because you shouldn't run into that situation in your program: you will write it perfectly and it'll always work. Then we have a new_stack function. You don't have to read it; it will allocate you a stack of the default stack size on your system, I believe 16 kilobytes. For this example, I'm going to create three contexts, and you can think of them as three virtual CPUs. I'll make one for each thread I want to represent: thread zero, thread one, and thread two, and then I'm going to need two stacks. Thread zero, like in the lab handout: we're going to assume that whatever thread is running main is your thread zero, so I don't have to allocate a stack for it, because that's already done for me. The kernel gave me a stack because my code's running. So I need two stacks, one for thread one and one for thread two. Let's ignore these functions for now, and then we can read this side, hopefully. Okay, so the first function you want to use is called getcontext, and it initializes a context from whatever the calling thread is. When I start main, I have this kernel-provided thread with its own stack. If I call getcontext, it saves the current state of that thread, or virtual CPU, however you want to think of it, into the ucontext_t variable it points to.
So it saves the current state of the registers and everything like that, and the goal is that because it snapshotted that call, you can actually switch back to it whenever you want. After I do getcontext, this structure is initialized, and if I do another call, called setcontext, it will essentially resume where that getcontext call happened. So if I uncomment this, which, we'll see, is a bad idea: we have our main thread, the kernel-provided one. It calls getcontext and writes essentially a snapshot of where it is into that thread-zero ucontext, then I do a printf that says "oh no", which might be a sign of bad things to come, and then a setcontext. What setcontext does, oh yep, okay, what setcontext does is take whatever was saved in that ucontext and start executing wherever that left off. My thread initialized it at the getcontext call, so if I setcontext, it's essentially going to jump back here, then print "oh no" again, then setcontext and jump back, and jump back, yeah, yeah. So this will be an infinite loop, and hopefully not the first of many, but you know, we try. So if I compile and run it, yeah, it just spams me over and over again, because it keeps resuming from where the last getcontext happened. Any questions about that? Yep, so a ucontext is basically a snapshot of a virtual CPU. It's pretty much all the registers, plus some other information, like where its stack is, and some other nice things we'll see. You can think of it as: when it called getcontext, it saved the program counter at the getcontext call, and setcontext just resumes whatever you saved, so it keeps jumping back to that getcontext call over and over again.
So any questions about that fun infinite loop? Yep, yeah, so you're going to want to save checkpoints because you're going to implement a threading library, right? Every time you yield to a different thread, you want to switch back and forth, so you use this to actually switch between threads, and we'll make a thread next. So that was a bad idea: if we just have one ucontext, as we saw, it's really bad. We just jump back to wherever we called getcontext, and that puts us in an infinite loop. So we'll create a new thread using a ucontext. We need to create a stack for it; otherwise, you won't be able to debug it, because if you have two threads that are executing in different places while sharing the same stack memory, woo, you're not going to be able to debug that. So every thread should have its own stack, and this is how you initialize it. We're eventually going to do a makecontext call, but makecontext, if you read the documentation, only works if you first initialize the context with getcontext. Even if you're never going to resume back from that snapshot, you have to call it anyway, just to initialize some state. So we call getcontext, which saves the state of the main thread that's still executing into that t1 context, but makecontext is going to transform it for us. There are a few fields that are important whenever you do makecontext, and the first is the stack. For thread one, there's this uc_stack field, and we set ss_sp, the stack pointer, which is just the memory location of our stack; that's what was returned to us from new_stack. And we give it the size, which is just a macro for the default size, 16 kilobytes. You don't have to worry about it; all the sizes just need to match.
And then we'll have this makecontext call. The first argument makecontext needs is the context it's going to modify; in this case, I want to make thread one, so I pass thread one's context. The next argument is the function you want it to run. In this case, I have a function called t1_run, and all t1_run does is printf "hooray" and then return; that's it. Then you give it the number of arguments that function takes. In this case, it takes no arguments, so zero. At that point, we have a t1 context such that, if we do a setcontext on it, it will start executing t1_run, because that's what we set up in the makecontext call. So hopefully that's okay. So let's see what happens. If we don't swap, we can do something like setcontext again: instead of doing setcontext on t0 and getting into that infinite loop, t0 is going to setcontext to the t1 context. That will switch to executing that virtual thread one, which has its own stack and is set up to start executing t1_run. So after we call setcontext, it starts running thread one, which should just call printf "hooray" and then return, and we'll see that by default, if that thread returns, which isn't going to be useful for you, the process just exits. So if we compile and run it, we see "hooray" and then nothing else. In main, after the setcontext, I have a print that says "main is back in town", and we never reach it, because it starts executing t1_run, hits the return, and then it just exits. Any questions about that flow? Yep, so the question is, what's the difference between a thread and a context? In this situation, that ucontext pretty much represents everything you care about in a thread, so you can think of them as the same thing. But in your actual Lab 3, you're going to have more bookkeeping for a thread, right?
The ucontext is like the nuts and bolts of a thread, but you're going to put some extra stuff on top of it, like a status and things like that. Yep. Oh yeah, so where do you want me to do a setcontext? Yep, okay. So in t1_run, you want me to setcontext back to t0. Okay, and then hopefully I can print "main is back in town", right? All right, yep. What were you going to say too? Sorry? Yeah, so this will kind of look like an infinite loop, but it'll do more stuff. After t1 prints "hooray", it switches back to t0, and the last time we set t0's context was here, at this first getcontext. So it would go create a new stack for t1, set that all up again, then also create a new t2 thread, and then setcontext back to t1 again. So t1 was about to die, it switched, we overwrote it, and then we went back to a new thread that has a new stack. We can see it, just for fun, because you will encounter this in your lab. So yeah, now I just put in a big... oh, it even crashed Visual Studio. So yeah, that's fun. So your idea is good: we essentially want to go back to where we just left off, right? If you just had setcontext and getcontext, it might hurt your head a little: how do I resume without accidentally calling setcontext again? Luckily, and you weren't allowed to use this before, but now you are because, yeah, I'm nicer, I guess, you can use this function called swapcontext. What swapcontext does is: whoever is calling swapcontext, it saves the current context into the first argument. Originally I wrote a getcontext at the very top, but now if I do swapcontext and give it t0, since t0 is the one calling this, it will save the state of t0 into that context. So if I setcontext back to it, it'll be like I just returned from swapcontext. And then the second argument is the context I want to swap to.
Yeah, so in this case, if I actually use swapcontext, well, I don't actually need that getcontext call anymore, because I never use it. I could actually remove it, but we can experiment with that in a bit. So if I move to swapcontext, what's going to happen now is: it does the same thing as setcontext, but it also updates the t0 context. So it switches and runs t1_run, which should print "hooray" and then setcontext back to t0. And now the last time we essentially did a getcontext on t0 was that swapcontext, so when we setcontext back to it, it looks like we just returned from that function, and t0 is running again. So it would print "main is back in town", and then I free thread two's stack, which we haven't used yet. I have to go into the directory because we crashed it. So if I do that, it prints "hooray", and that's printed by thread one, which has its own stack, all that fun stuff. And then before it stops, it just says, yeah, okay, go back to thread zero. So thread one never finished its function, because it jumped back and let thread zero run again. Then thread zero came back, printed "main is back in town", freed thread two's stack, which we hadn't used yet, and then just returned zero, which calls exit, and your program's done. Yep. Yeah, so basically swapcontext is a nice way of essentially doing getcontext on the first argument while simultaneously doing setcontext on the second. But if you tried to implement that yourself, you'd see that it kind of sucks to implement. Yep. Yeah, so your question is basically: in the lab, if I'm running t1 and I'm done, how do I know what I'm going to next, or how do I switch to it? Yeah.
So what you could do is, well, you won't always go back to thread zero, right? But you'll have a queue, probably just a global variable, and you pick the next thread off of it. As part of your data structure, you'd have this ucontext, and then you can just swapcontext back to it. So you just pick it off your queue; if you're going to die, swap to it. Yep. Yeah, so how this was previously done: in this ucontext, you can actually get access to all the registers if you want to modify them yourself, but the problem is that it's not going to be portable, because I have an ARM machine, some people have x86, some don't. Previous labs assumed x86, and you had to monkey with the registers directly, but that's kind of an unnecessary pain in the butt. So you can do that, but it's not going to work on all architectures, right? If you do it for x86 and you try to run it on my laptop, it's just going to crash and burn. So yeah, we're just going to use the portable interface, so it works on everyone's machine. You should also be able to do this lab locally if you want: if you have a Mac, you can do this without the dev container or Linux or anything. This is all done in user space, so you can do all your development locally if you want. If you're on Windows, well, you should probably not ever use Windows again. Yeah, oh, no question? Okay, so, yeah, the natural thing might be, and this might come back to your question: this is nice if, in this run function, I know what thread to switch to, but you don't have control over this, right? So I think this is more your question. Yeah, so if I don't have control over this, well, the other option is, I can just get rid of this setcontext, but then I'm in a situation where as soon as the first thread ends, my program's dead. So that's not good either. So the last piece of the puzzle is going to be this thing called uc_link, and this is why I'm going to create a thread two.
So I have a context for thread two, and I'm going to set this uc_link. What it does is: instead of just exiting whenever this t1_run function returns, it will actually do a setcontext call to the linked context when it's done, instead of exiting. So I'm essentially using t2 to do all my cleanup, and this is where you would actually pick the next thread to run. You essentially have a thread dedicated just to doing any cleanup. So this is what you would do: in this case, I'm going to set uc_link to the t2 context, and t2 is going to be a new thread, so I'll allocate a stack for it, then getcontext to initialize it, because I have to before I do a makecontext, and I'll give it a stack, and then I'll give it a function, t2_run. And this is the code that runs whenever a thread is about to exit. Where the hell is it? And in this, well, because we set it as being the successor for whenever t1 is done, we know t1 should be finished. It only runs when t1 dies, so in this case, this is the only thing that runs: t1's dead, so I should switch back to thread zero.
In your Lab 3, you wouldn't hard-code thread zero; this is where you'd pick the next thread off the queue and maybe do some other things, and then that's when you would do your setcontext call to whatever thread you want to go back to. But because this example runs in a deterministic order, I know I can just free thread one's stack, and maybe you'd want to free other things too, and then I setcontext back to thread zero, which is still alive. In this case, it comes back to that swapcontext, prints "main is back in town", and frees thread two's stack, so now everything is freed and clean. So if I go ahead and do that, I get "hooray", that's thread one printing its message, and then it returns, which essentially does a setcontext to t2. t2 runs, which prints "t1 should be done", and does some cleanup. In Lab 3, this is where you'd probably get the next ucontext from your queue or whatever you want to use, but in this case, I know I'm going to switch back to t0. Then I get "main is back in town", which is the continuation from that swapcontext, and then the program just ends. So any questions about how I'm using this stuff? Yep, yeah, so you'd want to call delete_stack only when you're sure you're done with that thread. In your Lab 3, it's kind of like a process: the only time you're allowed to delete all the resources associated with a thread is after you join on it, because if another thread is joining on it, by definition that thread's not going to execute anymore, so you can free its stack; it's not going to be used again. But if a thread is exiting, well, you might be tempted to free its stack while it's exiting. Can anyone guess what the problem with that would be?
So when you execute your code, it's executing on the stack, using stack variables. If you free your stack while you're executing code on it, it's going to blow up and you're not going to be able to debug it. So you can't, well, you can free your stack as you're executing, but you're going to have a lot of errors. In this case, it's also set up to be nice, so if you use Valgrind on it, okay, it doesn't work on my laptop, but again, you can do this with local development, so I guess I can't demo it, but you should get "no leaks possible": you freed everything, good job. So you should be able to get a clean Valgrind run off that, which will probably save you. And if you free your stack and then try to execute on it, Valgrind's going to be like, what the hell are you doing? That's terrible. It's going to give you a bunch of errors, like uninitialized reads and stuff like that. Yep, so okay, the question is, why did t1 exit if I didn't provide a return? In C, if a void function doesn't have an explicit return, whenever execution reaches the end, it just implicitly returns. Yep, no, so the second argument of swapcontext is what I'm switching to. For swapcontext, I'm saving wherever I'm currently at into the first argument, and then I'm going to swap to whatever the second argument is. Another way we said to think of this is that it's the same as a getcontext on the first argument while simultaneously doing a setcontext on the second. Sorry? On line 85, or on 95? Yeah, all right, anyone want to guess? Wait, not that, whoops, I just deleted it, okay. So we want to just move this here. This'll be interesting: who wants to guess what will happen?
Yeah, yeah, so in this case, let's read it from the top to the bottom, so I'll do get context on T0, although I didn't really need to do it, that was more so I had it set up, so I could show you the infinite loop, then we set up make context on thread one and set its successor to be equal to thread two, and then before initializing it, we just swap context, save our current position in T0, and then swap to thread one, which would run, do a print F return, and then thread one's now done, it would switch context to T2, that we never initialized. So if we do this, we can see what errors happen. So unsurprisingly, it just segfolds. So why is that, it's probably gonna segfold quite early because oh, and this gives me a good opportunity to talk about why Windows is terrible, too, great. So in this case, all these variables would just be just given random things because I never initialized them to everything, so on most sane systems, they're not defined as being anything, it would just be random leftover memory values that it wouldn't really care. So I would essentially just get a garbled address and then the first thing I do is gonna segfault because it's not a valid address. So some of the problems you guys are encountering is if you have the dev container on Windows, by default, all of the memory is zero initialized. So if it was something like a string, well, instead of being garbage and probably segfaulting, it's actually like a null string or it's actually a valid string that points to a null character and it's a string of sides zero. So you don't actually see whenever you forget to initialize something on Windows sometimes, yeah. Yeah, so Valgrind would say essentially that you have an uninitialized read or something as part of the segfault, I'm pretty sure. Yeah, so instead of segfault, I guess I can't show you it, but it would say essentially what line caused the segfault, so we would see like a whole stack trace and there'd probably be a swap context in there. 
All right, so yeah, yep. So the yield is going to be fairly simple: if you yield, well, you just stop executing this thread and start executing another thread. So the yield's pretty much gonna be a swapcontext, and what context you swap to is gonna be based off your run queue. The question about wait is more involved because there are a few situations, right? If I'm waiting on another thread, it's just like waiting on a number. So first, if I'm waiting on a thread that's invalid, well, maybe it's not a thread at all, because you could give it any number, so I can't wait on that. Maybe that thread's already exited, that's another possibility. But if that thread's currently executing and you're waiting on it, well, by definition you're in that blocked state, so you can't execute any more. So essentially you will have to implement some way of saying you're waiting on that thread, and you're gonna have to yield, and you wouldn't add yourself to the ready queue, because that thread would be responsible for waking you up and re-adding you to the queue whenever it exits. So the join's gonna be a bit more involved, but the yield is essentially swapcontext using your run queue. Yeah, all right, any other fun questions? Okay, well, I guess talking about queues then. So I recommend using this. It's a bit weird because it's a bunch of hacky C macros, but hey, it's a list implementation that actually works. So a tail queue is a double-ended queue: you can remove things from the front, access things at the front, and simultaneously you can add things to the back and remove things from the back, nice and easy.
So it expects something like this: it needs an entry struct, and then you essentially have a macro that declares some fields for its pointers, so you just use that. So you would create a struct, put in whatever else you want, in the case of this being your run queue maybe you only need an int, but you can put whatever other fields you want, and then you do this TAILQ_ENTRY for the pointers, and basically that will create for you whatever it's gonna use internally, like it'll create a previous and a next pointer that you don't have to manage, it manages them for you. And then you also create this TAILQ_HEAD, and what that does is create a structure that's supposed to represent that list entirely. So this is just the name of the structure, and then this is what the entries are made of, and you have to tell it that, so list_entry has to match struct list_entry, and then it defines a struct called list_head that you're allowed to use. So in this case I would create a global variable for my list head, and I'll just call it list_head because whatever. And Lab 2 grades, I'm still looking over them, I'll try and release them whenever I finalize them, yeah. So then this is how you use the list: you initialize it with TAILQ_INIT, then you can use something like TAILQ_EMPTY to see if the list is empty, it just returns zero or one. And then this is how you add elements to the list: you would create a struct for the list entry, in this case it's allocated on the stack, but for your case you'd probably wanna dynamically allocate it using malloc or something like that, but in this case I'm just letting it live in main. And then this is how you insert something at the back of the list: you give it the address of the head of the list, you give it the pointer of the list entry you want to add, and then this is just the name of the field that you gave above, so it's always gonna be pointers if you just kinda copy my example. And then I just print off that I inserted one, and
then have a print list, and in print list there's this macro called TAILQ_FOREACH that will essentially set a pointer to an entry, and every time through the loop it will point to the next element of the list. In this case it should only be one entry. I don't have any more time, so you can read this code, but it's fairly self-explanatory: there's insert tail, and then there's remove, which just takes that entry and removes it from the list. And yeah, you have these examples so you can use them. So, whoops, yep, just remember, we're pulling for ya, we're all in this together, and enjoy your reading week.