All right, welcome back. Hopefully everyone had a good reading week — better than mine of releasing lab two while I was on vacation and getting bombarded. So I apologize if I haven't gotten to your lab two issue yet; I'll get to that after, but I had to write the midterm first. The midterm's written, so you have something to write. Today we're carrying on with locks, and we'll probably finish fairly quickly, and then we can answer midterm questions if anyone has any.

So where we left off before: remember we had eight threads all incrementing a value, and whatever value it ended up with was basically random — maybe 80,000, maybe 40,000, whatever. What actually happened is something we now have to quantify and give a name to: it's called a data race, and it occurs whenever you're sharing data between multiple threads, like in that example. It's when two concurrent actions — remember the difference between concurrency and parallelism, so this can happen even on a single CPU, as you're probably discovering in lab three — access the same variable, and at least one of them is a write. That's the only condition, and then you have a data race and can get the incorrect results we saw before.

If you want to argue about this, you have to argue about which things are actually done atomically — as in, they either happen or they don't — and which things are actually separate operations. So the important thing, touched on before we went on break, is atomic operations: things that are completely indivisible. They either happen or they don't; there are no intermediate steps visible in between, which means they can't be preempted partway through. Preemption can occur between any two atomic instructions, and the scheduler may switch threads there. In the general case — unlike your lab three, where you implement thread yield and each thread has to call it to switch — you don't get a choice about when you yield; it just happens at some time interval or at the end of a time slice.

Now, a preview of your compiler course, just to make this a bit easier to understand. Compilers don't just read your C program and spit out machine code; there are a few intermediate steps to make things more tractable and easier to analyze. One of them is called three-address code, and it's basically one level up from assembly, just to make things really, really clear. Each three-address instruction is one fundamental operation: they're very restricted and follow the form "result = operand [operator operand]", where the operator and second operand are optional. One example would be adding two things together; another would be dereferencing a single value. The idea is to break your code down into very simple instructions that each do one thing — those are the things that are atomic, and then you can actually reason about them. GCC's three-address intermediate representation has a name — you don't actually need to know it, this is just for your own curiosity — it's called GIMPLE.
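To make that concrete, here's a rough sketch of the increment from the demo and the GIMPLE you'd get for it. The file name, function name, and the temporary names (D.1, D.2) are just illustrative, and the exact dump format varies by GCC version:

    /* count.c -- the shared-counter increment from the demo */
    void increment(int *p_count) {
        ++*p_count;
    }

    $ gcc -c -fdump-tree-gimple count.c    # writes a count.c.*.gimple dump file

    increment (int * p_count)
    {
      D.1 = *p_count;       /* memory read of the counter                  */
      D.2 = D.1 + 1;        /* increment, done privately in a register     */
      *p_count = D.2;       /* memory write of the new value               */
    }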
Why they named it that, you can look it up — but it's called GIMPLE; it's not a word I've made up. As shown above, the compiler flag to see it is -fdump-tree-gimple, and if you want to see all of the intermediate representations your compiler generates, -fdump-tree-all will show you the compiler internals, which is kind of fun. This can be easier to reason about than looking at the assembly — it's not that far removed from assembly, but it's a bit easier to read.

So take the code we had before that was incrementing the count; remember I wrote out that it was really three individual operations. If you look at the GIMPLE of it, you get exactly those three operations — remember, three-address code is very, very simple operations. The GIMPLE of the increment first dereferences the pointer, so it reads that value. The names there are just an abstraction over registers: GIMPLE assumes you have an infinite number of them, and when the compiler actually targets real hardware it has to figure out which of these temporaries map to which register — you don't have to worry about that here, and because there are infinitely many, you don't have to worry about them being reused. So in this case we read whatever the count is — that's our memory read — and put it in a pseudo-register. Then there's an operation that just increments the value in that register; that's done on a single CPU, so it's private to it, and the result is stored in a new pseudo-register, D.2. The last operation is the memory write: it writes the value of D.2 back to whatever p_count points to.

We actually did this before we went on break, but just to make sure: if we have two threads each executing this code — a read, an increment, and a write — and the value pointed to is initially 0, what are all the possible final values of *p_count? With only two threads, you'd expect that if one ran to completion and then the other, you'd end up with 2: one reads 0, increments it to 1, writes out a 1, then the other reads 1, increments it to 2, and writes out 2. But if you want to analyze data races, you have to worry about every possible preemption, and in this case the only things we actually care about are the reads and writes to that memory location. So call the read and write from thread 1 R1 and W1; since that's a single thread, they always happen in that order — you'll never see thread 1's write before its read. Thread 2 has its own read and write, R2 and W2. And assume instructions can't be reordered — spoiler alert, to make this even harder to reason about, your compiler or your CPU hardware can actually reorder instructions, which is one of the fun things to debug, but don't worry about that here. So each thread is always a read followed by a write, and if you take all the possible interleavings of them, you get the following.
So there are six possible interleavings of the reads and writes: R1 W1 R2 W2 gives 2, R2 W2 R1 W1 gives 2, and R1 R2 W1 W2, R1 R2 W2 W1, R2 R1 W1 W2, and R2 R1 W2 W1 all give 1.

For example, the first case is where thread 1 runs to completion and then thread 2 runs: read 1 then write 1 reads a 0 and writes a 1, and then thread 2 reads the 1, increments, and writes a 2. The other possibility is that after thread 1 does its read, it gets preempted; then thread 2 runs and does its read, and both threads have read the same value, 0. At that point it doesn't matter who writes first — thread 1 or thread 2 — they're both writing the value 1, so in either of those cases the final result is 1. And it's the same deal if thread 2 reads first.

Any questions about that — about showing all the possible outcomes? This isn't too bad because each thread is only doing two things, but you can imagine that for a more complicated program it quickly becomes pretty much impossible to argue about all possible interleavings of threads for a given variable. So we have something in place to force an order on threads — or really, to force that only a single thread runs a piece of code at a time, so concurrent accesses can't happen because you're essentially disabling them. That something is called a mutex, and we'll see what it is.

You can create a mutex either statically or dynamically — this is just for your reference when you start using them. If you define it as a global variable, you can just set it equal to the static initializer. Otherwise, the other option is something more akin to malloc: you declare an m2, get the memory for it from malloc or the stack or wherever, and then call the init function with a pointer to it, and it initializes the mutex for you. If you want to include attributes, that's still in the pthread family of calls — just like pthreads have attributes, mutexes have attributes — but for this course we're not going to touch attributes; we're just going to take the basic mutex.

So I've said "mutex," and it's not quite clear yet what it is or how to use it. A mutex is a lock, and it operates pretty much exactly like a lock you'd imagine. By default it is unlocked, and each individual thread may go and acquire the lock, and the rule is that only one thread can hold the lock at a time — if one thread has it, no one else can get it. It might be easier to think of the mutex as the key, and the lock and unlock calls as you actually doing those operations. So you can have some code up here that just runs; you can't argue about it, it could be preempted at any time, switch back and forth. Then you lock the mutex, and only one thread at a time will make it past that lock call. If the mutex is unlocked and a thread tries to lock it, it succeeds, and then it can execute the code in between, which has some nice special properties, and when it's done with it, it says so by unlocking. The reason that's labelled protected code is that if one thread acquires the lock and starts executing in there, and another thread would also like to execute that code, it won't be able to — it would first have to make it past that lock call.
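Here's a minimal sketch of both ways to create a mutex and how the lock and unlock calls wrap the protected code. It uses the counter demo from the lecture, so take the names as illustrative rather than the exact demo code:

    #include <pthread.h>

    /* Option 1: static initialization, fine for a global mutex. */
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    /* Option 2: dynamic initialization -- the memory for m2 can come from
       a global, the stack, or malloc; NULL means default attributes.
         pthread_mutex_t m2;
         pthread_mutex_init(&m2, NULL);                                   */

    int counter = 0;

    void *run(void *arg) {
        pthread_mutex_lock(&m);    /* only one thread gets past this at a time */
        ++counter;                 /* the protected (critical section) code    */
        pthread_mutex_unlock(&m);  /* let the next waiting thread in           */
        return NULL;
    }

    /* ... and once every thread is joined and done with it:
         pthread_mutex_destroy(&m);                                       */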
And the rule of a mutex is that only one thread may acquire it at a time. If another thread tries to acquire it while the first thread has it, it will just get to that line and stop — essentially block there until it can actually get the mutex. That means only one thread can be executing the protected code at a time, and if only one thread can execute it at a time, there are no data races, because we can't have any concurrent operations in it. There's another complication we'll get into later, where you can actually deadlock yourself if you have a roundabout situation with multiple locks, but for now we'll only argue about a single lock.

So here's what it looks like if I take the example from before and actually want to prevent the data race. Let's go to it. Here's our code: we have a main that creates eight threads, so numthreads is eight, and we do what we said before — for i = 0 while i < numthreads, so 0 to 7, we create a thread with no attributes, tell it to execute the run function, and give it NULL as the argument. After that point we have eight threads all running, the scheduler can go ahead and do whatever it does — that's where we got our crazy answers — and then the main thread joins all of them and waits for them to finish. The only difference now is that we have a mutex; at the end we print out the counter and destroy the mutex, because the threads used it and we're done with it.

We had our data race on that counter variable, so if we want to make sure there are no data races, we have to lock before we use it. That way, if another thread tries to acquire the lock while some thread is in the middle of updating the counter, it can't get in — it just blocks until the thread that holds the lock finishes incrementing the counter and unlocks, and the count is up to date; then the other thread can go ahead and acquire the lock. So if we run that — last time we ran it we got a bunch of different values every time, but now we get 80,000, 80,000, 80,000, every single time. So it seems to work, or at least work better than what we saw before.

Any questions or comments about this? Is this a good idea for speed — I'm using eight threads, I have eight cores, so this is fast, probably? Well, now I have eight threads, but all they're really doing is the mutex calls. Most of the computation, which isn't much, is incrementing a counter, which doesn't take long, and before that each thread has to acquire the lock and afterwards release it, which is itself some code, so it's going to take some time. And only one thread can actually do the increment at a time, so I can't have any parallelism: essentially one of my eight threads gets to do it while the other seven wait, then another one gets it while the other seven wait. So it's probably really, really slow. Let's go ahead and time it — well, I mean, oops.
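(What's being run here is just the shell's time builtin over the demo binary — the binary names below are made up for illustration:)

    $ time ./counter_mutex    # the mutex-protected version: correct, but serialized
    $ time ./counter_race     # the original racy version, for comparison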
So it actually runs fairly quickly — probably too quick to even measure, so this is probably a bad example. But if we build the version we had before and compare, even though this is really, really fast, the locked version is still something like twice as slow and uses a bit more resources. This probably isn't the best demonstration, but it is definitely slower, and we can argue about why: seven of the threads are just going to sit around doing nothing while they wait. So it's probably not the greatest.

All right, any other questions? Yep — so yes, it's going to lock out the other threads, and the idea is that only one thread is doing the ++counter at a time. All the other threads can still be executing the loop, for example incrementing i and doing all that, and they can do that in parallel if they want to; but as soon as a thread reaches the lock, only one will make it through that call at a time, so only one thread executes that line at a time. Other than that, they're running in parallel. And mutex destroy — yeah, it frees any memory associated with the mutex that you didn't allocate yourself. If I had malloc'd the mutex, I'd still have to free that, but internally it might have allocated some other fields, and destroy frees those. Okay, any other questions about that? Cool.

So what "critical section" means is that only one thread executes those instructions at a time, and it has these properties. The first is safety: only a single thread is in the critical section at a time. The next is liveness: if multiple threads reach the critical section, one and only one must proceed — because if two proceeded, we'd still have a data race and we wouldn't be preventing anything. Another part of this is that the critical section can't depend on outside threads, because then we can mess up and deadlock, which means threads don't make progress — don't worry about that for now, we'll start covering it in the next lecture. The next property you want for a critical section is bounded wait, a.k.a. starvation-freedom. We've heard about starvation in other contexts already, like scheduling, and now it comes up again with locks: no starvation means that if a thread makes it to the lock call, it must eventually make it past that lock call, assuming whoever holds the lock does eventually unlock it.

Our other goals for critical sections, if you want to implement this yourself: it should be efficient, so you don't want to consume resources while waiting — if I can't acquire the lock, I should probably do something smart and not just constantly do nothing and waste CPU cycles. You also want it to be fair, so each thread waits approximately the same amount of time — maybe even first come, first served, to ensure some kind of order between them. And of course you want it simple: easy to use, hard to misuse, which is always your goal when implementing a library.

And similar to libraries, there are levels to synchronization. At the bottom there are hardware-provided, low-level atomic operations that are specific to your CPU. Above that, you build high-level synchronization primitives — the mutex is an example of that, and we'll see others throughout the course. And using all those high-level synchronization primitives, you have a properly synchronized application that doesn't have any data races.
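Just to make that bottom layer a bit more concrete — this is a preview, not something the course example needs — here's a minimal sketch assuming C11's <stdatomic.h>, where the increment is done with a hardware atomic read-modify-write, so this particular counter wouldn't even need a mutex:

    #include <stdatomic.h>

    atomic_int counter = 0;

    void *run(void *arg) {
        /* One indivisible hardware atomic add -- no lock, and no data race
           for this specific increment. */
        atomic_fetch_add(&counter, 1);
        return NULL;
    }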
So we'll go over different implementations of an actual mutex lock and unlock, and some of them are really, really simple. If you have a uniprocessor system — it just runs on a single CPU core — your implementation can be really simple, because on a single core, the only way to actually have concurrency is via an interrupt; otherwise code just executes from start to finish. So if, on a single CPU system, your only source of concurrency is interrupts, the obvious way to disable concurrency is to disable interrupts. Your lock call just disables interrupts, so no concurrency can happen, and your unlock call re-enables interrupts, re-enabling concurrency. If that's the only source of concurrency, well, I've just eliminated it — no data races, all good. But unfortunately for you, nowadays you have to worry about having multiple cores on your system, and this isn't going to work there. And also, because we're thinking about an operating system, these are hardware interrupts I'm talking about: you have a separate kernel space and user space, your applications run in user space, and you can't really disable hardware interrupts from there, because you're not allowed to.

So let's try again and implement a lock in software. I'll represent my lock as just an integer that the caller allocates. In my implementation, init sets the value of that int to zero, and I'm going to follow a really simple convention: if the int is zero, it means it's unlocked; if it's one, it means it's locked. My lock code is going to be pretty dumb, but let's see: it just has a while loop that keeps on executing as long as the value the lock points to is one. There's a semicolon as the body of the while, which means it does nothing — it just keeps trying that over and over and over again, and the idea is it only gets out of the while loop when the lock value is zero. While it's one, it keeps going over and over; as soon as it reads a zero, it breaks out of the loop.

Yeah — and you assume that threads will properly pair lock and unlock. Oh, and this assumes you have a multiprocessor system: one thread would be doing lock, and the only way that value would be one is if another thread acquired the lock, and eventually that thread is going to unlock at some point. The idea is that I'm constantly re-reading the value, and if I read a one it means it's locked by another thread, so I just keep reading it over and over until that thread unlocks, which just sets the value back to zero; then my lock code reads that zero and progresses. You don't buy this? Yep — we're writing something like a mutex lock; we wait until they're done. So in the lock code, if I see that the value is one, what should I do? If another thread has the lock, then I just... what? How? Right — I have a while loop. Okay, so we agree with this as kind of an implementation. So let's see, let's demonstrate. Oh — yep, oops. So the question is: why do we set the value to one in lock?
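So that it's all in one place, here's that naive (and, as we're about to see, broken) lock written out as C — a minimal sketch of exactly the scheme on the slide, where the lock is just an int (0 = unlocked, 1 = locked):

    void init(int *l) {
        *l = 0;                /* 0 means unlocked */
    }

    void lock(int *l) {
        while (*l == 1)
            ;                  /* spin: keep re-reading until it reads 0 */
        *l = 1;                /* claim the lock -- the step the question is about */
    }

    void unlock(int *l) {
        *l = 0;                /* release: back to unlocked */
    }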
Yeah — after the while loop, we know the value of the lock is zero, so we set it to one, indicating that we have the lock now. So yeah, a simple scheme: if the value is one it means it's locked, and writing the one is how you claim it. Right? Okay.

So let's assume our lock is initialized to zero and see what happens here. We have thread one and thread two; let's make this bigger and put in our counter example. Thread one does lock(l), then counter++ — which is the part we actually care about — then unlock(l), and thread two would be executing the same thing: lock(l), counter++, unlock(l). And remember our lock code: while l equals one, we just loop forever doing nothing, and then we indicate that it's locked by writing a one to it; the unlock code just writes a value of zero.

So say thread one goes first. The value of the lock is initially zero, so thread one reaches the lock, goes into the while loop, reads the value of l, it's currently zero, so it breaks out of the while loop, and then this thread sets l equal to one. Now the idea is that if we got preempted at this point and switched over to thread two, thread two would make its lock call, go into that while loop, read a one, and just keep reading that over and over and over again. Eventually it gets preempted and thread one executes again; it can take its time, do whatever, increment the counter, and then reach the unlock, which just writes l equal to zero. Now if it gets preempted again and we go back to thread two, thread two reads l again, it's zero now, breaks out of the while loop, sets l equal to one, increments the counter, and then writes a zero to it when it's done. So only one thread can execute that counter increment at a time, right?

Oh — why no? Yeah, so that's exactly right: in implementing this mutex, my implementation itself has a data race, so you have to actually argue about that too. What could happen is exactly what was just described.
So initially the value of l is zero, indicating it's unlocked. What could happen is that thread one starts executing the lock call, makes it to the while, reads the value of l — that's a memory read — and it reads zero, all good. And then, right before it actually writes the one to indicate it's locked, it gets preempted, exactly then. Thread two then goes ahead and executes lock, and that memory location is still zero, so thread two also reads l as zero. Now they're both at this point, and it doesn't matter who runs next — we're already past saving. Say thread two goes: it sets the lock equal to one, returns from lock, and starts executing the counter increment. It could get preempted there — say right after it reads the value of counter, the same issue as before — and then thread one runs, sets the value of the lock equal to one again, which doesn't really matter, and it also makes it past the lock. So now we have two threads somewhere in the middle of executing counter, and we're screwed: we have a data race again, and our lock doesn't quite work.

So does everyone kind of agree that this implementation sucks? How would you fix it? Yeah — one suggestion: each thread has its own variable that's private to it, and then in the lock I check them all. I can't quite do that with a variable that's private to each thread, because then one thread couldn't check the others — but okay, I could have one shared variable per thread; you could probably get away with doing something like that, and it could work. But you're being wasteful: every new thread makes this slower and uses more memory. Yep — where did you learn about compare-and-set? Somewhere, okay. So the idea is that this is hard: if I don't have any help from my CPU, it's really, really, really hard, or just kind of wasteful.

So just to get back to this: there are two problems with the implementation. It's not safe — both threads, we saw, can be in the critical section — and it's not efficient, because it wastes CPU cycles: if a thread doesn't get the lock, it'll try over and over again and essentially burn its whole time slice doing something you know it'll never make any progress on. So it's kind of a waste.

Oops, come on. Oh, cool, my slides died. Oh, gee. Okay, well, whatever — let's talk about the midterm then. Any questions about the midterm? Yeah — it's quite similar to the one I showed you, the fall 2021 one. You can ask me about it, but I have a really bad memory — I had to submit it later today, so I've already forgotten it — but it's very, very similar to that one, although there are some questions along the lines of "did you actually do the labs?" So if you've done the labs and you can do that sample midterm — if you can do some virtual memory questions, some scheduling questions, some process questions, which pretty much means you did lab two — you'll probably be fine. So yeah — how did I make the short-answer part?
For the short-answer questions: basically, you had the summary at the end of each lecture, so I tried to distribute them pretty evenly over the lectures — some key takeaway you should have, or something you should be able to explain. So if you can do the sample midterm, actually reviewed all the lectures, paid attention, and did the labs, you should be fine, hopefully. Yeah, sorry? Nope, no threads on the midterm — the word "thread" does not exist on the midterm. Let's see what I did: there are 30 marks of short answer, ranging from two up to five marks per question, and then three bigger questions. I'm probably saying too much — whatever, just don't stop me. There's one question on processes, one question on scheduling, and one question on page tables. So yeah, I know that's not much to go off of, which is kind of scary, but it's not that bad: I mostly write midterms not to try and screw you over, more to check that you actually learned something. Yeah, processes.

So, any other questions while I'm spewing too much information? Nothing at all about the midterm? Yeah, you can ask more questions on Discord, but honestly the midterm's not that bad. Will you be writing code? Do you want to write code? So yeah — you do not write code, you will read code, and I think the most I ask that's related to coding is something like: hey, I wrote something stupid, what should I have done to not write something so stupid? Yeah, and no — you have a cheat sheet. Okay, what functions do you need to know? Not that many: you need to know fork, you need to know wait, and whatever you used in lab two — maybe dup2, open, close. Is there any other you really used? Oh, exec, maybe. What? Yeah — dup2, yeah, pipes, good. And for wait, I just gave you wait with nothing else, just wait — wait doesn't really take any optional arguments — so you don't have to write the whole thing, you don't have to write the entire man pages or try to fit them on the page. You've used fork in the labs, right? So for fork, you might want to write: creates a new process, returns a process ID. dup2: creates a clone of a file descriptor. Left, right, yeah. Well, strace is a good one to know off the top of your head, but I don't think I asked about it, because the sample midterm I gave you had strace in it.

Yeah — so the lab-related questions involve fork and dup2 and all that, so a question would probably just involve those, and it's kind of related to lab two. No, it wouldn't be "in lab two, how did you implement this" — everyone's lab two solution was different, some of them crashed — so it's little snippets of things that the lab would have helped you understand. And mostly the "hey, I did something silly" question was generated from me looking at your labs, so someone who fixed that thing probably already has the answer.

Well, as for the test cases: you don't even have to have an implementation. You can write whatever you want and check whatever values; it doesn't actually have to run with your code. The idea behind the lab testing is that it's for the cases where you don't know what your implementation should do if you implement it.
So you're allowed to essentially ask the solution what the numbers are. So if you crash the solution — if the crash happens in my code — full marks. If you do not crash my code, then it should give you some output, right? The idea behind libraries is that they should work, and if they don't work, they give you an error code. So if you write something completely insane, all the checks you do on the results of those calls should probably just be negative one. So you can write something insane; the only thing I ask is that you call init at the beginning, only once — don't do anything that silly — and then do whatever the hell you want. Yeah, and the due date for the actual labs is next Monday. Yep — no, you don't put any assertions in these, because the idea is that you might not know what the correct value should be. So just don't write any assertions; just check whatever value you want to know the result of, and that's it. And no, don't touch check itself, just call check: all we're going to do is run your test code with the solution and then tell you all the results from check.

Okay, so let's see. I don't know if I can share that — let's see, let's go here. Okay, let's quickly make a new file. So there's this check function, right? void check, takes an int — you don't have to worry about what's inside it. In your test file, you write a test function, and then I will execute it, so you can write whatever you want in there. Let's come up with a test case now. First you call the init function, because you have to, and then let's make something up: say int id = (some create call), and then you could just do check(id). The idea behind that is, if I run it with the solution, you get the result of that. In this case you're just writing the test, and it can do whatever the heck you want. If you don't have an implementation, this would just print some garbage, because you never actually set anything — you didn't implement it — so if you executed it, it would probably say check: -1. But the idea behind that is you don't have to have an implementation: I will run it with the solution and then tell you the result, so later I'd tell you that, hey, when I ran it, I got check: 1. And your test cases would be more involved than this. Say I want to do something silly — oh, I want to join this, and it doesn't have to work, right? I can make it silly, and I could check all of those values. That's a test case if I really wanted: with no implementation they'd all be negative one, but if I ran it with the solution, the solution would actually join on t1 and it would do something like check: 1, check: 0, and then all the other ones would be negative one because it's already been joined. Yeah, probably. No — not initializing it and hoping it works does not count; doing that is not valid. The idea behind this is to help you with the tricky cases, not to make me regret my life for offering to help with it. So yeah, don't do anything terribly silly — that's terribly silly.
But something like this — it's kind of silly, but it's a valid test case. So if I just brute force it over and over again, it'll work? Hopefully not. Yeah, and if you cause my code to seg fault, full marks — no, I don't want to say that. Okay, never mind. Huh? You think it's less effort to get my code to seg fault? Wow. Okay, go for it. Yeah, I think my code will be fine — don't worry, I don't think you'll get my stuff to seg fault, but you're welcome to try. "My code seg faults, watch" — no, you have to get my code to seg fault. Yeah, if I said "get your code to seg fault," everyone would get 100 instantly: look, x = NULL, *x = 4 — seg fault. Everyone has seg faulted without even trying. So yeah, getting your own code to seg fault is easy; getting mine to seg fault is probably harder. But yeah, that's a good challenge. I won't quite say it, but if you do cause my solution to do something really, really, really stupid, I'll do something — I won't say what it is, because I'll probably regret whatever I say, so I should probably think about it. All right, anything else? Well, the lecture's over anyway. So just remember: I'm pulling for you. We're all in this together.