All right, welcome back, and thanks for braving the snow — I almost didn't make it here today. Today we're learning basic scheduling.

There are two important types of resources: preemptible and non-preemptible. The major difference is that a preemptible resource can be taken away at any time and used for something else. The CPU is the classic example: I can give your application the CPU, it executes and does whatever, and then the kernel can take the CPU away from you and start running another application. Preemptible resources are typically shared through scheduling.

A non-preemptible resource is anything that cannot be taken away without acknowledgement — it's given to you and you get to use it; you don't have to share it. Disk space is an example: you get some disk space, and whether you use it or not, it isn't taken away from you. Non-preemptible resources are shared through allocation and deallocation, and memory is another example. As a side note, if you get into really high-performance computing you can actually allocate and deallocate CPUs, but under normal circumstances the CPU is just a preemptible resource that the kernel controls.

There are two pieces that work together here: a dispatcher and a scheduler. The dispatcher is the low-level mechanism that actually performs the context switch, and the scheduler is the high-level policy that
decides which processes run, when, and where. The scheduler runs whenever a process changes state.

First we'll consider non-preemptive scheduling: once a process is executing, it doesn't stop — it runs until completion. In that case the scheduler is pretty simple: we only have to make a decision when a process terminates and something else needs to go on the CPU.

The alternative is preemptive, which means the operating system runs the scheduler at its own will. If you run uname -v, the kernel version string will likely say PREEMPT in it, meaning it's a kernel that maintains control and will pause your processes and swap them in and out; they don't just run until completion.

There are a few metrics we care about when we discuss scheduling, and some of them are at odds with each other. We want to minimize waiting time and response time: you don't want a process just sitting there doing nothing — you want it to actually execute and perform useful work. We want to maximize CPU utilization: don't leave a CPU idle. With a single CPU that's not too difficult, but it becomes harder the more CPUs you have; with 128 cores or so, the job is significantly more difficult than with a handful. We also want to maximize throughput — complete as many processes as possible. And the last one, which is at odds with all the others, is fairness: ideally each process gets approximately the same share of the CPU.

The first and most obvious scheduling strategy is what happens when you go to, I don't know, McDonald's:
It's just first come, first served — and that is actually a valid algorithm; it's the most basic form of scheduling. The first process that comes in gets the CPU: you essentially form a line, storing the processes in a FIFO (first in, first out) queue, and whatever order they arrive in is the order they execute in. It's not a very compelling algorithm, but it's really easy to implement, and it's typically the first thing you think of.

We'll go through a bunch of schedulers today, all represented by Gantt charts, and we'll use the same example throughout, fudging the numbers a little each time. We'll consider four processes, P1 to P4. Each has an arrival time — when it wants to start executing — and a burst time, which is just how long it will actually execute on the CPU. To simplify everything, we argue in whole time units, where the smallest unit is one: you can't do half a time unit or anything like that.

For this first example, all four processes arrive at the same instant, time zero, so we'll assume an arrival order between them: P1, P2, P3, P4. With first come, first served, the schedule looks like this. At time zero every process could execute, and we just pick the first one that arrived, P1. Since we're non-preemptive, as soon as we pick it, it runs until it's done: P1 runs from time 0 all the way to 7 — seven of those boxes. As soon as it's done we have another decision to make: the next one in line is P2, which runs for its four time units, from 7 to 11. At time 11 we schedule P3 for its one time unit; at 12 we make another decision, and the only thing left is P4, which runs for the last four time units.

Any questions about that simple scheduler, or the way it's presented? [A question about tie-breaking.] Tie-breaking works here because I gave an explicit order. Everything looks like a tie, but you will always be given an explicit arrival order to break the ties — in reality one of them would have arrived some microsecond earlier.

Now let's compute the average waiting time: add together how long each process waited, then divide by four. They were all ready to execute at time zero. P1 waited zero time units because it executed immediately. P2 waited seven, because it had to wait for P1 to be done. P3 waited 11, because it had to wait for P1 and P2; and P4 waited 12, waiting for the first three to finish. So the average waiting time is (0 + 7 + 11 + 12) / 4 = 7.5 time units. Keep that number in your head.

Going back to the tie question: this looks like a tie, but now we'll just assume a different arrival order at that same instant.
So there are no ties now: the order is P3 first, then P2, then P4, then P1. If we run the same first-come-first-served schedule and calculate the average waiting time, it's quite different. P1 waited nine time units, because it had to wait for P3, P2, and P4. P2 only waited one time unit, P3 waited zero, and P4 waited five. So the average waiting time is (0 + 1 + 5 + 9) / 4, which is 3.75. Just from the arrival order, without changing anything else, our average waiting time went down by half — we just got the luck of the draw. That gives us the intuition that this is really, really inconsistent and probably a bad idea, and we can probably do something smarter.

The smarter — in fact provably optimal — thing to do to minimize waiting time is: instead of first come, first served, schedule the job with the shortest burst time first, still with no preemption. If a process hasn't arrived yet, we can't schedule it. For this version — and this is the example we'll use for the rest of the lecture — the arrival times actually differ, so there are no ties on arrival. That's the only change.

Now with shortest job first: at time zero, only P1 has asked to start, so we have no choice but to choose P1, and with no preemption it executes until it's done at time seven. While it's executing, P2 arrives at time two but can't run; P3 arrives at time four but can't run; and P4 arrives at time five but can't run. So when P1 finishes at time seven, we have three processes waiting: P3, P2, and P4. We take the shortest job first, which is P3 with a burst of one. Between P2 and P4 there's a tie in burst time, and in the case of a burst-time tie you take whichever arrived first — P2. So the order at time seven is P3, then P2, then P4: P3 executes for its one time unit, then at time eight P2 runs, then P4.

Now the average waiting time is a bit different, because not all processes arrived at time zero. P1 waited zero time units: it came in at zero and started executing at zero. P2 waited six: it arrived at two and didn't execute until eight, and 8 − 2 = 6 (or you can count the boxes). P3 arrived at four and didn't execute until seven, so it waited three. And P4 came in at five and didn't execute until 12, so it waited seven. The average waiting time works out to (0 + 6 + 3 + 7) / 4 = 4.

[A student asks: how do we know the burst time in advance?] That's a good observation — in real life you just start a program and don't know how long it will run. When you're actually scheduling, you will not know the burst time; this is a way to evaluate schedules after the fact. There is some research on predicting burst times — you could use past executions, or some AI model — but if your prediction is wrong, you might end up worse off than with a more general policy.

So shortest job first is not practical. It's provably optimal for minimizing average waiting time, but you don't know the burst times, which throws a wrench in our plans. It also has a fundamental problem called starvation: really long jobs might never execute. If you constantly get little short jobs and always favor them, your CPU will only ever execute short jobs and the long ones never get a turn. Being in a situation where a process can never get resources and execute is starvation.
We can tweak shortest job first by adding preemption. Shortest remaining time first is exactly that tweak: the same rules, except that we can interrupt a running process whenever a job with less remaining time shows up. Again we assume the minimum execution time is one time unit. This also provably minimizes the average waiting time — as low as it can go.

With the same setup: at time zero there's only P1, so it executes. At time two, P2 arrives, and now we have preemption. P2 has a burst of four; P1 wanted seven and has run for two, so it has five remaining. P2 has the shortest remaining time, so we schedule P2 instantly. P2 executes for two time units, and then P3 arrives at time four. P2 has two time units remaining; P3 wants only one, which is shorter than everything remaining, so we shove out P2 and put in P3. P3 executes its one time unit, and then P4 arrives at time five. There's no tie here: P2 has two time units remaining, P4 has four, and P1 has five. So we pick P2, which runs for its last two time units; then P4 runs for its four; and then at time 11, P1 finally gets back on the CPU and runs to completion at 16.

Shortest remaining time first still has problems with starvation — in fact, it gets a little worse. Without preemption, our P1 at least executed to completion once it started. With preemption, you tease it: it gets to run initially, then you kick it away, and it might never run again.
So this has the same starvation problems.

If we compute the average waiting time here, there are little gaps we have to account for, so it's a bit more work. P1 waited nine time units — nine time units during which it wasn't executing. P2 didn't have to wait to start — it executed instantly — but before it was done it waited one time unit while P3 executed, so it waited one. P3 waited zero: it came in at four and executed immediately. And P4 waited two time units. A shortcut, if the gaps are big and you don't want to count boxes: take the time the process finished, subtract when it arrived, and you get its total time in the system; subtract out the burst time, and what's left is its waiting time. For example, P4 finished at 11 and arrived at five, so it was there for a total of six time units; it executed for four of those, so it waited two.

So our average waiting time goes down a little, from 4 with no preemption to (9 + 1 + 0 + 2) / 4 = 3 with preemption — intuitively, mostly because we essentially kicked P1 out early.

Hopefully this is fairly straightforward — less confusing than all the forking and processes — because this is as tricky as it's going to get today.

[A student asks: when you switch between processes, is the process still in memory? What happens to it?] We'll get into the mechanics later, but realistically when you switch, you're switching register values — the program counter and so on — and all of the process's memory stays where it is. When we get into virtual memory, we'll see how processes can't use each other's memory and you get some protection, but that comes after virtual memory.

Okay — this is the last and most complicated algorithm we'll get into today, and it also has some intuition behind it: round robin. So far, nothing has been fair. If you have siblings, your parents might have done the fair thing: you have to share the toy, you each get five minutes with it, and you pass it around. Round robin is essentially exactly that. The operating system divides execution into time slices — the fancy word is quanta; an individual time slice is called a quantum, the smallest unit of scheduling time. To implement round robin you still use a FIFO queue of processes, similar to first come, first served, but you kick things in and out: at the end of its time slice, the currently executing process goes to the back of the line.

Any intuition about the quantum length? Are there practical considerations when selecting this value? [A student suggests a clock cycle.] It would have to be at least a clock cycle to be able to change any values at all, but you couldn't do much work — if it's way too short, there isn't even enough time to switch processes, so nothing would happen. [Another student points out the overhead.] Right — there's going to be some overhead from switching processes.
So you probably don't want it too fast; it should be longer than the switching time. What about the other extreme — what if the quantum is longer than any process takes to execute? [Student: it's just first come, first served again.] Exactly, and then we have all the same problems again. So we need a fine balance: too short and we waste a lot of time just switching back and forth, because that can't happen instantly; too long and it's first come, first served, which is decidedly not fair. If you have to share something with your sibling but you get it for 18 years, well, that works out great if you're the first sibling.

We'll illustrate this by showing different quantum lengths for the same schedule we've been arguing about. These are exam-ish questions, so I'll go through it carefully. Same arrival times as before, and a quantum — a time slice, whatever you want to call it — of three units.

At time zero, P1 arrives and there's nothing else to schedule, so it runs for its time slice of three units: P1, P1, P1. At the bottom, at every instant where it changes, I'll draw the queue. Initially P1 comes into the queue and we execute it, since there's no other choice. At time two, our empty queue gets P2 added to it. When P1 finishes its time slice at time three, it gets added to the back, so the queue reads P2, then P1. You always pick whatever is at the front for the next scheduling decision, so P2 executes for its three-unit slice: one, two, three. During that slice some things happened: at time four, P3 arrived — P1 was at the front, because we took P2 off, and P3 went to the back. At time five, P4 came in: P1 still at the front, then P3, then P4. At time six, P2's slice ends; it still has more time it would like to execute, so it goes to the back. Now the queue is P1, P3, P4, P2, and we just take things off in order.

P1 is next, and it fills up its whole three-unit slice — it wants to execute for seven, and this only brings it to six — then gets thrown to the back. Now the queue is P3, P4, P2, P1. We pick off the next process, P3: P3, P3, P3 — whoops, I jumped the gun. Thank you: P3's burst is only one, so it wouldn't use its whole time slice. P3 executes for one time unit and it's done, so it just gets kicked off — it executed, it's finished — and we pick the next thing in the queue. You're allowed to end your time slice early, and the rest doesn't get wasted: we don't just do nothing for two time units, because the kernel got the exit and can switch to something more useful. The queue is now P4, P2, P1. P4 executes for its three time units and isn't done yet — it still has one time unit left — so the queue becomes P2, P1, P4. Each of these processes has exactly one time unit remaining, so they just go in order: P2, P1, P4.

Any questions about that? [Student: doesn't the wait time get really long?] In this case yes — we essentially had just one time unit left on all of them, so we were a bit unlucky and increased the waiting time. Let's compute it. How long did P1 wait around? Eight. P1 ended at 15 and started at zero, so it was there for 15 time units, and it took seven to execute; if we subtract, it must have waited eight (or count the boxes if you want). P2 ended at time 14 and came in at time two, so it was there for a total of 12 time units and executed for four — it also waited eight. P3 finished at 10 and arrived at four, so it was there for a total of six and executed for only one.
So it must have waited five time units. The last one is P4, which waited seven: it finished at 16 and came in at five, so it was there for 11 and wanted to run for four — do the math, that's seven. So the average waiting time is (8 + 8 + 5 + 7) / 4 = 7.

There's another time we might care about: the time from when a process first asks to run until it executes a single instruction — maybe you get some output out of it. That's called the response time. You generally care about it for things like graphical applications that won't do much CPU work; you just want them scheduled soon. For P1 the response time is zero, because it started and ran instantly. P2 came in at two and got its first time slice at three, so it waited one. P3 arrived at time four and didn't start executing until nine, so it waited five. And P4 got requested at five and didn't start executing until 10, so it also waited five. The average response time is (0 + 1 + 5 + 5) / 4 = 2.75.

And, as someone brought up before, switching between processes is not instant — so that's a third thing we want to measure. Every switch from one process to another is a context switch, and we can count them. Here it switched from P1 to P2 at time three, P2 to P1 at six, P1 to P3 at nine, then P3 to P4, then P4 to P2, then P2 to P1, then P1 to P4. Counting all the blue markers: seven context switches. So with a quantum of three: seven context switches, average waiting time 7, average response time 2.75.

One question you might have: what if a process arrives at the same instant we throw another one to the back of the line? In that case you always favor the new one, because that keeps its response time lower — the other one has already had its first response, so it can go to the very back of the line behind the new process.

[Student: what's the difference between waiting time and response time again?]
Right — average waiting time is how long each process waited in total, from the time it arrives to the time it finishes: how long it sat there idle doing nothing while something else got a turn. Response time is just the time from when it arrives to when it first starts executing. Going back to the sibling analogy: waiting time is the total time you didn't get to play with the toy.

Now let's do the same schedule again with a quantum length of one — as low as it can get — just to see what happens. This is more of a pain, because we'll be switching over and over; all that changes is the time slice. At time zero we only have one process, P1, so we choose it (to make things easier I'll just write the numbers this time). At time one its slice ends; it's the only thing in the queue, so we just execute it again. Then P2 comes in at time two, and we favor the new arrival, so the queue is P2, P1: we execute P2, and the queue becomes P1, P2 — nothing new. Then we execute P1. At time four we get a new friend, P3, and the one that just ran always goes to the extreme back of the line, so the queue is P2, P3, P1. We schedule P2; the queue looks like P3, P1, and now we have another new friend, P4 — then P2, which just executed, goes all the way to the back: P3, P1, P4, P2.

Now all our processes are in, and it just cycles through that order — three, one, four, two; three, one, four, two — until things are done (hopefully you don't want me to write the queue out every single time; that would get painful). P3 executes and it's done at that point, since it only wanted to execute for one time unit anyway, so the rest of the queue is P1, P4, P2, and it goes one, four, two; one, four, two. At that point P2 is done, because it only wanted four time units and it's had them, so now it's just cycling between P1 and P4: one, four, one, four — and then they're all done. Any questions about that? It's the same thing, just more tedious.

[Student asks about ties.] This is just round robin, so our only tie is a process getting kicked out at the same instant a new one arrives — we're scheduling a single CPU, so we won't really have other ties — and as before, we'd prefer the new one.

Okay, let's get all of our numbers again. For average waiting time, for every process we count how long it waited around doing nothing from arrival to completion, then divide by however many processes there are. P1 finished at 15 and started at zero, so it was in existence for 15 time units and executed for seven of those: it must have waited eight. P2 ended at 12 and came in at two, so it was there for a total of 10 and executed for four: it must have waited six of those 10 time units. Anyone, quick math — how long did P3 wait around? One: it came in at four, waited one time unit, and got scheduled. And P4? It finished at 16, came in at five, and ran for four — anyone, quick math, or have we fallen asleep? Seven. Add those together and divide by four processes: (8 + 6 + 1 + 7) / 4 = 5.5. Our average waiting time went down — hey, it was 7 before, it's 5.5 now. That's pretty good.

What about the response time? Thankfully not much counting is involved here. P1 came in at zero and started executing at zero, so its response time is zero. P2 came in at time two and started executing right away — another big old goose egg. P3 came in at four and waited one time unit before it was scheduled, so one. And P4 was the unluckiest of the bunch: it came in at five and waited around for two time units. So (0 + 0 + 1 + 2) / 4 = 0.75. Our response time went from 2.75 all the way down to 0.75 — way better, which makes sense, because we're swapping so fast.

The number that probably went up, and isn't great, is the number of context switches, because so far I've been assuming everything is instant. How many did we have? One, two, three... fourteen. Yikes. If context switches weren't instant — say, for argument's sake, each one took a full time unit — then realistically this whole schedule would have taken about 30 time units to execute instead of 16. Almost half that time would have been spent just switching between processes, which you can imagine is not that great.

You could do it again with a quantum of 10 units, but 10 units is longer than anything runs, so it becomes first come, first served. The number of context switches goes way down — we only switch three times — and our average waiting time actually improves a bit, but our response time goes way up. It's the same case as first come, first served without preemption.

So round robin's performance depends on the quantum length and the job lengths. Round robin typically has low response time and good interactivity — which you want for applications you interact with — it's fair about the CPU, and it has low average waiting time, with the caveat that the job lengths vary. But the performance really depends on that quantum length, and there is no correct answer: at the extremes, too high and it becomes first come, first served; too low and there are too many context switches, which is just overhead.
That overhead isn't time your processes are actually executing. Round robin also has another property that we didn't show in the examples: the average waiting time actually gets worse when the jobs have similar lengths. That's something you can simulate yourself.

So scheduling involves lots of trade-offs. We looked at a bunch of different algorithms, although some of them were probably somewhat intuitive: first come first served, shortest job first, and shortest remaining time first, which is the same idea with preemption and which optimized for the lowest waiting time. Then we had round robin, which kind of optimized fairness and response time, but the average waiting time gets sacrificed a bit.

Before we end, let's go back to some process practice. Something like that looks a bit weird, so let's argue about what happens as it runs. I'll try to explain it in a slightly different way, to show that it's the return value of fork that's different, and really try to illustrate this.

Initially, let's assume we're just executing a single process. It's process 100, and it's currently at that first line. So far there's no fork, so everything is as you expect: process 100 executes this line and gets some space for pid, and on the right here I'll write out its variables. So process 100 has a variable called pid. What's its value? Anyone remember their C? Someone said 101, but no: this is just C, and it's an uninitialized integer. So what's its value? Zero if you're lucky; undefined behavior otherwise, so it's garbage. Let's just make up a value so we can keep track of it; I'll assume it's 42, just for fun.

Then when you fork, one process calls fork. In this case process 100 calls fork, and that's what gets cloned: process 100 gets completely cloned, and a new process pops up.
That new process is an exact clone. Process 100 makes process 101, and 101 actually gets that value of 42 for pid, because it's an exact clone. The only difference now is the return value of fork. So you can think of it this way: it's a clone at the time of the fork, the only difference between the processes is what fork returns, and whatever each one does with that return value afterwards is independent. In process 100, fork returns 101, because that's the ID of the new process, and in process 101, fork returns zero, because it's the new child.

At that point, either one of them could return from fork first. If 101 returns first, it updates its value of pid with the return value of fork, because that's what pid = fork() does, so its pid becomes zero, and then it's done executing that line. If process 100 goes next, it updates its own copy of pid with its own return value of fork, which is completely different from the other one: its return value is 101, the process ID of the new process. So now both processes are on the next line.

Now either one could execute and would call fork again. Let's say process 101 calls fork, so a new process gets made, 102, which is an exact copy, and it starts with pid equal to zero. Since 101 called fork, fork returns 102 in process 101, and in process 102 it returns zero. Again, whichever one returns first is something you could argue about, but there's no defined order, so we'll assume 101 executes first.
So 101 would update its pid to 102 and then carry on, and then it's done; and in process 102, pid just gets overwritten with zero again, and then they're independent. Finally, each of them would call exit.

Okay, we're out of time, but hopefully this helps a little bit, and we can finish it next time. All right. Just remember: I'm pulling for you. We're all in this together.