All right. Welcome back to a wonderful rainy Tuesday evening. So today we will be talking about one of the most important abstractions in this course, one we will be using over and over again: processes. We've already laid the groundwork for them in the previous lecture. So what a process is, is an instance of a running program. A program or application (I've been using those terms interchangeably thus far) is just a static definition. It's just that ELF file we saw before. It contains the instructions, any data that was associated with it (like the string in that first example), any memory allocations it might use, and the symbols it uses. So in the actual programs you compile, if you use printf somewhere in that executable, there'll be information in it that says: hey, I use printf; when the operating system loads me, you need to find printf in the standard C library, for example. Then your operating system figures out how to actually do that. And what we care about in operating systems are things that actually run and execute. So a process is what executes a program, and what the kernel does is manage processes. A process is just an execution of a program, and you can have many of them. A process encapsulates a bunch of different virtual resources; you can think of it as encapsulating pretty much all the virtual hardware on your machine. We already discussed a virtual CPU: at the very least, if we're virtualizing a CPU, the operating system needs to keep track of registers for a process, and the most obvious register, if we care about program execution, is probably the program counter, right? But it also contains all the other resources the program would use when it is executing, like memory and I/O, which we'll get into later.
But for now, again, we'll just care about the CPU, because we're all used to that by now. So just to very clearly separate this out: every execution runs the same code, right? If we have multiple processes of the same program, which you are allowed to do, each of the executions will be running the same code; there'll be the same block of code in every process. But where each one is currently executing, which you can think of as the current program counter, will be different for every process. Each execution is running a very specific part of it, and that can differ between processes. So that's why, if you had a long-running program and you were bored of waiting for output from it, you could just run another one if you wanted to, which would probably make the first one slower. But hey, you could do it. The reason the operating system does it this way is just because it's more flexible. It's a good way to encapsulate everything, especially when you're the operating system or kernel and you have to manage it, right? So a process contains both the information for the program, meaning all the instructions loaded into memory, and anything specific to its execution: where is it executing in the program, where's the program counter, maybe what files does it have open, what libraries is it using, and things like that. So you can have multiple executions of the same program, and a program can even run multiple copies of itself, which we'll get into later when we learn how to actually create processes. Within a process, all that accounting information, in operating-systems terms, is something you'll find called a process control block, or PCB. It basically just contains all the information specific to an execution of a program: the process state, which would be, you know, also CPU registers.
If you're the kernel and you have to keep track of who's running on the CPU at any specific time, you probably have some accounting information for scheduling. For instance, you might not want one program to run exclusively, you know, 100% of the time, so you probably need to track how long it's run for, so you know when to actually take the CPU away from it. But that'll all come in when we talk about scheduling and deciding which process to run at any one specific time. The PCB also has memory management information (again, a future lecture), I/O status (again, a future lecture), and any other accounting information you might need to make decisions with. So a process just encapsulates everything you need to know about a specific execution of a program. Okay, so, since we haven't used threads yet: if I say the words parallelism and concurrency, who thinks those mean the same thing? Oh, so everyone thinks they mean something different? Yes, hands up. No one has any hands. No one knows anything. All right, we got thumbs up for "no one knows anything." So in plain English, if you're talking to a normal, sane person that doesn't study computer science or computer engineering, parallelism and concurrency mean the exact same thing. But because I guess we want to make life difficult for you, they actually represent two different concepts. Concurrency is basically being able to switch between two or more things; at the core of it, it means you can get interrupted. The goal of concurrency is to make progress on multiple things and be able to juggle them around. Parallelism is doing two things at the exact same time. They're completely independent: you don't have to juggle them, you don't do anything, you just run them both at the same time, and you want to run as fast as possible. And that's kind of been the trend lately, right?
We just keep putting more and more cores in our CPUs and hope that more things can run in parallel, and therefore things go faster. So, a real-life situation. Let's say we're also polite adults, since we're in university now. You're sitting at a table for dinner, and you have, let's say, four options: you can eat something, you can drink something, you can talk, or you can gesture with your hands. The one weird thing, so we can discuss every combination (you'll have to bear with me on this), is that you're so hungry that if you start eating, you just won't stop, right? You won't interrupt yourself to talk; you will scarf down whatever is in front of you until you are done. So if we go off our definitions from before, what tasks can be done concurrently and in parallel? Let's start with something easy. Can I talk and gesture in parallel? Hands up if I can do both those things in parallel. Right, because one is with my hands and the other with my mouth, I can do them both at the exact same time and they make progress at the exact same time. So I can do both of those. And gesturing works the same with drinking: I can gesture and drink at the same time if I want to, I just have to use one hand, but pretend that's fine. Okay, what about if I want to drink and talk at the same time? Can I do these in parallel? Okay, someone's very talented. Can I do these concurrently? Yeah, I can do those concurrently, because I could take a drink, put it down, talk. I can interrupt myself; I can do whichever one of those I want at any specific time, I just have to switch between them. All right, well, what if I want to eat and drink at the same time? Can I do those in parallel? All right, another equally talented guy. Okay, again, remember, we're trying to be polite, nice adults.
So just one of us is; the rest of us are pretty polite. All right. So can we eat and drink concurrently? Oh, we got a few hands. Okay, so these are things we can't do concurrently, because of that stupid rule I said where you're so hungry that you won't stop until you're done. If I start eating, I can't start drinking, because I'm so hungry that I wouldn't stop. Again, a contrived example, but you might actually run into this in program execution. Let's see, what other example? Can I eat and gesture in parallel? Hands up for that. Yes, okay, we can do that in parallel. Can we do them concurrently? Hands up. No, right, I can't do it concurrently, because it's that weird situation where if I start eating, I can't stop. I could interrupt myself from gesturing if I wanted to, but I'm either starting or stopping the gesture while I'm eating; I'm not interrupting myself eating to make progress on the other one. So hopefully that's all the combinations of these, right? You can't eat and talk or drink, because it's the same thing at the same time and you can't switch. They're not parallel because they're using the same resource (you're using your mouth for both of them), and they're not concurrent because of that stupid eating rule where you wouldn't be able to stop and switch. But you could eat and gesture at the same time, right? But you can't switch. Again, this is contrived, and so are most cases where this might happen, but it's good to have an example of one that would apply, even though it's super contrived. So you could do those in parallel, but not concurrently, because you can't interrupt yourself to make progress on one at a time. You also can't drink and talk at the same time, but you could switch.
So I can't do them in parallel, because they use the same mouth resource, but I could do them concurrently, which basically means I could switch and make progress on either one of them. And I can drink (or talk) and gesture at the same time, because we can do them in parallel with different resources, and we could also interrupt ourselves, since we don't have that stupid eating rule where we just won't stop. So hopefully that makes sense about parallelism and concurrency, because I know in undergrad, for me, no one really explained it, or even that they were different. And if you look in the dictionary, it very much seems like they're the same thing. I'm introducing this now because when we talk about actual threading, threading involves this stuff, so it's a good idea to get a base for it now; we'll revisit it when we actually talk about threads. Okay, so this is kind of required reading if you go into the history of operating systems. There was a programming model called uniprogramming, which is what the old batch operating systems used. Has anyone ever used DOS? All right, we have like two people that have used DOS. So DOS is an example of this. I guess now I get to officially feel old, because I used DOS as a kid and no one else does. In DOS you can only run one program at a time, and it kind of sucked. That's what the uniprogramming model is: you only have one process running at any particular time. You don't have any concurrency, you can't do anything in parallel; you just have one thing running at a time, and if you want to run something else, well, you have to exit it yourself and then start up something new. With multiprogramming, which is what operating systems now are designed for, multiple processes are allowed to run at the same time or concurrently. So they can run in parallel or concurrently, and it's up to the kernel to decide which is which.
Right, in modern operating systems you want to take advantage of all the hardware resources you can. You want to run things in parallel as much as you can, and you also want to run them concurrently so it looks like even more things are running at the same time. Even if you have only four physical CPU cores on your machine, you might be running 16 different applications at a time, and you want it to seem like you're actually running all of them at the same time, even though some of them have to be running concurrently if we only have four physical cores. So this is the life cycle of a process. First, if you want to execute a program, you have to tell the operating system what you want to start executing so it creates a process for it. Then it will be in this created state where it's doing initialization: the operating system is loading it, and so on. As soon as everything's loaded and initialized and it's ready to execute, it will go into this waiting state, which you could also call ready (it kind of depends who you talk to). Then it will cycle back and forth, for the majority of its life, between this waiting state and the running state.
So whenever the kernel decides to run it, it will be running for a period of time, and then the kernel will put it back to sleep: it'll take the CPU away from it, and it'll be back in this waiting state. Sometimes when it's running, your program may request access to a file; it might be writing to disk or something like that, in which case it gets into this blocked state, where it's waiting on some hardware resource and can't make any progress until that's completed. Then the operating system can go ahead and move it back to the waiting state whenever that I/O operation is complete, so it can execute again. And then finally, at some point, you're going to call exit, and it will transition from running to terminated, and it will be done. And yeah, there's a question. Yep: so a frozen application is likely in the blocked state. If it's showing the beach ball of death, it's probably waiting on some hardware resource or some network connection, or it could just be waiting on some resource, or be in an infinite loop, in which case it would just be running forever and not making any progress (and they have some smart things to detect whether it's making progress even while it's running). So it could either be running infinitely or in this blocked state. Yep. Yep, so if you're in the waiting state, you can execute at any given time; you're just waiting for the CPU. The only difference between blocked and waiting (waiting can also be called ready) is that if the operating system decides to give you the CPU, you can actually utilize it. If there's free CPU time and you're in the blocked state, you can't run anyway, so it'll just skip over you. So if you're waiting on some network resource, or waiting for a write to come back, or trying to read a whole file at once, it will just wait, because hard drives are really slow, right? CPUs are way faster than hard drives.
Yeah: waiting is like, okay, I can give you CPU time; and blocked is like, no, I can't give you time until, say, the hard drive comes back with something. So the scheduler, whenever we get into that, is pretty much concerned with this loop here: managing what's ready and what's running. If it's blocked, it can't run on the CPU. So, waiting: if you look on your machine, you're going to have maybe hundreds of processes, and there's no way to run them all, even if you have a very high-end machine with 16 cores. If you have 16 cores and 100 processes, there's no way you can run them all at the same time, so something like 80 processes are going to be waiting at any given time, because the kernel has to do this juggling act. Yep. Okay, so we don't know how to do any of this stuff yet, so we won't even get into how to create a process, even though you've been doing it this whole time without really realizing it. Oh, yep, question. So it'll be in the waiting or ready state just when it's not scheduled. It could be running concurrently, it could be running in parallel; it just depends on the architecture of your machine, like how many cores you have. Yeah, so the kernel manages the processes. You can think of it as a long-running process, but it doesn't really account for itself. It's always running, and it has a bit more power, because it can interrupt itself, wake itself up, and just steal resources from everything. Because you're the kernel, you have to do the scheduling, so you have to be able to steal the CPU away. So you can think of the kernel, if you want to, as a long-running process, but it won't show itself in the process list. Yep. Yeah, because the kernel can access kernel mode, and it can access stuff that normal programs cannot, and one of those things is to wake itself up and steal the CPU back from other processes.
All right. So, yeah, the scheduler is deciding when to switch which process is running at any given time. When you create a process (when we learn how to do that), we know the operating system has to at least load the whole program into memory before it can execute it. Remember when we had our ELF file: we loaded the entire file into memory, so that would have to be done before we can execute it. And while it's waiting, the scheduler (again, coming later) decides when to transition it to running. First we're going to focus on the mechanics of switching processes, because it's important to understand the overhead and the mechanics of what's going on. So if you write the kernel, which thankfully you don't have to do, this is the core scheduling loop that will be in any kernel you ever see. First, it decides when to switch. It will pause the currently running process; again, deciding which process is the job of the scheduler, which we'll get into later, but the mechanics of actually switching are: it will pause whatever is running, so it will stop at whatever program counter it's at. It will save the state, at least the program counter, so it can restore it later; it'll save it into memory. Then it can start another process. So after it saves the information of the currently running process, which we want to move to the waiting state, it will find the next process that it wants to run; again, that's what your scheduler is doing, deciding what to stop running and what to start running. After it decides that, it will load the next process's state, which again would at least read that process's old program counter, wherever it stopped before, replace the current program counter with it, and then resume. So it just picks up exactly where it left off. So this mechanic of
switching between two processes is called a context switch, and we'll see it again. One of the cruxes of this is that we've been talking about the kernel interrupting the process and taking the CPU away from it. I've been saying that over and over again because it makes the most sense and it's what's done now in current kernels. But the alternative from before, which you can guess was probably a disaster, is that you can let processes decide for themselves. That's something called cooperative multitasking, where the kernel will not take the CPU away from any task, and it's up to the user processes themselves to say: hey, I'm done now, you can take the CPU away from me, it's fine. So you have to actually make a system call to inform the operating system that it can take the CPU away from you. Yep. Yeah, sorry: when I say kernel and operating system, I kind of use them interchangeably. A kernel is one hundred percent part of the operating system; past that, what counts as the operating system gets fuzzy. But no matter which definition you get from anyone on the planet, the kernel is part of the operating system. So throughout this I say them interchangeably, sorry. Yep: doesn't the kernel itself use a significant amount of CPU? Why do kernels do it this way?
Yeah, so we'll get into that, because that's one of the design decisions: if the kernel is using the CPU, your user applications, which you probably care more about, aren't using the CPU. So yeah, that'll be one of the design decisions; it's an important thing. Okay, so again, cooperative multitasking: you'd have to make a system call to give up the CPU. Obviously the flip side of this is that if you never make that system call, you get to run your program until you're done. So what incentive do you have to relinquish the CPU? One program can monopolize the CPU and you can't really do anything about it. So that's one really weird style of kernel. Most kernels are true multitasking kernels, where the kernel itself retains control and pauses processes without them having any say about it. If you do true multitasking, you could give processes a set time slot: you could say, you execute for one millisecond, and then go through them round-robin (which is a scheduling algorithm): you get one millisecond, you get one millisecond, and just do that forever. You could also wake up periodically using interrupts, so you could have dynamic time slices for each application, and set yourself an interrupt on the CPU to wake yourself back up and steal the CPU away from them. Yeah, so we saw that before: how a process ends itself is through that exit call. Can a process still give control away voluntarily? You can, but it's optional; most kernels will have a way that you can say, hey, I'm good for now. So the important part of this is the mechanics of swapping these processes. This is called a context switch, and you want to minimize it as much as possible, because it's pure overhead: any time I take to switch between processes is time taken away from the user processes that you
care about actually running. So at minimum you at least have to save all the register state. That's only a few bytes, which is not too bad, but even just saving all those values is kind of hard. Can you imagine if you didn't have any hardware support? You suddenly get access to the CPU, and then you have to save all the registers in their current state without touching any registers. Can anyone imagine actually doing that? Because it is impossible without hardware support. It's like saving something that you're trying to use in order to save it; it's just weird. So you need at least some hardware support for saving state: something to save the registers you're about to clobber while using them to save the other registers. You also might not care about certain registers, so if you don't have to save a register, you can save valuable nanoseconds by not saving it. Some operating systems or kernels will go to great lengths to not save anything that isn't absolutely critical for restoring a running process. And context switching, again, is pure overhead: it's the time you take to switch between two processes, and you want it to be as fast as possible. And given that, you don't want to switch too often, either. Say it took me one nanosecond to context switch: if my time slice is also one nanosecond, suddenly I'm wasting 50% of my CPU just switching between processes instead of doing anything. So there's going to be some tradeoff here. You can think of the flip side of that too: hey, if my context switch isn't that slow and I just context switch every, say, five seconds, well, you won't have very much overhead, but the responsiveness is going to suck. You touch your mouse, and then, if there are a lot of other processes, you don't see the mouse update until like 30 seconds later. So the extreme ends of this are both going to be horrible, and you kind of want to
minimize it as much as possible while also being responsive. It's one of those fun engineering tradeoffs. Yep: yeah, because the CPU is going to have to use some registers. So at the very least you'd need help, because if you were running code you'd immediately clobber at least the program counter, unless you recalculated it by knowing how many steps you'd taken, and you'd probably be using some other registers that you'd have to be able to recalculate back. It saves your life if you can just have a hardware instruction that saves some registers to a memory location. Yeah, which is a fun problem: if you don't have any hardware support, you'll be sitting there for a long time. Yep. Well, the kernel developers would read the documentation and know what the instruction saves. So they'd start off with the hardware instruction, then have software that is a bit smarter, that might conditionally save stuff. That's typically what's done: a hardware instruction for the bare minimum, and then some smart software on top to skip things it knows it can ignore. But that's a way more advanced topic that we won't have to get into; all you need to know is that context switching takes time and it's overhead. All right, so let's talk about, I guess, the meat of this course: making system calls. Remember, we're going to be living on this user-space/kernel-space boundary, and you cross it through system calls. But so far we've never made a raw system call, and actually making raw system calls is rare. I'll save you from looking at the documentation of system calls; they're kind of a mess. Luckily, in the Unix-y version of the C standard library, they clean it up a little bit, so most of the system calls have a direct C equivalent with a signature that you would expect. But they do some other things: they set this thing, which you can think of it as a
global variable called errno; if anything bad happens, it gets set, and you'll be able to read the number, look it up in a table, and see what the error actually is. The wrapper might do some fancy stuff where it buffers reads and writes, so you're not making tons of small system calls, which wastes a lot of time. It might simplify some interfaces, like combining two system calls, or it might add some new features on top just to make your life a bit easier. So we'll see how to use them. But first, let's look at the exit system call, because we used exit_group before. In C we've just been returning from main, but your other option is to use exit, and this demonstrates the additional features that C might put in. So if we write this C program here that just runs main: you can use this function called atexit to register functions to run when your program exits. So the first line of main calls atexit and registers a function called fini, which is this function here that just puts a string to the terminal saying that it's executing. After I register the function, I execute do_main and then I return. So, anyone want to hazard a guess at what happens when I actually execute this program? What gets printed? Does do_main get printed? Yeah, okay, most people at least think do_main gets printed. Someone that thinks it doesn't get printed, want to tell me why? Anyone brave enough? Or everyone thinks it gets executed? Okay, well, let's just go ahead and execute it; why bother guessing? So here's that same thing again. I'll make it; it's called the atexit example (again, this is in the example repository). If I do that, both things get printed, even though that fini is not in main. I registered it to happen when the program exits, and C has some niceties that let you execute stuff at exit. So, everyone get this? Yeah, why does do_main get printed? Yeah, so this just registers it to
execute when the program exits. Our program exits when it hits this return statement, because eventually this return will get fed into an exit system call; this is where our main exits, with return zero. We can even make it explicit if you want: I could just call exit myself there. If I do that, and I'm not lying to you, it should do the exact same thing: it should register this function, print from do_main, and then, when it exits, it should print the fini message. Yeah: this is just registering the function to run whenever the program exits; I don't care how it exits, it will execute this function before it's done. So if I just did something like this and then call it (okay, here, I'll make it original again), it should do the same thing, because it just registers the function. And it does the same thing whenever exit gets called, so it doesn't even have to exit from main. Here, if I'm not lying to you, it doesn't really matter: let's say exit(1), and we'll put a one there so we can see that it's different from main returning zero. Say we do this: if bar calls exit(1), then it should exit there. I should never see a return zero, I should never see a zero exit code, and if I call exit, it should also run that fini, because it was registered. By default, if you return from main, it will call exit, because C is nice to you, but you could also call exit yourself explicitly if you want to. Yeah, so there's no difference between exit and return in the main function; otherwise there is, right? If I just put a return here in bar, it just returns from bar. But exit and returning from main are pretty much the same thing. Yep, so if I register two of them, let's be clever, like that: you can do that. It's actually defined, if you look at the C documentation for atexit: it will register them and run them in the reverse order that you register them in.
So the last one you register runs first, right after you call exit. And at the end, exit will invoke the actual exit system call, which kills the process. So exit is a C nicety that adds features, and one of the features it adds is atexit. Yeah, so that's a good question: what I could do is read the man pages. If I read the man page for exit_group, it says, hey, this is how to make the actual system call, not the nice C thing. So let's go ahead and copy-paste (whoops, that's the calendar, stop): if I include sys/syscall.h, and I even register the functions here, then if I run this, it would do do_main and then print the fini ones. But if I go ahead and do a raw system call, syscall with exit_group and a status (which is exactly what we saw in the first lecture; let's call it with status two): this system call is not a C wrapper, so I expect now that if I hit it, any functions I registered don't execute, because that's the raw system call; the process is dead. So if I go ahead and compile this (whatever, it's implicit), it just says do_main, because it's not using the C wrapper; it's dead as soon as it hits exit_group. And you can see I definitely hit it, because I don't return from main or call exit, I only exit as part of the system call, and I get a 2 as the exit code here. So yeah, if you use the C wrappers, some might behave a bit differently, again to add additional features; that's an example of one feature. All right, so the fun one. The first thing we saw when we executed that really small hello world was an execve, and I said: that's for later. It is now later. So what execve does is replace the currently running process with another program completely, and reset it with whatever is defined in that executable. So it has the following API: it takes
a path name, that's the full path of the program to load, which we now know, if it's Linux, is going to be an ELF file of some sort. Then it takes something we're all familiar with in C: argv — but unlike with argc, it's not going to count for you; it just wants you to put a NULL at the very end. And then there's an environment pointer where you can set environment variables; for now we don't care about them. If it returns, an error has occurred and it hasn't loaded a new program and started executing it; if it's successful, it doesn't return, because it just transformed this process into another one. So let's go ahead and see what that looks like. Here is our new main. First I print "I'm going to become another process," and I set up an array of character pointers. The first one I set to "ls" — these are the argv that you actually receive in your main in C, right? So the first one I add is "ls", which you might think is kind of weird, and the second one is NULL, so I'm done with the arguments to the program. Then I just leave my environment as none, and when I actually call execve, I have to give the full path to the binary. So if you type `which ls` — okay, it's an alias, but if it wasn't an alias, it would tell you exactly where that program lives: it lives in /usr/bin/ls. Then I give it the two arrays: my array of strings for the arguments and my array of strings representing the environment. So if I run this, anyone want to hazard a guess at what happens? Anyone brave enough to volunteer? Yeah. Yeah, so if I run that, right, it will print "I'm going to become another process" and then do the exact same thing as if you ran ls — with some caveats, because if I execute ls directly it actually looks like this and it has colors, so it's actually a little bit different, and we'll see why that is. But you can see that as soon as this got executed, right, none of this code ran, because I
have another printf statement that says "if execve worked, this will never print" — because if execve is successful, the process is just completely replaced, right, and then it just starts executing ls. So let's see — and we can see that I don't think I made a program called ec344. So if I do that and compile it, I should actually see some type of error that says execve failed, right? So if I run that, it says "I'm going to become another process" and then it gets into this. So execve, if you read the documentation, says that if it returns negative one, an error has occurred and it will set errno. So I save errno, and then this perror function prints an error string that corresponds to that errno. But funnily enough, in printing it, perror might itself encounter a different error — that's why I save errno first and return it at the end: if perror hit an error, it would overwrite that error code. And in this case, right, we get "execve failed: No such file or directory," and it returned a 2 — if you look at the errno error codes, 2 means "no such file or directory," right? All right, any questions about that? Uh, sure, over there. So it's not like that?
Yep. Yeah, so this perror call will print the message you put here, like here, and then fill in the rest of it from the errno that got set, so you don't have to look it up yourself. Sure. Yeah, so that's a good question — I kind of repeated myself: I'm using ls as the path, and then the first argument is also "ls". That's right: by convention, the first argument is always the name of the program that was used to execute it. Because if I go ahead and change it to "haha" or something like that, right — who here has ever read argument zero and then done anything based on it? Probably no one, right? We all ignore it. So in that case we get the same output, because it doesn't really matter. In some cases it could matter, though — some people are very, very clever. Something called BusyBox, which implements cp, rm, and all of that, is actually one binary that changes how it behaves based on this first argument, based on how you called it. So you can do that; you will probably never do it, though. So it's pretty much by convention: it should be the same name as the executable, but generally it's never used. Yep, so the kernel is smart and knows where it got executed from. Yep. So the role of environment variables is that people generally set them and then read them while the program is executing, and that changes behavior — it's just, I guess, a kernel-supported way of passing variables around without having to set them through the command line. Yep — how would you set them? You could just do, I don't know, something like that; that's how you would set them. Oh — so in your program you can use getenv, something like that, and read an environment variable and get its value, and then change your program's behavior based on that. Yeah, so some programs say you can set certain settings by command line, or you can set them by setting a special environment variable called whatever — it's just different ways of passing variables. So like you
can see your current — so if I type `env`, it shows all the current environment variables that are set, which might be used by different programs. Is there anything good in there? So, for example, one that's commonly set is USER. By default, if you run your program, it will just pass along all the environment variables, so most programs, by convention, can access USER and know what user executed them without having to pass anything on the command line. Yeah, so it replaces the currently running process by loading that program and then starting to execute that program. You mean this one, like this variable? No, because — so, like, I'm typing into bash right now, or zsh, and it has to make that system call, so it has to know what you typed, and it passes it along, right? Yeah, this — so that is not permission denied, because that's actually a folder. All right, give me one second, I'll wrap this up and we can continue on Discord or something. So, yeah: your operating system has to load a program, create a process with that context, maintain process control blocks, switch between running processes using a context switch, and replace programs using execve. All right — boom, for you. We're all in this together. All right, out of time.