All right, happy Monday to the class. Yes, two questions. If something has its name defined explicitly, does that apply to internal name equivalence? Yes. So it uses the same name internally as we defined explicitly, so if there's name equivalence, then there must be internal name equivalence. Yes, the only difference between name equivalence and internal name equivalence is anonymous types. Anonymous types, OK. My second question is, for type unification, there seems to be a lot of intuition in constructing the tree that we just don't have, as we take a statement and start applying the smaller trees that we went through in the fundamentals to construct it. You don't have to construct the tree yourself. OK. Never. You're going to give us the partial tree? You will always be given the partial tree, yes. So we just have to number the nodes and perform type inference on the given partial tree. So you'll always be given that — on the exams; maybe not on the homework, where it's necessary. Definitely on all the exams. Cool. Any other questions? OK, good news: your midterms are graded. They're not uploaded to Blackboard yet. That'll happen either today or tomorrow. I have other good news. Well, good news and bad news depending on your view of things. No class on Wednesday. No in-person class on Wednesday. I will be recording a lecture and posting it online, so we don't miss a beat. You'll be required to view that before class next Monday, and we will start where that lecture leaves off. There's a midterm on Friday, so I would definitely be here — I mean, it depends on how much you want to stay in the class. Cool. OK. So now we switch gears and talk about the runtime environment. So we — yes? Are you still holding office hours on Wednesday? No, I'm out of town. I'm not holding office hours this week. I'll email everyone to let you know. But yeah. Does the midterm cover through Friday, Wednesday, or Thursday? We'll see what we cover.
It could be through Wednesday, or Thursday, or Friday. Cool. I will say this midterm will not be cumulative, so it will start from semantics. There won't be any syntax part; it'll be very similar to your practice midterm, and I'll indicate that in the practice midterm. All right, so the runtime environment. We talked about semantics, which help us validate the parse trees we're creating. Once we've parsed a string written according to a certain grammar, and verified that, yes, this is indeed a string that can be produced under this grammar — so we've taken a sequence of bytes, turned it into a sequence of tokens, verified that this sequence of tokens can actually be generated by a given grammar, and mapped all those constructs into a parse tree. Then, using the semantics and type system, we've been able to say: hey, this is actually an allowable parse tree. It may be valid according to the syntax of the grammar, but that doesn't mean it's semantically valid. The type system helps us determine that. Now we're going to switch to: how is this stuff actually implemented? What does the compiler do? We've been talking at kind of a high level about things, right? Here we're going to get deep into the details. I think this is a very cool topic, because we talk about how things actually are built, and how the compiler translates the program you write into something that will actually execute. When we talked about box-and-circle diagrams and trying to understand semantics, we had locations and names. So what was the distinction between locations and names?
The names were human readable; names are what you, the programmer, deal with, right? Usually variable names. And then what's the location? Not necessarily the address, but the memory — that value has to live somewhere, right? The name is associated with the value, and that value has to live in some location. And so what we're going to study is: how does the compiler actually implement locations and names to deal with things like scoping? How does it know how to automatically deallocate some memory, like stack-allocated memory, when it goes out of scope? We want to ask how the compiler maps names to memory locations. When does this process happen? How does it actually do it? We're going to look into this process very deeply. We're going to assume static scoping from here on out in this section. All the code samples we're going to look at assume static scoping; dynamic scoping is a little trickier. It's doable, but it's a little trickier to implement these kinds of things. So we have different types of variables that have different types of memory allocation associated with them. So where can a compiler put global variables? And what do I mean when I say put? Where is it stored in memory? So what are some of the options? You're writing a compiler. You see, aha, the programmer has declared some global variables. Where can you put them? In the cloud? What does that mean? On a server. That's a little vague, but we'll come back to it — how do you get it from the server? We're talking about one machine's memory. Do you, the programmer, care where it's stored? Do you ever have to think about that when you're coding a program — where is this global variable stored? What do you care about? You care about the scope, so that you're referencing the same variable. But what do you really care about when you say you care about the scope?
You can grab it when you need it? The global variable — so you can get the value of that global variable when you need it. And what else can you do? Change it; you can modify it. And then if you access it, what should you see? The most recent change, right? That's all you care about. So where could you put it? You can put it in memory. What else? A register, right. So what would be a problem with putting it in a register? It's definitely an option. What are the pros and cons? There's a fixed number? Yes — exactly how many registers there are depends on the CPU architecture, but fundamentally those registers map to a piece of hardware, so there is a fixed number of them. If you were to store all of the global variables in registers, you'd be limited to, say, 16 registers, so 16 global variables. And now you can't use those 16 registers for anything else. It would be fast, though — it could be pretty sweet. What else? If you could convince me that you could write an AI that could store it on a handwritten post-it note, then I'd say yes. Otherwise, even a programmer can't do it, right? Because this has to happen while the program's running. This is the compiler; the compiler has to generate some code to do something. What else? A database? A database, right. Or you could even store it on disk. Your program could write out your global variables to a file, and when you want to read one, it reads it in from the file. And when you change it, it writes out the new value to the file. What would be the pros and cons there? It'd be slow, right? Disks — although SSDs are a little faster than spinning disks — are much, much slower than memory. What's the benefit of storing it in a file?
You can multi-program, because typically a process doesn't share its memory with any other processes. So if you write it out to a file, you have to wait for the I/O, but if you have two concurrent processes both accessing that file, you might be able to develop a multi-programming application. So you can have a form of memory sharing, right? Depending on the permissions you gave that file, you could allow other processes to read and write your memory. That would be pretty cool. Also, it tolerates failure, right? The disk is permanent storage, so if your computer shuts off, your program could just go back to running. That would be pretty cool. Okay, then when we talk about the cloud, how could you actually do that? Yeah, so you could write not to your disk but to some other machine's disk, and let them deal with storing the value. When you want it, you grab it, and when you want to change it, you upload a new value to that disk. The important thing to remember — there's a famous saying that the cloud is just somebody else's computer, right? They still have to store it somewhere. You can't just throw it in the cloud and assume that magically things will happen. On their side, from their perspective, they're taking what you're giving them and storing it in one of these options. Cool. And so this is the key idea: where the compiler places these things affects things like who can access global variables and who can't. This helps inform the compiler writer's decision of where to store a value — which location to choose. And for these reasons, maybe we don't want disk. We've talked about slowness: writing to your own hard drive is slow.
Imagine making a network request, waiting for them to write it to their hard drive, and then waiting for them to tell you over the network that they successfully wrote it. You're talking about pretty serious slowness. And so it's important to remember: what's important about global variables as far as access? Who can access global variables? Everyone — what does everyone mean? Everything in the same file can access it, right? The other important thing is that any other files you compile and link with that file can also access that variable. When you compile your program with the -c option, you're creating an object file; now other programs don't need your source code — they can link to that object file, call those functions, and access your global variables. So what the compiler does is choose a fixed location in memory. It says: okay, this global variable will always be stored here. Whenever they access the name foo, I'll instead say, hey, read from this memory location. And when they change or alter foo, they'll write that value to that memory location. And this is what it looks like. Let's say we have a program with global variables a, b, and c. We have an int main. We set a to 10, we set b to 100, we set c to 10.45, we have a = a + b, and then we return zero. What we care about is that the result in a here should be 110, right? That's what we care about. So what the compiler does when it parses this — just like in project four, when you're doing your parsing and you see a variable being declared, you create some type for that variable. You say: hey, we have a new variable x that's explicitly declared, and it's of type foo or string or int.
Just like that, when the compiler here sees new variable declarations, it asks: what kind of variables are these? They're global variables, right? Therefore, what I'm gonna do is assign them some fixed global memory address. So in pseudocode: a will live at memory location A, b at memory location B, c at memory location C. And it's important to remember these are fixed numbers that are not gonna change across different executions of this program. Depending on how it's compiled — I shouldn't say that; there are position-independent executables now where this may not be 100% correct. Then, when the compiler compiles the main function, it changes it just slightly. If you think of memory as just a huge array: memory at offset A, set that equal to 10. Memory at offset B, set that equal to 100. Memory at offset C, set that equal to 10.45. And then set memory at A equal to the result of memory at A plus memory at B. It's very straightforward pseudocode, and you'll notice I didn't include the return. We'll get into function calls and returning from functions and all that fun stuff later. For now, we just wanna focus on where the compiler chooses to put these things in memory. You can compile this program yourself — in the notes of this slide in the PowerPoint, there are the exact commands I used to compile it. On x86, 32-bit architecture, when I ran it, it basically said: okay, I'm gonna put a at memory address 0x8049634, b at 0x8049638, c at 0x804963c. And then this is the assembly it created. The way to read this: we're using AT&T assembly syntax, so operands go left to right. The first instruction is a movl — the l is for long, we're moving 32-bit values. We're moving $0xa into 0x8049634. The dollar sign means this is a constant. So what's this value?
10. 0xa, yeah — so it's moving 10 into this memory address, which is the memory address of a, right? Remember, by the time we get to assembly language, we no longer have a's or b's or c's to talk about. All we have is locations. The next line is move $0x64 into 0x8049638. What line does this correspond to? Yeah, b = 100 — $0x64 is 100, I think, unless I made an error. Then we move $0x41273333 into here — the percent sign means a register. This is register eax, one of the registers on x86. So what is this? Can we tell exactly what it's doing yet? Does this correspond directly to one of the lines here? Let's think about it this way: does every one of the statements in my C program map one-to-one with assembly instructions? So far there's been a one-to-one mapping, right? a = 10 maps to this instruction; b = 100 maps here. The thing to think about is: could it ever be the case that one C line maps to more than one instruction? Does C specifically guarantee anywhere that a given line of C code will take a certain number of instructions? No, there are no guarantees. All we know is, when I put a value in b, when I get it back later, I'd better get that same value. That's all we have. So, looking at this one instruction, we're moving a value into a register. But I said c is at 0x804963c, and this is not setting the value of c. Then we see that the next instruction is move %eax into memory address 0x804963c. Is that because of the nature of the type? Technically there is special handling for floats, but only when you're doing floating point operations — floating point calculations. But here, are we calculating anything? No.
0x41273333 — that's the value of 10.45. In what? In hex? Let's see. Okay, so here's how we test that. We have a theory. You're mostly computer scientists, right? You're taking a computer science class, so let's say you are. So I have my handy-dandy calculator in hex mode. I can put in 0xa and see that it's 10, right? Good, so I know that's the hex representation of the decimal 10. If I look here at 0x64 and convert it to base 10, it's 100. Now when I put in 0x41273333, in base 10 that number is — anybody want to take a stab? — one billion, ninety-three million, eighty-eight thousand and fifty-one. So is that 10.45? What is it? It's a floating point — what type of floating point representation? Yeah, it's IEEE floating point representation. So we can't just convert this to base 10; that only works for integers. You can actually see here — the cool thing about this calculator is it shows you all the bits, the ones and zeros, along this line. Certain groups of those bits mean things, and I'll be completely honest, I do not know the IEEE floating point format off the top of my head, but this encodes the value 10.45 in 32 bits. And why can it use 32 bits? Well, this is a 32-bit register, yes. But why, for my C program, could it use 32 bits? I did compile it in 32-bit mode — what else? It's a float. I used a float. Float says use 32 bits of precision. If I'd used a double — that's why it's called a double — it would use double the precision of a float.
And then the instructions would not come out like this; there would be some different instructions here. So now, to go back to the question — I actually should know; let's check. Nope, all right, I don't know. Okay. So what's the difference between these two lines and the first line? The first one is moving 10 directly into a memory address, and these two instructions first move the hex value into eax, and then move eax into the memory address. So semantically, what do these two lines do? The same thing the first line did for a, right? So why didn't the compiler do that here? Different representation? It's a float, but it's still 32 bits, yeah. What if I had changed this 10 here to that one-billion-whatever number — I lost it. That would've been really cool; you guys would've been super impressed if I'd remembered that number, right? So what if I put that there in this line? It would move the literal value 0x41273333 into that memory location. So why is it going through the register? A float is four bytes — 32 bits, just like an integer on a 32-bit system, yeah. Do you still have that 0x41273333 inside of eax for a reason? That could be, right? Let's see. We can actually peek ahead and see that we don't use c afterwards, but that could be a good reason — we get this value into a register so that we can compute on it. It's a good guess. Why do we have to guess? Why don't we know? Aren't you guys smart? Because it's not your compiler — I like that answer. You didn't write GCC, did you? And you can't use that excuse for everything, by the way. You can't just say: oh, I didn't write GCC, who knows? The important point here is that to the programmer, all we're saying is: set this global value with name c to 10.45.
How the compiler implements that really doesn't matter to us, and the compiler may be doing tricks. I have a theory: this is a large 32-bit constant, so it may take up more space in the actual instruction encoding — you may not be able to move an immediate value that big directly into a memory address. But I don't know 100% if that's true. In something like MIPS, you can't do that, because your instructions are fixed to a certain length; x86 has variable-length instructions, so it could be that. It could also be that this is faster for whatever reason, and the compiler writers have done tests and optimizations and decided that for this specific system, this way is faster. Short answer: I don't know. It's actually kind of fun to think about why. You're trying to reverse engineer the compiler — obviously GCC is open source, so you could look at the code. Or you can look at the output and say: huh, that's interesting; it's doing something different for basically the same operation. Okay, going forward: we move what's at memory location 0x8049634 into register edx. So what's at 0x8049634? a, so 10. It's taking the value inside memory address 0x8049634, which is 10, and moving that into edx. Then it moves the value at 0x8049638, which is b, which is 100, into eax. So here we're overwriting eax, which disproves our earlier theory — it was a cool guess without seeing the rest of the code. Now we have a crazy instruction, which I'll describe in a minute: lea, load effective address. If I remember correctly, it takes three parameters: take the first thing, and add to it the second one times the third thing. The way I remember lea is that usually you have a pointer in the first argument — say, a pointer to an array.
And you wanna access the i-th element of that array. So you want to add to the current pointer however many bytes get you to that index. Say you're indexing element five: you can't just add five bytes, because the elements of your array may not be one byte each — that's what the scale argument is for. If it's an array of integers, the scale is four. So you'd have your pointer as the first argument, five as the index, and four as the scale, and it would add 20 to your original pointer. The other key thing here is that lea does not do a dereference; it just adds these things together. And that's why the compiler, for whatever reason, decided to use this instead of an add instruction. So here, with a scale of one, it's essentially: add edx to eax and move the result into eax. Which leaves what in eax? 110. Are we done? Is this the end of it? Why not? You need to save it back to memory, right? Even though we calculated a + b, we haven't actually completed the semantics yet: we have to put that value back in a's location. So we have move %eax into 0x8049634. What's really interesting is that when I made this example, I used CentOS 6.7, and on CentOS 7 — which has a newer version of GCC — I believe it emits this as an add instruction now. It's just interesting to see; you'd have to dig in to find out why they changed that. Global variables, good? So, local variables. Now we're talking about local variables. What makes a variable local as opposed to global? A local variable has a limited scope where it can actually be used. And we're talking specifically about static scoping here. So what are the constraints on local variables in, let's say, C? They can only be accessed within a specific block, right?
We know C has block-level scoping, so a variable declaration is only valid between the braces. So where can a compiler place local variables? What choices does it have? The stack — which is what? Memory. Is it different from memory? Is that a super tricky question? It's a region of memory — a different location than where the global variables were stored, right? So one thing to think about: could you just use the global approach? We just saw global memory, and we saw it's super easy: you assign a fixed memory location to every variable in the program. So can we just use that? Why do something different if the technique you already have for global variables works — why not just apply it here? Security? Efficiency? Recursion? Recursion — what do you mean? Why is that important? Recursion can generate an arbitrary amount of memory usage, which a fixed set of locations can't handle. So let's look at an example. I have this factorial function. It has a parameter n. This doesn't quite answer your question yet, but I think it will. So what memory location do I give this n? Is n a local variable of the function factorial? Yeah — the fact that it's a parameter you pass into the function really doesn't change things much. It's still local to this function, and it's accessible inside this function. So how do you write a factorial function? You should be able to do this in your sleep now. Recursive functions: you have your base case — if n is equal to zero, return one. Otherwise, return factorial of n minus one, times n. What's the problem with my base case?
If you pass it a negative number, what happens? It'll loop forever, right? So you could put a very strongly worded comment on the factorial function that says do not pass this a negative number — which is a really bad way to enforce that, because who reads comments? I should be checking it here, right? Depending on how I was coding this, I might check: hey, if n is less than zero, throw an exception. You should just stop. You should not be trying to calculate the factorial of a negative number. Why is that better than just returning one or zero? It's easier to debug. That's right — the factorial function is only defined for n greater than or equal to zero. Actually, is it defined on negative numbers? Math people? I don't think so — it doesn't make sense. So if someone calls my function on input it's not defined on, it should blow up, right? If it just gave them back zero or some random value, their program's not gonna blow up, they're not gonna know they have a problem, and they're probably gonna compute some wrong result. Okay. So, what would happen if I just stored n in a global variable? Let's say we put n at some fixed memory location — doesn't matter which. We call factorial of, say, 10. In order to call that, we know n is at this fixed location, so we just copy 10 into that fixed location. Now that location has 10. We call the factorial function. We check that fixed memory location: is it zero? No, it's 10. Otherwise, return factorial of n minus one, times n. So I have to pass n minus one to this function. I take the 10, I subtract one, I put nine there. I call factorial again, and I'm gonna keep doing that. Let's say it finally returns.
It's gonna return me whatever the factorial of nine is. Then I multiply that by what? You'd expect 10 to be in that memory address — but when we called factorial again, we put nine in there. And that call put eight in there, and so on, all the way down to n equals zero, which returns one. So this will return one, and we'll have one times what? Zero. Because inside that fixed memory location is the value zero. So if you think about it, I can't have just one location for n. Why? Because each invocation of factorial needs a separate and distinct copy of n. That is really the whole idea behind recursion. And you don't necessarily have to recurse on yourself — as long as you're on that call stack: you call a function that calls another function that calls another function that then calls you, and you would erase all the values of that other invocation of the function. This is why this is critically important. This is why we cannot just treat local variables like global variables. And this is the key idea that leads us to the stack. There was a programming language — I wanna say Pascal, but I don't think that's true; I think it's Fortran — where you could have procedures, which sound very similar to functions, but a procedure could not call any other procedures. You could only do something and then return, because they hadn't really figured out how to do local variables and deal with all that kind of stuff. Okay. So we looked at global variables, and we said: can't do that for locals, right? That fundamentally does not work. So what can we do instead? Well, we just talked about it: each invocation of a function needs its own little bit of memory, but it only needs that memory while it's executing. When it returns, we can free all that memory. So the idea is to use scratch memory for each function.
And that is where we get to the stack. The idea is we want some little scratch space that each function can write to. We give each invocation of a function its own little space to write to, and when it returns, we clean that up automatically. Cool. So what's a stack in terms of a data structure? Last in, first out — what does that mean? What operations does it support? Push and pop. So if I push three things onto it — one, two, three — when I pop, what's gonna come off? I hear all three options out there. Three: last in, first out. I push on one, push on two, push on three; I take something off, it's gonna be three. The way we'll draw it, the stack will grow down. It doesn't really matter which way you think about it, but we push things onto the stack, and when we pop, things come off in reverse order. This actually ties in very closely to how we thought about scoping, in particular dynamic scoping: each time we entered a new scope, we would push on a new scope, and when we left, we would pop that scope off. So you can think of the symbol table we created for dynamic scoping as using a stack — a different kind of stack. Here we're using a stack of memory. Questions so far? Okay, so the stack is essentially scratch memory for a function. It is supported explicitly in various CPU architectures: MIPS, ARM, x86, x86-64, probably more. What this means is, as we'll see on x86, there are explicit push and pop instructions to push things onto the stack and pop things off. Some other architectures don't have that — I can't remember which — and there you manage it yourself: you manually push things on and manually pop things off; there's no explicit instruction for it.
Okay, so we're mainly focusing on x86, and the way it works on x86 — and the way I'm gonna draw it, because obviously a stack can grow in either direction — is we're gonna start at high memory addresses at the top of the page, and grow down toward zero. So when we show memory, we'll see 0xFFFFFFFF at the top and zero at the bottom, and our stack is going to grow down. As a function, using the stack as scratch memory, not only can we store our local variables there, as we'll see, but we can also push whatever values we want onto the stack and pop values off. What do we have to ensure, as a function, if we're called with the stack at some position and we push stuff onto the stack? Okay, good — we need to know the current location of the top of the stack, and we need to be able to communicate that to other functions that we call. What about when we return? Can the stack pointer be just anywhere? Think about it: some function is calling us; the stack is at some arbitrary location; we're gonna do whatever, call whoever. When we return to that function, they don't wanna have to deal with the stack having moved. We should be good neighbors and leave the stack exactly how we found it. Which is nice, because it means we can do anything we want, as long as we make sure the stack is consistent when we leave. Okay, so the assembly language on x86 explicitly supports this. So, the ESP register — what does the E stand for here? We saw EAX, ESP — extended. Extended from what? 16 bits. That's the key thing when you're reading these: you see ESP, you know it's an x86 register and a 32-bit value. Plain SP would be the 16-bit value. And, well, we won't look at it here, but if you ever see x86-64 code, it'll be RSP — an R instead of an E.
And I actually don't know what the R stands for. Say "really extended." Okay, so we have two operations. ESP is a pointer to the top of the stack. So if it's a pointer to the top of the stack, what is the value inside of ESP? The memory address of the top of the stack, exactly. If we were to draw a box-and-circle diagram of ESP, ESP would be a box, and inside it would be a value that is the memory address of the top of the stack. And the top of the stack, to me, is where things come from if you pop, and where they go when you push. Remember, we're going to grow the stack down. Okay, so we have two operations here that are important. We can push things onto the stack, and push takes a register. For instance, push %eax means: decrement the stack pointer by four — remember, high numbers are at the top, low numbers at the bottom, so decrementing takes us down four — then take the value inside EAX and copy it to that memory location. So what would pop be? Say you want to pop that value into, let's say, EBX. Do we move up four first? No: take the value the stack pointer is currently pointing to, store it in EBX, and then move up four. How can you tell that without me showing you the next bullet? These have to be complements of each other — you're doing exactly the push operation in reverse. Push %eax decrements and then stores the value; therefore pop needs to grab the value first and then increment by four. So, pop %eax. Let's look at an example. When drawing the stack, 0xFFFFFFFF is always at the top, zero's at the bottom. We have a stack here. All right, we'll stop here for today.