All right, welcome back. So hopefully everyone had a good reading week and actually slept at least a day. That's pretty good for engineers. The quiz is next week, I think, but we can go over page table stuff at the end, because there weren't that many people here last Thursday and people kind of forgot about it. So today we're going to talk about basic memory allocation. The quiz will mostly be scheduling, page replacement, page tables, and memory allocation. So let's talk about some basic memory allocation. Hopefully this is mostly review. There are a few strategies you can use when you need memory. Static allocation is the simplest: you just create a fixed-size buffer in your program, like char buffer[4096]. We know that number is special now because it's the size of a page. You declare this at the top of your program, and whenever the OS loads your program into memory, part of the loading process is that this buffer actually gets allocated some space, so it's backed by a physical page. Then you can use that memory for the duration of your process, and you don't even have to free it. It lives as long as your process lives, and when your process dies, that memory is freed, because all the resources associated with your process are freed once you get acknowledged by your parent. And hopefully everyone's being a good parent in lab four. So that's what happens: the program loads and the kernel sets aside memory for all your global variables. If you threw static in front of this declaration, nothing would change. Yep. So the question is, for lab four, how does anything but the OS know if you didn't wait on your children? One of the tests in the tester straces your program: it counts the number of times you fork and the number of times you wait.
Otherwise it wouldn't be able to know, but it's looking at your system calls for that. Also, regarding some creative solutions for lab four: do not put read and write in your program. That's very, very wrong. The lab four primer was just showing how to use the APIs. If you read the spec of the lab, it's a completely different problem, so don't blindly copy. Blindly copying is bad. You'll be using pipes, and that's about where the similarities stop. You need to use pipe, you need to use dup2, you need to use close. You do not need to use read and write; we were just showing how to communicate. It's a completely different problem though. Yes. So remember, in the lab four primer I made a child process and had two pipes: I used one to send data to the child process and one to get data from it, so I had two-way communication, sending data and then getting data back. In lab four you're just connecting things in a single one-way communication channel. The data just flows from the first argument to the last argument. Okay, where was I? Right, back to this. So this is static allocation: it exists as long as your program exists, nice and easy, you don't even have to remember to free. But often you need dynamic allocation. You may only conditionally require memory, so static allocations can be wasteful, you don't always know how much memory you need ahead of time, and things can change while you're running. Say you're programming a game and you decide, well, at most there'll be a hundred characters on the screen, so I'll have a global array that can hold up to a hundred characters. But after a hundred characters, you don't have any more memory, so you can't really do anything at that point. So you generally want to use dynamic allocation if you don't know the sizes ahead of time, because with static allocation you can't change it after your program loads.
That's just what it is; you can't request any more memory through global variables or anything like that. So then the question is, how do we allocate memory? Hopefully we're used to this by now, although you might not know the mechanism the compiler uses to actually allocate memory for you on the stack. You either allocate memory on the stack or on the heap. Stack allocation is mostly done for you in C, which is basically why we use C instead of assembly; it is a bit nicer than assembly. Think of a normal variable in a function, say int x. That's four bytes, so it's going to be allocated four bytes on the stack. What you probably don't know is that there's a special function the compiler inserts for you, so you don't actually have to write it, called alloca, not malloc. alloca allocates memory for you on the stack, and it takes an argument which is the number of bytes to allocate. Yep. So the question is, why would we use malloc instead of alloca? Anyone else want to answer that? You can ask for different sizes of memory on the stack if you want, so why wouldn't you? The stack's dynamic, it just grows. Well, alloca allocates memory for you on the stack, and malloc allocates memory for you where? On the heap, right? Completely different places. Sorry? Yeah, 4, the size of an int. So alloca and malloc have the same kind of API: the argument is the number of bytes you want. Internally, that's what the compiler's going to do, and if you ever have the fortune of reading LLVM code, you'll see alloca all the time; that's basically just stack allocation. The nice thing is that with malloc you have to free, but with alloca you don't have to free.
Whenever the function that did the alloca returns, all the memory is freed, because all it does is restore the stack pointer. That's also really nice because it's super, super fast: no matter how many things you alloca, it's only one operation to deallocate everything, since you're just moving the stack pointer back to where it started, which undoes every single stack allocation in one shot. So it's really, really fast to deallocate, but you're stuck with that lifetime. Yep. So the question is about deallocating from the stack: I'm just changing the stack pointer, I'm not changing the data at all. Let me make a silly example of it. Say I write a function that returns a pointer, let's call it foo, because who wouldn't want to call it foo? It declares int x, so that's allocated on the stack. Then if I go ahead and return the address of x, that's bad. When you call foo, it allocates x on the stack, so it pushes the stack down four bytes, and you return a pointer to that. As soon as you hit return, that's the end of the function, so x is deallocated: the stack pointer gets reset, and you're returning an address into the stack that is no longer guaranteed to hold x, because x doesn't exist anymore. If you use it straight away, you might get lucky and it might still hold the value of x. Like if I wrote x = 42, then in main, if I declare p = foo() and dereference p right away, maybe I get lucky and actually get 42 out of it.
But if I call another function, bar, I don't know what that's going to do with the stack. It'll probably write its own values there, and if you dereference p after that, it could be anything, probably some leftover garbage from whatever bar used the stack for. So you don't want to do that, and it will definitely give you weird errors. The pointer p points somewhere on the stack a bit further along than main's frame. Say the stack grows upward in the picture: main is here with some room, including space for p, and when I call foo, its frame has x plus some other bookkeeping. I return a pointer that points into foo's frame, and when foo returns, all of that gets reclaimed, but the pointer doesn't change: I'm still pointing at something out past the live part of the stack. Then bar executes, and say bar is bigger and uses variables a, b, c; now p could be pointing at a, or b, or c, who knows. Right after returning, the value is probably still there, but the memory is no longer reserved; whatever reserved it is long dead by now, since we returned from foo. And if you read the C spec, you're not allowed to do this at all; it's undefined behavior. Maybe you have a hardened implementation of C that wipes the data when it unwinds the stack so there's nothing left there. Implementations are actually allowed to do that, but it's slow, so nobody does. So that's why you have to be careful with C: anything undefined in the spec, implementations are free to handle however they want.
And this is undefined in the spec. So the question is, are there a bunch of people implementing C and handling the unspecified things in different ways? Yes, of course. You've got the Microsoft C compiler, GCC, and Clang, so that's three compilers, written by different people, and the Microsoft one does very odd things. I've debugged student code that only had errors if you compiled it with Microsoft's compiler, because Microsoft does some real crazy stuff. The Microsoft compiler is not great. Okay, so that's stack allocation. You've also used dynamic allocation before, on the heap: these are the malloc family of functions, and we're going to talk about how they're implemented. This is the most flexible way to use memory, but also, given the number of segfaults we've all caused in our lives, probably the most difficult to get right. You have to properly handle your memory lifetimes and free exactly once. You malloc once, you're only allowed to free it once. If you free it twice, that's an error. If you never free it, you're wasting memory; that's a leak. Both cases are bad. Yeah, so the question is, if I run out of room on my stack, can it just request another page? Generally it works the same way as pthreads, where your stack is a set size, so the most your stack will grow is eight megabytes; past that is a stack overflow. But it can start smaller and request more pages up until eight megabytes. If you get into the details, your stack is given eight megabytes of virtual address space, which is all deemed valid, but it's only backed by physical pages as you actually use it. If you only use four kilobytes, it'll only be four kilobytes. As soon as you go above that, there's a page fault and the kernel just grabs another page, because that address is supposed to be valid.
And as soon as you go over eight megabytes, it's a stack overflow, which is also everyone's favorite error, though you've probably hit it less in this course. Has anyone gotten a stack overflow so far in this course? Probably not, right? So we eliminated a class of bugs, yay. Okay, so malloc. There's a new concern if you're implementing malloc, and that's fragmentation. Fragmentation is basically wasted space, and it's a unique issue for dynamic allocation. You allocate memory in different-sized contiguous blocks, or at least they look contiguous because they're virtual addresses, and you're not allowed to compact anything if you're implementing malloc. Here's what compaction would mean: say you do a bunch of mallocs of different sizes, and you have a 100-byte allocation, then a hole of, I don't know, two bytes, then another 100-byte allocation. That two-byte hole you might never be able to use again, because what if every malloc after that is at least four bytes? Two bytes isn't enough, so that's wasted memory you'll never get back. What you'd like to do, if you could move memory, is say: that two-byte hole I'll never be able to fill, so I'll just compact everything so it's all nice and contiguous again and the hole is gone. But you're not allowed to do that in C, because the program has a pointer to that memory, and if you're the memory allocator you can't just move memory around. The pointer is just an address, and it's not going to change. You have no way of telling the program, hey, I moved your memory, don't worry about it. But if you're Java, you can actually do this, because Java has a layer of indirection: you don't get to touch pointers in Java, so you never actually know the value of a pointer.
So the JVM is free to move memory around on you. Since you never get the raw address of anything, it can relocate objects, and as long as it updates all the references consistently, it's fine. Java doesn't have this problem because it can move memory on you, but that's also part of why it's slower: it needs to update all the references to keep everything consistent. We probably won't get into that, but it's a nice thing to know. So fragmentation is basically holes in memory wasting space, and we need three different things to all be true for fragmentation to be an issue: the allocations have to have different lifetimes, the allocation sizes have to be different, and we need the inability to relocate previous allocations. So can someone tell me why I would not have any fragmentation if number one was not true? If all the allocations had the exact same lifetime, why is there no fragmentation? Yeah, is that what you were going to say too? Right: if number one isn't true, then they're basically like stack allocations. They all get freed at the same time, so I can allocate them all contiguously, and since they have the same lifetime, when I deallocate they all go away together, so there are no holes. That's also why you don't have any fragmentation on the stack, which is another reason you'd want to use the stack. All right, what about if number two isn't true? Why wouldn't I have any fragmentation in that case?
Yeah, or think of the kernel: the kernel allocates everything in page-sized chunks, so if there's a hole, it's the size of a page, which can fit any allocation, because they're all the same size. Everything's the same size, so it doesn't matter where things go, and that's why you wouldn't have any fragmentation in that case. And for number three, you wouldn't have fragmentation because if you can move stuff around, you can just relocate things to get rid of the holes, which is what we discussed with Java. So all three of those have to be true for fragmentation to be a problem, and if we try to make the most general solution, a general-purpose malloc, then they are all true: malloc can't move stuff, malloc has to handle different allocation sizes, and malloc has no idea what the lifetimes are. You can't predict when someone's going to malloc something and free it later, and given some of the things we've seen, they might never free it at all. Okay, so there are actually two types of fragmentation: internal and external. The memory allocator just hands out blocks of memory, and that's all it cares about. External fragmentation is when you allocate different-sized blocks and then there's no room for a new allocation in the gaps between blocks you already allocated. Internal fragmentation is an issue with some memory allocators: an allocator might hand out fixed-size blocks, maybe to get rid of the external fragmentation issue, but then there can be wasted space within the fixed-size blocks. In the picture at the bottom, there are two bigger blocks managed by the memory allocator, and there can be wasted space inside a block. Say my memory allocator decides it won't have any external fragmentation because all its blocks are 64 bytes or something like that.
Well, you still have to deal with the case where you get an allocation request for, say, 40 bytes. You'd hand out a 64-byte block, but only 40 of those bytes would be used and the other 24 would be wasted internally within that block. So even though you can get rid of external fragmentation, there'd still be internal fragmentation. We'll see that next lecture when we get to more specialized memory allocators, but we'll keep it general for now. For now we're only going to have external fragmentation, because our allocations are going to match whatever the size of the request is that comes in. Our goal, basically, is to minimize fragmentation. It's just wasted space, and we want to prevent that, because it's memory we're not allowed to use on our system, which is bad. I don't want to see an out-of-memory error when I try to allocate a gigabyte and I know I have three gigabytes left. If it just says out of memory, I'm going to say, what the hell, this operating system sucks, and switch to something else. So we want to reduce the number of holes between blocks of memory, which is our fragmentation. One idea you might have: if we have to have holes, maybe we want to keep them as large as possible, and that way we can still use a hole for more allocations. We want to keep allocating memory without wasting space. When you go to implement these allocation strategies, most allocator implementations actually use a free list and keep track of the sizes of the blocks they issue. The allocator keeps track of the free blocks of memory by chaining them together in a big linked list, and we know we need to handle a request of any size.
For an allocation, we choose a block that's large enough for the request and remove it from the free list; now it's in use. Whenever we deallocate something, we move the block back to the free list so we can reuse it, and if it's adjacent to another free block, so there are two free blocks right next to each other, we merge them together into one big free block. Yeah, so the allocator needs a linked list, which costs memory too. Some implementations put the list and accounting information right before the memory that gets handed back to you, which at least one of you on Piazza discovered by corrupting malloc's internal structures. That's what happened there: malloc writes some accounting information just before the memory it gives you, and if you write random bytes right before your pointer, you'll screw up its accounting information and malloc won't know what to do. So whenever you malloc something, you get a pointer; if you subtract a couple of bytes from that address and change the value there, you're allowed to do that, it won't segfault, because that memory is used by malloc to keep track of all the blocks and everything. But if you write random data there, malloc is going to get very confused. At least one of you did that accidentally, because I saw a Piazza post with some weird internal malloc error, where malloc basically said, this doesn't make sense anymore. If you want, you can look at different malloc implementations too; some of them might not do it this way. I believe the default one does, but allocators are allowed to keep the bookkeeping information wherever they want, and some reuse the free memory itself, which is kind of cool. So in general, if you're implementing malloc, there are three general heap allocation strategies you can use. The first one is called best fit.
Best fit chooses the smallest block that can satisfy the request, which of course means I have to search through the entire free list, unless there's an exact match, because if there's an exact match I can't do any better, so I may as well use that block. In general, though, I have to look at every element to find the closest block size. Then there's worst fit, which sounds kind of bad: it chooses the largest block, the one that leaves the most leftover space, and carves the request out of it, and again this has to search through the entire list. And then there's first fit, which is much easier to implement and only has to search the whole list in the worst case: it just chooses the first block that can satisfy the request and uses it immediately, and doesn't care how big it is, as long as it can satisfy the request. To make this more concrete, let's do some allocations using best fit. At the top, there's the big block of memory our allocator is managing. There's a red allocation that we're not allowed to move because we already made it, and a blue allocation we're also not allowed to move. The blank spaces are the free bytes, so we have two free blocks: one has 100 bytes and the other has 60 bytes. Yeah, for this I'm not showing the details of actually keeping track of the free list; we're just looking at the memory. But if you were to implement it, you'd have a linked list somewhere in your program. Yep, so you'd have to allocate memory for yourself too. malloc is managing the heap, and it can grow the heap as large as it needs, so it can use some of the heap space for its own linked lists and things like that. Yeah, there really should just be a lab where you have to implement malloc.
What malloc could do is have a global variable that's the head pointer of the free list. By default you'd increase the size of the heap and say, hey, the whole thing is free, minus whatever malloc needs for its own bookkeeping, and then you do allocations from there. Every time there's an allocation, it creates a node saying, hey, that range is used, and adjusts the free list. Yeah, the nodes would all be the same size: for every allocation you need to keep track of a pointer, how big the block is, and things like that. That's maybe 16 bytes at most, so every allocation costs around an extra 16 bytes of overhead. Yep, the free list is just telling you what memory is not in use in the heap. So we'd have a linked list somewhere, the head could be in global memory, and in this case there'd be two elements in the free list: a 100-byte block and a 60-byte block. But we're not going to be terribly concerned with the actual implementation of malloc, just the general overview, without worrying about where you put pointers and nodes and everything like that. Okay, so in this case the problem is way easier: say I have a 100-byte block that's free and a 60-byte block, and I'm using a best fit strategy. If an allocation comes in for 40 bytes, where do I allocate it, in the 100 or the 60? The 60, because we're doing best fit, so we pick the closest one: 60 is closer to 40 than 100 is. So I do the allocation in the 60-byte block and have 20 bytes left over. Then another allocation comes in, the purple one, which is 60 bytes. Where do I allocate this block? In the 100, right?
There are only two free blocks, the 100 and the 20; it doesn't fit in the 20, so I have one option. Now another allocation comes in, no one has freed anything, and it's the pink one, a 60-byte allocation, and now I'm stuck. This is where it gets really frustrating when you use your computer, because it'll do stuff like this: we have 60 bytes free, but they're just not contiguous. We have a 40-byte hole and a 20-byte hole, so 60 bytes free in total, and we're asking for 60 bytes, but malloc would fail in this case. Yeah, so the question is, what about just returning two pointers to the program? Well, how good is everyone at using one pointer? Imagine malloc just randomly handed you back two pointers sometimes, with half of your data in one block and the other half in the other block; that would be really weird. Okay, let's go through this example again using worst fit. We rewind, so we have the 100 and the 60 free, and the 40-byte allocation comes in. Where do we allocate it, the 100 or the 60? The 100: worst fit looks for the biggest one. If we put the 40 in there, we have 60 bytes left over. Now the 60-byte allocation: where does it go? Doesn't matter, both free blocks are 60 bytes, so put it at the front. And now the next block, hey, it fits exactly in the remaining space. So worst fit actually does better in this case. That's not always going to be the situation, but the point is: just because best fit is called best and worst fit is called worst does not guarantee that best is always the best and worst is always the worst, as confusing as that might be.
But both of them are slow, and if you go ahead and simulate these strategies to draw some conclusions: best fit tends to leave very large holes and very small holes, and the small holes might be completely useless; they'll stay fragmented forever and you'll never be able to use that memory again. Worst fit, if you simulate it, is actually the worst in terms of storage utilization, so its name is kind of appropriate even though it might sound like a good idea. And the nice thing you find from simulating all of this is that first fit tends to leave average-sized holes, and it's also much faster, because you don't have to scan through the entire list every time. Yeah, it's kind of like page replacement, where random is actually better than FIFO. Random is oddly decent in computer science. We'll talk about this later, because the kernel also has to implement its own memory allocation. For user space stuff it just cheats: it hands out memory in terms of pages and kicks the can down the road, saying, you deal with it. But the kernel also uses memory, so it actually has to handle its own allocation, and we'll see what the kernel does. In general, for user space and for the kernel, there are static and dynamic allocations, and for dynamic allocation, if you're actually implementing the allocator, fragmentation is your biggest concern. There are two types of fragmentation: internal and external. The memory allocator basically just deals in blocks, so fragmentation between blocks is external and fragmentation within a block is internal. And there are basically three allocation strategies in the general case, where you have any size of allocation and the most general problem.
So there's best fit, worst fit, and first fit. Any questions about that? If not, we can go back to the page table stuff just in case. All right, no questions, cool. So, final call: hopefully some of you have looked at lecture 23. Are there any questions about when we run this and do the MMU translation, why the index numbers are the way they are, or how this works at all? Or does everyone know how this works? Hands up if we're good with page tables and everything. Ish, ish, okay. So let's go through a few questions. Just in case we don't remember, this is a 39-bit virtual address, and our page size is four kilobytes. Since it's a four-kilobyte page size, that means I have three levels of page tables for a 39-bit virtual address, right? Can anyone tell me why? Remember, the trick with multi-level page tables is that each level of page table holds exactly the number of entries that fit on a page. In this case I have eight-byte page table entries. Actually, I'll go ahead and show this. In SV39, I have a 39-bit virtual address going to a 56-bit physical address, four-kilobyte (4096-byte) pages, and eight-byte page table entries. If you want the nitty-gritty details, the low 10 bits of an entry are flags and permissions: whether it's valid, whether you can read it, write it, execute it. Then there are 44 bits for the physical page number, and 10 bits reserved for future use. This makes sense with a 56-bit physical address, because 12 of those bits are used as the offset into the page, since a page is two to the 12 bytes. So 56 minus 12 is 44.
That's our physical page number. So how many levels of page tables would we have? If I just told you this information, you should be able to calculate the number of levels we need: it's the ceiling of the virtual bits minus the offset bits, divided by the number of index bits. In this problem, how many virtual bits do I have? 39, because that's what it says. How many offset bits? 12. Why 12? Because 4096 is two to the 12. How many index bits? There's a little trick here, but not too much. For multi-level page tables, what's our trick? How big is each level of page table? 512 entries, yes, so nine bits. Does everyone know why it's nine bits? Our trick with multi-level page tables is that each level of page table fits exactly on a page. I have two to the 12 bytes on a page, and each page table entry is eight bytes, so how many eight-byte entries can I fit on a single page? Two to the 12 divided by two to the three; we know our exponent math, so that's two to the 12 minus 3, which is two to the nine, or 512 entries. So we'd have nine index bits to be able to reference everything there. The number of levels we need: 39 minus 12, which is 27, divided by nine, and that's a clean division so we don't even need the ceiling. It's three. Yeah, so that's a good question: why don't they just say, hey, more space is better, and do a 40-bit virtual address? Then I'd need four page table levels. And why wouldn't you do a 38-bit virtual address? Then I'm not using all my space: I'd allocate a full page for the top-level table but only ever use half of it. So why not support as much as the three levels allow?
It's essentially free at that point. As for the page table entry, all it has is some permissions, which is all you care about, and the physical page number. The address widths are part of the design, and they'll tell you that: if you look at SV39, it says it's a 39-bit virtual address going to a 56-bit physical address. For the most part we don't even care what the physical number is. It could have been wider if they wanted, because there's space, but they reserved some bits because they wanted to. Okay, so that's why we use three levels of page tables, and that's why it's a 39-bit virtual address. Now, if we only wanted to use two levels of page tables, how big a virtual address could we support? Yeah, 30 bits, which is only a gigabyte, which no one would actually use. So basically we need three levels of page tables to support anything realistic nowadays, because we're probably going to have more physical memory than one gigabyte. And why is it 30? 30 is just nine times two plus 12. Okay, so let's go back. In this case, because I know all that, that's why I use these indexes. For the address ABCDEF, ABC is my virtual page number. If I separate that out, the last three hex characters, DEF, are the offset, and I don't really have to care about them because they stay the same through translation. If I write out ABC in binary and divide it into groups of nine bits, those give me my level two index, my level one index, and my level zero index, each a number between zero and 511. Then you just need to make sure that your root page table, which is a level two page table, has the right entry, whichever entry that index picks out, pointing to a level one page table.
And the level one page table, you know which entry it needs, pointing at a level zero page table, and then that has to point at the actual physical page. Yeah, DEF is our offset here. All right, so you can go ahead and play with this. If you want, you can watch the end of section two, where, given an address like 7FFFFFFFFF, you should be able to set up the page tables so that it actually resolves to something like 1FFFF. You can go ahead and do that; that's a good kind of question. So just remember, I'm pulling for you; we're all in this together.