All right everybody, let's get started. Before we go into the content we'll cover today, let's first do a quick recap of what we talked about last time. Last time we introduced the VM interface. For the kernel, the VM system is supposed to manage all the physical pages: it gives the kernel some pages, and later on, when the kernel returns those pages, the virtual memory system is supposed to be able to recycle and reclaim them. Also, when you deal with user processes, you often need to manipulate their address spaces, and that part is also the responsibility of the virtual memory system. The user, on the other hand, won't be aware of any of this page allocation and reclamation. The only thing the user cares about is that when it accesses some page, that may trigger a page fault, and the virtual memory system decides how to handle that page fault: whether to reject the request or to actually add some physical pages to fulfill it. And finally, we have a syscall that belongs to the virtual memory system, called sbrk, which manipulates the heap space. So this is the virtual memory interface: to the outside world, whether that's other parts of the kernel or user space, this is what the virtual memory system is supposed to do. And last time, we studied dumbvm: how does dumbvm fulfill all those responsibilities? We know that dumbvm does virtually no physical page management. When the kernel wants to allocate some physical pages from dumbvm, it just calls ram_stealmem, which steals the memory without ever returning it to the RAM system. That's the extent of dumbvm's physical page management. For user address spaces, dumbvm makes some very strict assumptions about what the user address space looks like. It assumes that the user address space always has a fixed number of regions, which is two.
In this case, a code region and a data region. The stack size is also fixed: whether you are a tiny program or a heavyweight stress test like forktest or a fork bomb, we allocate the same stack size for every user process. And it doesn't support a heap: there is no heap region, and there is no sbrk syscall support. And finally, dumbvm doesn't support swapping, which means that when it runs out of physical pages, it just panics instead of doing something smart like swapping some pages out. So this is how dumbvm fulfills the virtual memory interface. Obviously, you want to improve upon that. By that, I do not mean that you should adapt dumbvm. Studying dumbvm is helpful to get some idea of what a VM system looks like, but I would suggest that when you design your virtual memory system, you start from scratch instead of incrementally trying to improve dumbvm. Because if you do that, then at the later stage, when you do swapping, it's very difficult to push past the limits of dumbvm. So you want to start from scratch at the very beginning and have a good starting point. That's what we talked about last time. Today, we will mostly focus on the physical page management part. In particular, we want to see how the virtual memory system bootstraps; by that, I mean how the virtual memory system figures out how many pages are available for it to manage. We will study a data structure that is very important in physical memory management, called the coremap, which manages all the physical pages. Then we'll look at the possible states of a physical page. That becomes very important later on; for now, the state is basically just whether the page is available or not, but later it may have other states which you need to think about. And finally, we'll look at how the MIPS memory management unit actually works.
How does MIPS translate all those virtual addresses? This is not strictly necessary for this assignment, but it's very helpful: when you deal with all these virtual and physical addresses, you know what you are dealing with. So first, how does the VM bootstrap? Initially, if you think about it, you have some number of physical pages, say four megabytes' worth, even before the system bootstraps. When you type the command sys161 kernel, the simulator loads the kernel image into some part of physical memory and starts running from there; that's how the kernel gets running. So suppose this is all the physical memory you get: it starts at physical address zero and ends at, let's say, four megabytes. That four megabytes will be assigned to a magic variable called lastpaddr. If you look at some of the assembly files, you can get an idea of the actual physical layout; the details don't matter much here. But conceptually, at the very beginning of physical memory, you have the exception handlers stored there. Whenever there is an exception, the hardware saves the old program counter into the EPC register and jumps to a fixed virtual address that maps into this region to start handling the exception. You don't need to worry about this part; it's already handled. After that, the kernel is loaded into memory right after the exception handlers. Say the kernel is about 50 kilobytes; then those 50 kilobytes are occupied by kernel code. The kernel's main function is somewhere in there, and the processor just starts executing instructions from there. And at this point, you already know that the first C function that gets called when the kernel starts up is kmain. It's somewhere here, in the kernel code region.
And inside kmain, the kernel calls a series of bootstrap functions to initialize the different parts of the system. You will notice that the first part the kernel tries to initialize is the RAM system. If you look at the code (here I have main.c open), this is kmain; it first calls boot, and inside boot, the first function called is ram_bootstrap. So the kernel's first priority at this point is to figure out how many resources there are to use. If you go into ram_bootstrap, you'll find that it first calls a low-level function that reports how much physical memory is available, in bytes. You can see that lastpaddr is assigned from that size: if the size is four megabytes, then lastpaddr is four megabytes. Then it calculates the first available physical address. It's very important to note that this first available physical address is not zero. As you saw on the slides, the first portion of physical memory holds the exception handlers, and right after that you have the kernel. So the first available physical address is actually right after the kernel. You can see it uses firstfree, which is a magic symbol set by the linker and the assembly code, indicating the first free virtual address, and then it subtracts the macro MIPS_KSEG0, which is 0x80000000. We will explain later why you subtract that to get the physical address; for now, just ignore that part. So you have lastpaddr, which is the physical memory size, and you have the first available physical address. So when ram_bootstrap runs, it first figures out which region of memory is available for you to use.
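The arithmetic just described can be sketched in a few lines. This is a minimal illustration, not OS/161's actual ram_bootstrap; the RAM size and the kseg0 address passed in are made-up example values.

```c
#include <stdint.h>

/* Illustrative constants: 4 MB of physical RAM, and the standard
 * MIPS kseg0 base. */
#define MIPS_KSEG0 0x80000000u
#define RAM_BYTES  (4u * 1024 * 1024)

static uint32_t lastpaddr;  /* one past the last physical byte        */
static uint32_t firstpaddr; /* first physical byte not already in use */

/* Sketch of what ram_bootstrap computes: lastpaddr comes from the
 * reported RAM size, and firstpaddr is the linker symbol `firstfree`
 * (a kseg0 virtual address, just past the kernel image) translated
 * to a physical address by subtracting MIPS_KSEG0. */
static void ram_bootstrap_sketch(uint32_t firstfree_vaddr)
{
    lastpaddr  = RAM_BYTES;
    firstpaddr = firstfree_vaddr - MIPS_KSEG0;
}
```

So if the kernel image happens to end at virtual address 0x8004e000, the first available physical address is 0x4e000, which is indeed just past the exception handlers and kernel code.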
So this is what physical memory looks like, and what the value of each variable is, right after ram_bootstrap, because ram_bootstrap is the function that initializes these two variables. Now, you'll notice another thing: vm_bootstrap is called near the very end of the boot function. After ram you have thread, clock, VFS and so on, and only then vm_bootstrap. It's not literally last, but it's quite late in the initialization, right? So between ram_bootstrap and vm_bootstrap there are a bunch of other initialization functions, and some of them call kmalloc. That's intuitive, because when you initialize something, you often allocate a structure and then initialize it. But when they call kmalloc at this point, the VM system is not bootstrapped yet. kmalloc calls alloc_kpages, which allocates physical pages for other parts of the system. And if you look at the implementation of alloc_kpages in dumbvm, it calls getppages, which calls ram_stealmem. If you go to the definition of ram_stealmem, you'll find that it just keeps moving the first free physical address forward. So right after ram_bootstrap, the first free physical address is here, and everything from here to the end is the available physical memory you have. Then if some other function calls kmalloc, which in the end calls ram_stealmem, what ram_stealmem does is keep advancing this first free physical address. So this region has actually been stolen by various parts of the system, and we don't know exactly which. By the time you reach vm_bootstrap, the first free physical address will be over here, and when vm_bootstrap gets called, this is what you have.
From here to here is the available physical memory you have, and it's very important to figure out where that is. And you already can, because you have the first and last physical addresses to identify the region of physical memory that is actually available. Inside vm_bootstrap, what you're supposed to do is allocate one extra data structure, called the coremap, right after this stolen physical region. When you allocate the coremap, that should be the last time you call ram_stealmem: you steal that amount of physical memory and store your coremap there, and everything after it is the real free memory once vm_bootstrap is done. So inside vm_bootstrap, you allocate this coremap data structure and initialize it, and from then on, alloc_kpages is supposed to work differently, in the sense that it should stop calling ram_stealmem. Instead, your alloc_kpages should consult the coremap data structure to figure out which physical pages are available to allocate. If you walk through this process, you'll find that you need some state, because alloc_kpages can be called either before or after vm_bootstrap. If it's called before vm_bootstrap, there's really not much you can do about it: you just do what dumbvm does and call ram_stealmem to serve the request, handing some physical memory to whichever part of the system asked. Once vm_bootstrap has run, when alloc_kpages gets called, instead of calling ram_stealmem you should do the smart thing and consult the coremap to figure out which physical memory is available to allocate. So you should have some flag indicating whether or not the virtual memory system has bootstrapped. If it hasn't bootstrapped, you just call ram_stealmem.
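The two-phase dispatch just described could look something like the following. This is a sketch under stated assumptions: `vm_ready`, the stub functions, and the starting address are all illustrative names standing in for your own flag, the real ram_stealmem, and your coremap allocator.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

static bool     vm_ready   = false;    /* set true at end of vm_bootstrap */
static uint32_t next_steal = 0x50000;  /* pretend first free paddr        */

/* Stand-in for ram_stealmem: hand out the next pages and never
 * take them back. */
static uint32_t ram_stealmem_stub(size_t npages)
{
    uint32_t pa = next_steal;
    next_steal += npages * PAGE_SIZE;
    return pa;
}

/* Stand-in for a coremap-based allocator (would scan the coremap). */
static uint32_t coremap_alloc_stub(size_t npages)
{
    (void)npages;
    return 0;
}

static uint32_t alloc_kpages_sketch(size_t npages)
{
    if (!vm_ready) {
        /* Before vm_bootstrap there is no coremap yet: steal. */
        return ram_stealmem_stub(npages);
    }
    /* After vm_bootstrap: consult the coremap instead. */
    return coremap_alloc_stub(npages);
}
```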
Don't feel guilty about that. And after the VM has bootstrapped, you consult the coremap to allocate physical memory. So this is how the view of physical memory changes during the bootstrap process. Any questions about this? You first have the exception handlers, then the kernel, then some stolen memory, then finally the coremap structure you allocated for the virtual memory system, and after that, a bunch of available physical memory. Any questions? Now, the thing is, the VM system relies on other parts of the system. For example, you will want to synchronize access to your coremap. To do that, you may want to allocate a lock, and to allocate a lock, the thread system already has to be working; otherwise the lock doesn't really make sense. So there are dependencies that force the VM to bootstrap in that order. You cannot just bootstrap the VM earlier, and you don't want to change the bootstrap order in the boot function. That's very critical. You want to accommodate that and do things within the existing framework. So that's the VM bootstrap process. Now let's take a closer look at the coremap. As we already said, the coremap is allocated right after the stolen memory, which is here. The coremap should be a list, or an array, of entries, each entry corresponding to one physical page. That's how you manage things: you allocate one small metadata entry for each physical page and store that page's metadata in it. So the coremap consists of an array of metadata entries, one per page. And if you count the entries: say you have four megabytes of memory. With 4K pages, how many pages do you have? Four megabytes divided by 4K: one K, so you have 1024 physical pages. Then what should the length of the coremap be? Also 1024. You have exactly one entry for each physical page.
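That sizing calculation is worth writing down once, since you'll need it inside vm_bootstrap. A minimal sketch; the 8-byte entry size is an assumption, just to show how cheap the coremap overhead is.

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* One coremap entry per physical page. */
static uint32_t coremap_entries(uint32_t ram_bytes)
{
    return ram_bytes / PAGE_SIZE;
}

/* Total bytes the coremap itself occupies, for an assumed
 * per-entry size. With 4 MB of RAM and 8-byte entries this is
 * 8 KB, i.e. just two pages of bookkeeping overhead. */
static uint32_t coremap_bytes(uint32_t ram_bytes, uint32_t entry_size)
{
    return coremap_entries(ram_bytes) * entry_size;
}
```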
So you have this one-to-one mapping from coremap entries to physical pages, and with it, the translation between a physical address and an index into the coremap is very easy. Suppose you have a physical address of, say, 8K. What's the index of the coremap entry corresponding to that physical page? 8K divided by 4K, which is two: entries zero, one, two. So the translation between a coremap index and a physical address is very straightforward in this scheme: you have a one-to-one mapping between page-aligned physical addresses and coremap indexes. Now, because some pages are already used by various other parts of the system, by the exception handlers, by the kernel, or stolen by other parts during boot, when you initialize your coremap, not every page is available: some of them are already occupied by others. You should figure out how many pages are available and how many are already occupied. To do that, you can take the first free physical address, which you have access to, and calculate how many pages come before it. Suppose 10 pages are already occupied: then when you initialize the coremap, the first 10 entries should be marked as used, and the following entries marked as available. This is what the coremap looks like and how you should initialize all the coremap entries. Any questions on this? It's a little bit trickier than it looks if you really think about it. The length of the coremap is straightforward, there's no doubt about that: take this much physical memory, divide by 4K, and you have the number of entries. But calculating how many physical pages are already occupied is a little trickier, because, first of all, you have an alignment issue.
You need to align this first free physical address up to a page boundary: even when only the first byte of a page is occupied, the entire page is occupied. That's the first thing. And then, when you calculate that first free physical address, you also need to account for the length of the coremap itself. Those are implementation details you need to think about. But the general idea is that you should initialize the coremap so that, after VM bootstrap, the coremap reflects the status of each physical page: whether it's occupied or available. Any questions on coremap initialization? Yeah? Yes, the last physical address is the end of physical memory. It's a value, for example four megabytes; that's how much physical memory you have. So, we've already talked about some of this. How do you allocate the coremap? There are a variety of ways of doing it: you can get hold of the first free physical address and manually increment it to reserve space for the coremap, or you can call ram_stealmem. That totally depends on you; it's an implementation detail. Now, about the coremap entries. We've settled the overall structure of the coremap: it's an array of entries, each corresponding to one physical page and recording that page's metadata. The question now is what information you want to keep for each page. What metadata do you want to maintain inside the coremap? First and foremost, you want to keep the status of the page. For now, the status is quite simple: just whether it's available or not, if you do not consider swapping. And then what? The physical address: do you really need to store that? Like I said, with this mapping, given a coremap index, how do you find the physical page, or the physical address? Yeah, just by calculation.
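The index arithmetic, the round-up, and the resulting initialization can be sketched together. All names here are illustrative; the point is that the first entries (everything below the page-aligned first free address) start out marked as used.

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

enum { CM_FREE = 0, CM_FIXED = 1 };

/* Page-granular translation, as in the 8K -> index 2 example. */
static uint32_t paddr_to_index(uint32_t paddr) { return paddr / PAGE_SIZE; }
static uint32_t index_to_paddr(uint32_t idx)   { return idx * PAGE_SIZE; }

/* Round up to the next page boundary: if even one byte of a page
 * is taken, the whole page is taken. */
static uint32_t page_roundup(uint32_t paddr)
{
    return (paddr + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

/* Mark everything below the (rounded-up) first free address as
 * occupied, and the rest as free. `cm` is a toy one-byte-per-entry
 * status array standing in for your real entry struct. */
static void coremap_init_sketch(unsigned char *cm, uint32_t nentries,
                                uint32_t firstpaddr)
{
    uint32_t first_free = paddr_to_index(page_roundup(firstpaddr));
    for (uint32_t i = 0; i < nentries; i++) {
        cm[i] = (i < first_free) ? CM_FIXED : CM_FREE;
    }
}
```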
It's the same as in the file table: you don't store the fd in the file handle, because the index is the fd. Same thing here: you don't need to store the physical address in the coremap entry, because given the index, you can easily calculate the physical address. So you can save some bytes there. And what else? Like I said, chunk size. What is that? The size of what? The size of a page? Isn't that fixed? So first off, what's a chunk here? Say I call alloc_kpages asking for four pages. You should give me four contiguous pages, right? Now, you don't have to return that information to me, but later on, when I call free_kpages, I just give you the starting address of the first page. So how do you know how many pages to free? You can't just free one page, because you allocated four pages for me, right? You need to know, starting at a given physical page, how many pages are in the same chunk, so that later, when free_kpages runs, you can free that many physical pages. And because you don't know in advance which page will be the first page of a chunk, since it could be any page, you need to be able to store this information for every page, or at least reserve the space for it in every coremap entry. In reality, you don't have to set the chunk size in every entry, but you do need to reserve the space for it. Say I call alloc_kpages and ask for four pages. You scan the coremap and find a run of four contiguous available pages, and you give me the base address of the first physical page. Where do you store the chunk size? In the first coremap entry of the chunk, right? Because that's the one that matters. You don't have to set the chunk size in the following three entries; it's not used there. So you only need to store the chunk size in the first entry.
But because entries are fixed size, you have to reserve space for a chunk size in every coremap entry. That's what I'm saying. So later on, when you call free_kpages and give me the address of the first page, I can consult the chunk size in that first coremap entry to figure out how big the chunk was when it was allocated, and free that many physical pages. And finally, I listed one more piece of information here, called the owner: which process owns this physical page. For now, if you don't do swapping, this information doesn't really make much sense; you probably can't imagine why you'd want to reverse-map this physical page to some virtual page. But later on, when you do swapping, you do need it. For example, suppose you have swapping implemented. You scan your coremap and find there is no available physical memory. What do you do? You choose a victim and kick that page out. Or rather, you don't really kick out the physical page itself; you kick out the content of the page. That content corresponds to some virtual page in some process's virtual memory, right? Can you just do that silently? What's that? Yeah, you copy the content of the page to disk. Then you can claim the page is available again. But that content belongs to some process, mapped at some virtual address. Can you just do the copying silently and reuse the physical page? No: you have to notify the owner, basically. You have to update that process's page table to indicate that this page is no longer in physical memory; it's on disk now. So later on, when that process accesses that virtual page, you need to swap the page back into physical memory. And to be able to do that notification, you need this reverse mapping to find the owning process and where this page is mapped in that process's virtual memory. Basically, this owner information is a tuple of address space and virtual address.
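The contiguous allocation and the chunk-size bookkeeping fit in a short sketch. This is illustrative, not OS/161 code: the entry layout and names are assumptions, and the scan is the simplest possible first-fit.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define NPAGES    16u

/* Illustrative coremap entry: a used flag plus the chunk size,
 * recorded only in the first entry of an allocation so that
 * free_kpages(addr) knows how many pages to release. */
struct cm_entry {
    unsigned used;
    unsigned chunk;  /* nonzero only in the first page of a chunk */
};

static struct cm_entry cm[NPAGES];

/* First-fit scan for `npages` contiguous free pages; returns the
 * base paddr of the first one, or (uint32_t)-1 on failure. */
static uint32_t alloc_pages_sketch(size_t npages)
{
    for (uint32_t i = 0; i + npages <= NPAGES; i++) {
        size_t run = 0;
        while (run < npages && !cm[i + run].used) {
            run++;
        }
        if (run == npages) {
            for (size_t j = 0; j < npages; j++) {
                cm[i + j].used = 1;
            }
            cm[i].chunk = (unsigned)npages;  /* remember the size here */
            return i * PAGE_SIZE;
        }
    }
    return (uint32_t)-1;
}

static void free_pages_sketch(uint32_t paddr)
{
    uint32_t i = paddr / PAGE_SIZE;
    unsigned n = cm[i].chunk;  /* how big was this chunk? */
    for (unsigned j = 0; j < n; j++) {
        cm[i + j].used = 0;
    }
    cm[i].chunk = 0;
}
```

A real implementation would hold the coremap lock around both operations; that's omitted here to keep the sketch self-contained.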
Given that, you should be able to find the page table entry in that process. Yeah, you don't need to? Do you? Inside the kernel, you have access to everything, so there has to be a way, given this owner information, to locate that page table entry. That's your responsibility when you design your page tables. Yeah. No, actually, "notify" is a strong word here. You don't actually interrupt that process. What you do is set some flag in that process's page table entry. So later on... hopefully, that process will never access that virtual page again. That's the ideal case, which means you swapped out exactly the right page. If that process does access the page later, then, because there is no mapping for it in the TLB, it will trigger a TLB fault and land in vm_fault. Inside vm_fault, you'll find that, oh, this page was actually swapped out by me earlier, so now I need to swap it back in. So you don't actually notify the process; you just mark that the page is on disk. Any other questions? All right. Now let's talk a little bit more about the states of a physical page. For now, it's quite straightforward; just ignore the dirty and clean parts. First of all, you have free and fixed. Free means the page is available. Fixed you can think of as used; we use the word "fixed" for a reason we'll see later. So we have free pages and fixed pages. When VM bootstrap is done, the fixed pages are the pages in the gray area: those pages are fixed in memory. And then we have a bunch of free pages. The transitions between the page states are simple: you first initialize the status of every page as either fixed or free, then whenever alloc_kpages runs, you mark the pages you're returning as fixed, or used, as you'd imagine.
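The swap-out bookkeeping described above can be sketched around a toy page table entry. This struct and its fields are hypothetical, purely to illustrate the idea that the coremap's owner field leads you to a PTE that you flip from "in memory" to "on disk", so the next access faults into vm_fault.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical page-table entry. */
struct pte_sketch {
    bool     present;   /* page is in physical memory          */
    uint32_t paddr;     /* valid only if present               */
    uint32_t swap_slot; /* valid only if !present: disk offset */
};

/* Evict: the content has already been written to `swap_slot` on
 * disk, so mark the owner's PTE accordingly. The next access to
 * this virtual page misses in the TLB, lands in vm_fault, sees
 * present == false, and swaps the page back in. No interrupt or
 * message to the process is involved. */
static void mark_swapped_out(struct pte_sketch *pte, uint32_t swap_slot)
{
    pte->present   = false;
    pte->swap_slot = swap_slot;
}
```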
And later on, when somebody calls free_kpages, you transition the status from fixed back to free. So for now, if you only do physical page management, it's very easy. Later on, you'll have more states when you do swapping: a page can be dirty or clean. The definition of dirty is not what you would imagine. Dirty means that this page doesn't have a copy on disk. That's why, when you first allocate a page, you might imagine the page is clean, because nobody has touched it; but its status is actually dirty, because at that point the page has no backup copy on disk. Which means that later, when you decide to swap that page out, to kick it out, you actually have to copy its content from physical memory to disk. That's what dirty means: whenever you want to swap the page out, you have to do the copy; you can't skip it. But once you do the copy, you have two identical copies of the page, one on disk and one in memory. Suppose nobody touches the page from that point on; then the page's status is clean, meaning that later, when you want to kick the page out, you can skip the disk write and just discard the page's content, because you already have an identical copy on disk. So there are transitions between dirty and clean, and this figure should be useful when you do swapping. For now, just consider free and fixed if that's easier. So those are the physical page states. Any questions on this? If you only look at free and fixed, there's not much to ask, except why we call it fixed; we'll explain that later. One more side note: of course, we need to synchronize access to the coremap, because there's only one coremap in the entire system and multiple threads are trying to access it. Which primitive will we use? Of course, a lock, right?
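The state vocabulary above boils down to two questions you'll ask during eviction, which can be captured in a couple of helpers. The enum names are illustrative.

```c
#include <stdbool.h>

/* The four page states discussed above. Note the counter-intuitive
 * rule: a freshly allocated user page starts out DIRTY, because it
 * has no copy on disk yet. */
enum page_state { PS_FREE, PS_FIXED, PS_CLEAN, PS_DIRTY };

/* Can this page be chosen as an eviction victim at all?
 * FIXED pages (kernel pages) never can; FREE pages need no eviction. */
static bool evictable(enum page_state s)
{
    return s == PS_CLEAN || s == PS_DIRTY;
}

/* Does evicting this page require a disk write?
 * DIRTY: yes, there is no identical copy on disk.
 * CLEAN: no, the disk copy is already identical; just discard. */
static bool eviction_needs_disk_write(enum page_state s)
{
    return s == PS_DIRTY;
}
```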
Whenever you have a shared resource like this and you want to enforce exclusive access, you always want to use a lock. Another side note: the coremap shouldn't take you more than two weeks. This is the third week since the assignment was released. Is that right? Is this the third week? Last week, we had the design document; the first week, we had the midterm. By the third week, at this point, you should have your coremap already done. So how many of you have finished the coremap? Nobody? Well, you should have; at this point it's quite late, and we have a bunch of other things ahead of us. Once you think you are done with the coremap, there are tests you can run to be sure that your coremap actually works and is solid. We have km1, which is a single-threaded kmalloc stress test, and we also have km2, which is a multithreaded kmalloc test. You don't want to move on or work on anything else if you fail either of these two tests. You want your coremap to be solid, to be bug-free, and to provide a reliable foundation for the other parts of the assignment. So that's how you can test your coremap. Any questions so far? You should really get the coremap done no later than this week, I will put it that way, because, as you can imagine, swapping is a big headache and could easily cost you three weeks to finish. All right, so finally, let's look at the mysterious MIPS memory management unit. We have been talking about physical addresses and virtual addresses all semester. Now, what actually is a virtual address? What is a physical address? Okay, so first of all, the size of the virtual address space is decided solely by the length of the instruction word, or the length of the registers. On a 32-bit processor, the virtual address space always goes from zero to four gigabytes. You have that many virtual addresses at your disposal.
And the way MIPS manages this virtual address space is to divide, or split, it into several segments. From zero to two gigabytes, the lower half of the virtual addresses is called the user segment, which, as the name indicates, means the user can use any virtual address inside this region. Above that, we have two kernel segments: 512 megabytes of kseg0 and 512 megabytes of kseg1. And finally, we have another gigabyte, kernel segment two. These are just numbers, right? From zero to four gigabytes, that many values corresponding to the virtual address space. Now, when you come to physical memory, that is really decided by how much physical memory you have. In the OS/161 simulator, you may have four megabytes of memory; then your physical memory, or physical address space, goes from zero to four megabytes, each address corresponding to one physical byte. So that's virtual addresses versus physical addresses. Now, the first thing I want to point out is that every address the program manipulates, every address the CPU tries to access, is a virtual address. For example, let me quickly jump to this: if you go down to the assembly level and look at every instruction, every address issued in the kernel is a virtual address. You always access virtual addresses there, right? Keep this in mind. Then, depending on which region a virtual address falls into, the processor uses a different way to interpret it. So think of yourself as the processor: you have a bunch of instructions, and you load each instruction to figure out what it wants to do. Suppose you encounter an instruction that says: I want to load the memory at this virtual address.
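The segment boundaries just listed are fixed constants of the MIPS architecture, so the "which region is this address in?" decision the processor makes can be written down directly:

```c
#include <stdint.h>

/* MIPS 32-bit virtual address layout:
 *   0x00000000-0x7fffffff  kuseg (2 GB, user, TLB-mapped)
 *   0x80000000-0x9fffffff  kseg0 (512 MB, direct-mapped, cached)
 *   0xa0000000-0xbfffffff  kseg1 (512 MB, direct-mapped, uncached)
 *   0xc0000000-0xffffffff  kseg2 (1 GB, TLB-mapped, unused in OS/161)
 */
enum mips_seg { SEG_KUSEG, SEG_KSEG0, SEG_KSEG1, SEG_KSEG2 };

static enum mips_seg classify_vaddr(uint32_t vaddr)
{
    if (vaddr < 0x80000000u) return SEG_KUSEG;
    if (vaddr < 0xa0000000u) return SEG_KSEG0;
    if (vaddr < 0xc0000000u) return SEG_KSEG1;
    return SEG_KSEG2;
}
```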
And suppose that virtual address is below 0x80000000, that is, anywhere in the user segment. The way MIPS interprets that number is: okay, I have this virtual address; I will consult the TLB to figure out the actual physical address to use. Do the translation through the TLB. Once I consult the TLB, supposing there is an entry there, a mapping, I get the actual physical address, and I use that to actually access the hardware, the physical memory. So whenever the processor encounters a virtual address below 0x80000000, it always consults the TLB to figure out the correct physical address. That's what the processor does when it gets a virtual address smaller than 0x80000000. What if, say, I read an instruction that wants to load memory from 0x80000010, or anything at or above 0x80000000 but below 0xa0000000? Then I will not use the TLB. I will just chop off the most significant bit of that virtual address and use the result as the physical address. To give you an example, say the CPU reads an instruction like this: the instruction wants to load the content of memory at this virtual address. How would the CPU figure out the actual physical address to use? No, that's the virtual address you're talking about. When the CPU deals with the programs above it, everything is spoken in virtual addresses. So here the program tells the CPU: I want to load the memory at this virtual address, right? Now, as the CPU, how do you actually get that memory content? You need a physical address; that's how you interact with physical memory. And how do you get that physical address? What's that? Yeah, like I said, the way the CPU does the translation here, unlike for user memory, is that instead of consulting the TLB, it just chops off the most significant bit.
In this case, if you write the address in binary, it's a one followed by a bunch of zeros, then the low bits. The way the CPU does the translation is that it just chops off this top bit, sets it to zero, and uses the result as the physical address to actually access physical memory. The effect is the same as computing, say, 0x80000100 minus KSEG0, where KSEG0 is the constant 0x80000000: setting that bit to zero is equivalent to subtracting 0x80000000. That's how the CPU gets to physical memory. And in the previous slide, in the code, we saw something like this: firstpaddr equals firstfree minus MIPS_KSEG0. If you go to the definition, MIPS_KSEG0 is just 0x80000000, right? What does that mean? firstfree is a virtual address in kseg0; we subtract 0x80000000 and get the actual physical address. Here, we are mimicking the behavior of the CPU when it encounters such a virtual address. That's how we calculate the actual physical address of the first available physical address. So any virtual address in this region is directly mapped to physical memory. Say you only have four megabytes of memory; that means only the first four megabytes of this directly mapped region are usable. If you issue a virtual address beyond that, you'll get some hardware fault (I don't know exactly which fault), because you're trying to access physical memory that doesn't exist. So that's kseg0. And kseg1 is similar: you do the same kind of subtraction, manipulating the most significant bits.
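Both directions of this direct mapping are one-line functions. OS/161 really does provide a PADDR_TO_KVADDR macro for the add direction; the subtract direction appears in the source as the explicit `firstfree - MIPS_KSEG0`, and the function name for it below is ours.

```c
#include <stdint.h>

#define MIPS_KSEG0 0x80000000u

/* Physical address -> kseg0 kernel virtual address (what OS/161's
 * PADDR_TO_KVADDR macro does). */
static uint32_t paddr_to_kvaddr(uint32_t paddr)
{
    return paddr + MIPS_KSEG0;
}

/* kseg0 kernel virtual address -> physical address. For a kseg0
 * address, subtracting 0x80000000 is exactly "clear the top bit". */
static uint32_t kvaddr_to_paddr(uint32_t vaddr)
{
    return vaddr - MIPS_KSEG0;
}
```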
The only difference between the virtual addresses of kernel segment zero and kernel segment one is this: when you access physical memory through a virtual address in kernel segment zero, the CPU consults the hardware cache first, so that data can be cached, while if you access the same physical memory through kernel segment one, the CPU will not use the cache; the cache is disabled. And you will notice that the same physical page is actually mapped to two different locations in the kernel virtual address space, which means there are two virtual addresses that map to the same physical address. So both kernel segment zero and kernel segment one are directly mapped to physical memory. And finally, we have kernel segment two, which is not used in this OS. It's kernel virtual address space, but it goes through the TLB; you can leave it alone, we just don't use it in OS/161. So now if you think about it, every process has this four gigabytes of virtual address space, right? Each has two gigabytes of usable user virtual address space, and every process's kernel virtual addresses are identical. That's why, when you get into the kernel, no matter which process triggered the syscall or trapped into the kernel, you are using the same virtual-to-physical mapping. This may be a little tricky to digest at first, but keeping this picture in mind is very useful when you deal with virtual and physical memory. Otherwise you will soon get confused about when you are using a virtual address and when a physical one. And like I said, the program always uses virtual addresses when it wants to load some content from memory. For example, if you have this instruction, then as the CPU, when you see this address you immediately know: okay, this belongs to the user segment, so it goes through the TLB.
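The dual mapping just mentioned, the same physical page visible through a cached kseg0 address and an uncached kseg1 address, can be sketched like this. The helper names are illustrative, not the OS/161 API; the constant values are from the lecture:

```c
#include <assert.h>
#include <stdint.h>

#define KSEG0_BASE 0x80000000u   /* cached window   */
#define KSEG1_BASE 0xA0000000u   /* uncached window */

/* The same physical address is reachable through two kernel virtual
 * addresses: one in kseg0 (cache enabled) and one in kseg1 (cache
 * disabled). Both windows are fixed-offset, direct mappings. */
static uint32_t paddr_to_kseg0(uint32_t paddr) { return paddr + KSEG0_BASE; }
static uint32_t paddr_to_kseg1(uint32_t paddr) { return paddr + KSEG1_BASE; }
```

Undoing each offset lands back on the same physical address, which is why the kernel can pick either window depending on whether it wants caching (for example, device registers are typically accessed uncached).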
So I consult the TLB to get the actual physical address. But what if there is no entry in the TLB? As the CPU, what would you do? What's that? A TLB fault, right? You don't know how to translate it; there is no TLB entry for you, so you raise a TLB fault. And if you have a virtual address in kernel segment zero, the hardware knows that this virtual address belongs to kernel segment zero, and it just subtracts 0x80000000 to get the physical address. Similarly for addresses in kernel segment one and kernel segment two. You will also find helper macros that do this translation, mimicking the behavior of the hardware, such as PADDR_TO_KVADDR, physical address to kernel virtual address, which is nothing but adding the kernel segment zero base to a physical address to get a kernel virtual address. And at this point, you should already have some idea of, first of all, why kernel pages cannot be swapped out. When you choose a swapping victim, you have to leave kernel pages alone. Why is that? Not exactly, yeah. There will be no TLB fault. That's exactly the reason. The kernel virtual-to-physical translation does not go through the TLB. So if you swap such a page out, how would you reflect the fact that the page is no longer in memory? Normally, when you swap some page out, you also kick out its entry in the TLB, so the next time the process accesses that page, you get a TLB fault. That's how you handle swapping. Now with kernel memory, there is no TLB involved. So you cannot really know when the kernel accesses that page, because the translation is not done by the TLB; you will never get a TLB fault, and you never get a chance to swap that page back in. And secondly, why does alloc_kpages have to return continuous, consecutive pages? We know that for the user address space, if we want to allocate four pages, those four pages can be scattered all around physical memory. That's the advantage of using the mapping, the page table.
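Putting the whole dispatch together, the way the lecture describes the MMU deciding how to translate a virtual address can be sketched as follows. The enum and function are illustrative only, not a real hardware or OS/161 API; the boundary values are the ones from the lecture:

```c
#include <assert.h>
#include <stdint.h>

/* How the MIPS MMU decides to translate a virtual address, as described
 * in the lecture. */
typedef enum {
    KUSEG_TLB,              /* user space: via TLB; a miss raises a TLB fault */
    KSEG0_DIRECT_CACHED,    /* subtract 0x80000000; cache enabled  */
    KSEG1_DIRECT_UNCACHED,  /* subtract 0xA0000000; cache disabled */
    KSEG2_TLB               /* via TLB; unused in OS/161 */
} seg_t;

static seg_t classify_vaddr(uint32_t vaddr) {
    if (vaddr < 0x80000000u) return KUSEG_TLB;
    if (vaddr < 0xA0000000u) return KSEG0_DIRECT_CACHED;
    if (vaddr < 0xC0000000u) return KSEG1_DIRECT_UNCACHED;
    return KSEG2_TLB;
}
```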
But for kernel memory, we have to allocate contiguous pages. Why is that the case? Again, we don't have a page table, and we don't go through the TLB. That's why virtually contiguous kernel addresses also have to be contiguous in physical memory: there is no TLB or page table machinery going on here. So this is how the MIPS virtual-to-physical memory mapping works. To summarize, today we talked about how the virtual memory system bootstraps, how to figure out how many physical pages are available, how to initialize the coremap, what the states of a physical page are, and how MIPS deals with all these virtual addresses. Next time we'll talk about user address spaces.