All right, lights, camera, action. Welcome back to recitation. I assume everyone's caught their breath at this point. Today's recitation is mostly about the mechanics of the beginning of assignment 3. If we finish up a little early, I'd be happy to take questions about the current assignment. But before we get into the mechanics of the assignment, a little about the memory manager you currently have in OS/161, dumbvm: how it works, as an example, and why it is not going to work very well vis-à-vis what you need to implement for assignment 3. So what we're talking about is interfaces: the functions that dumbvm currently has in place. If you haven't done so already, you'll want to take a look through those. Don't spend a lot of time studying how dumbvm works in detail; just get a bird's-eye overview, because it is not, repeat, not how you want to design your good virtual memory manager. It's an example of how not to do things. Partly that's by design, and partly the designers were just looking to throw together something quick and dirty to support assignment 2. So that's what we're talking about: the virtual memory interfaces in dumbvm, and then what we need to do to design a good virtual memory manager. Assignment 3 is divided into three parts, and as I indicated before we skipped out of lecture, for better or for worse the first part is due the week after the midterm.
So this is this week; next week you're on the white sands of Tahiti for spring break; the week after that is the midterm; and the week after that, assignment 3.1 is due. That's coming up fairly soon, and if you're taking off the week of spring break, that doesn't leave a whole lot of time for this assignment. It's a fairly short one as OS/161 assignments go, but take that for what it's worth. Assignment 2.2 was big; assignment 3.2 is, unfortunately, even bigger. Fair warning about that. But assignment 3.1, I will say, is orthogonal to assignment 2.2, which means that if you don't have assignment 2.2 completely done, you can still get started on and complete this portion of the assignment, even if your assignment 2 syscall code is not fully functional. You will need the assignment 2.2 code for the page tables part, assignment 3.2, so that's going to be a double whammy if you don't have assignment 2 functioning by that point. And the last part, assignment 3.3, is swapping, right at the end of the semester. That's a cool assignment and we'll get to it in a bit. A note on terminology: we use the terms virtual memory and address space somewhat interchangeably, but they're not quite the same thing. The virtual memory system is the whole subsystem that manages memory, in OS/161 or any other operating system. A subset of that is the concept of the address space and the address space structure, which we just started introducing in lecture. So the address space data structure is one part of your virtual memory system. What are the goals, and what's going on here? A virtual memory system maps virtual addresses to physical addresses in memory.
Ideally, when a user references a virtual address, it goes to the TLB cache, gets translated by hardware, and the OS doesn't have to deal with any of this. But occasionally there's going to be a page fault, and then we go to the operating system, where we do have to use our data structures to repopulate the TLB with updated translations. A couple of other goals of the virtual memory system. We want each user process to see an address space that is uniform and contiguous: "mine, mine, mine, all mine." Each process gets what appears to be an entire 4-gigabyte address space all to itself, with its segments always uniform and apparently in the same, predictable locations. The MMU, the memory management unit, is the hardware that does this; the TLB is part of that memory management hardware. We've talked about this before, so quickly: user programs should not have to manage memory themselves. The kernel manages it for them, with a few exceptions. The user does have to tell the kernel how to grow the heap, for example; that's the whole point of the heap being a user-managed segment of memory. But aside from that, virtual memory management is done by the kernel itself. Another purpose of virtual memory is process isolation. It provides stability and security, so that one user process does not munch on or corrupt another user process. A process can bomb its own address space, but it can't bomb another process's address space, and if it does bomb, it doesn't take down the entire system; you avoid lots of blue screens.
We can also implement things like swapping, which lets, say, a 4-gigabyte system actually use 8 gigabytes worth of memory by swapping pages out to disk. So that's the general overview. We've been talking about this a lot in lecture; now let's dig into some of the nitty-gritty. In terms of interfaces, these are the functions already declared for you in vm.h. They're also already implemented, in a hacky version, in dumbvm. If you look at arch/mips/vm/dumbvm.c, you'll see implementations of these functions, or some of them. To go through them: vm_bootstrap appears to be what you call when you initially bring up the system; it's your chance to initialize your data structures. This is actually going to be problematic, for reasons we'll get to in a few minutes, so it is not what it seems to be. If you're taking notes, put an asterisk next to it: vm_bootstrap is problematic; it's not really a usable bootstrap. alloc_kpages and free_kpages are the functions that get called by kmalloc and kfree. If you want to know how the kernel actually gets memory: say I call kmalloc and say, hey, give me 10 bytes of memory, and kmalloc returns a pointer to 10 bytes. How does it work under the hood? kmalloc occasionally calls down and asks your memory manager for volumes of memory in 4K chunks. In other words, kmalloc calls alloc_kpages, and kfree calls free_kpages, and that's how kmalloc portions out memory.
These are the two functions you're actually going to need to write, in part, for assignment 3.1; this is part of what dumbvm implements improperly. So again: vm_bootstrap doesn't work very well right now, and alloc_kpages and free_kpages are the routines used by kmalloc and kfree to actually get and release chunks of memory that are then portioned out to the caller. Questions on the first three so far? Okay, vm_tlbshootdown: to be honest, for now you can forget about it until you get to swapping. There were a couple of questions in lecture about occasionally needing to flush entries out of the TLB; that's what this does. Unless and until you get to swapping, you can glide by it. vm_fault is really important, though. That is the routine that gets called every time there is a page fault, so you're going to need to know about it early on, and you're going to do a lot of implementation and optimization in it. Remember: when a user references a virtual address, the TLB checks whether there's a matching entry. If not, the hardware raises a page fault, which means this routine immediately gets called: "memory manager, there was a page fault, what do you want to do?" And you, in turn, need to tell it what to do. That's the essence of assignment 3.2, and to a lesser extent assignment 3.3. Next, the address space interfaces. I'm going to skip over these somewhat, because they're more applicable to assignment 3.2. But we have these functions for the address space, the data structure used by a user process to represent how its address space appears in userland.
You can see there are basically functions to set up the address space, destroy it, and copy it; you can imagine that copy is what gets called at fork. as_activate and as_deactivate are kind of nifty: essentially, they set up the TLB to be used with a particular address space. Think about it. Remember we talked in lecture about how every user process gets to assume that, say, address 0x400000 is the beginning of its code segment. That's great; we have different address space structures to keep track of which process we're talking about. But what about the TLB? Say process A runs, its executable is at virtual address 0x400000, which maps to, say, physical address 0x100000, and those translation entries are currently in the TLB. Then there's a context switch, and the second process starts running, but wait a minute: there are still entries in the TLB saying that virtual address 0x400000 maps to physical address 0x100000. That's true for process A, but it's not true for process B. In other words, we need to update the TLB. That's what as_activate and as_deactivate do, so that we can now point virtual address 0x400000 to whatever physical address process B uses. You don't have to deal with all of this right now, because what's going to matter most for assignment 3.1 is the material on this slide plus some of the material we'll cover later. The address space functions on this next slide are what you'll implement for assignment 3.2, which is lengthier, but it's a little further down the pipe.
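To make the context-switch problem concrete, here is a toy model of what as_activate effectively has to accomplish. This is a standalone sketch, not OS/161 code: the TLB size, the entry layout, and all the names here are invented for illustration (the real MIPS TLB has 64 entries managed through special instructions).

```c
#include <stdint.h>

/* Toy TLB: each entry caches one virtual-page -> physical-page translation
 * belonging to whichever process loaded it. */
#define TLB_SIZE 4
#define TLB_INVALID 0xFFFFFFFFu

struct tlb_entry { uint32_t vpage, ppage; };
static struct tlb_entry tlb[TLB_SIZE];

/* What activating a new address space must effectively do on a context
 * switch: the cached translations belong to the old process, so they are
 * all invalidated. */
void toy_as_activate(void)
{
    for (int i = 0; i < TLB_SIZE; i++) {
        tlb[i].vpage = TLB_INVALID;
        tlb[i].ppage = TLB_INVALID;
    }
}

/* Look up a virtual page; returns the physical page, or TLB_INVALID on a
 * miss (on real hardware, a miss raises the fault that lands in vm_fault). */
uint32_t toy_tlb_probe(uint32_t vpage)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].vpage == vpage)
            return tlb[i].ppage;
    return TLB_INVALID;
}
```

After the flush, process B's first access to 0x400000 misses and takes a fault, at which point the memory manager installs B's translation rather than reusing A's stale one.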
Other functions here basically define a segment, a region, in memory. Remember the question at the end of lecture about the various segments: the stack, the heap, the globals? That's what these particular functions set up, and that's part of assignment 3.2. By the way, you can see how this ties in with loadelf, which is what you were using to set up and run the init process. One of the things loadelf does is interface with your memory manager to define the memory segments and load content from the executable into particular areas of memory. All right, and the last one, which you can punt on until later: the sbrk syscall is essentially what user space uses to manipulate the heap. This is what malloc calls down to. In the same way that kmalloc calls down to alloc_kpages, the user-level malloc, if you've ever looked at its internals, calls down to a syscall, either sbrk or, on modern operating systems, sometimes mmap. So I'm not spending a lot of time on that right now. All right, dumbvm. Do look through it, and do know its overall design; it gives you an idea of how to implement a memory manager, but it's not a good design, and it certainly will not pass the tests for assignment 3. So let's use it as a jumping-off point to see how we can implement a proper virtual memory manager and what you're actually going to need in terms of working code.
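The sbrk contract that malloc builds on is simple enough to sketch in a few lines. This is a toy model, not the real syscall: the heap base, limit, and error convention here are invented for the sketch (a real sbrk returns `(void *)-1` and sets errno on failure).

```c
#include <stdint.h>

/* Toy model of the sbrk contract: the kernel keeps a per-process "break"
 * (the end of the heap); sbrk(n) returns the old break and moves it up by
 * n bytes. All addresses and limits below are made up. */
static uintptr_t heap_start = 0x10000000;   /* hypothetical heap base      */
static uintptr_t heap_break = 0x10000000;   /* current end of the heap     */
static uintptr_t heap_limit = 0x10040000;   /* hypothetical 256K max heap  */

/* Returns the previous break on success, 0 on failure. */
uintptr_t toy_sbrk(intptr_t amount)
{
    uintptr_t old = heap_break;
    if (heap_break + amount > heap_limit || heap_break + amount < heap_start)
        return 0;                           /* refuse to move out of range */
    heap_break += amount;
    return old;
}
```

Returning the *old* break is the key design point: malloc gets back a pointer to the fresh region it just asked for, and calling `toy_sbrk(0)` queries the current break without moving it.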
This, by the way, is physical memory and the way it gets set up when OS/161 loads at boot, or when you boot your laptop or a real machine running Linux or whatever else. Typically, physical memory begins at zero and goes up to lastpaddr, the last physical address; that's the amount of memory you actually have. Last time we talked about virtual addresses: on a 32-bit system, by definition, you always have 2^32 virtual addresses. How much physical memory you have depends on how many RAM chips are in your system, which means it could be 4 gigabytes, or less than 4 gigabytes: 512 megabytes, 256 megabytes. So lastpaddr moves around; we don't know it in advance, and that's actually going to be a problem we'll see in about 10 minutes. The kernel always loads itself in low, beginning at zero and loading on up. Now, the kernel loads in at address zero in physical memory, but that's not how you refer to it in virtual memory. You've probably noticed in your error messages, "TLB miss on load" or a kernel panic or whatever, that the faulting addresses often begin with 0x80000000, 0x80-yada-yada-yada. That's because, remember, there are different addressing modes on the MIPS chip; I talked about this last week. One of them is kernel direct-mapped addressing: when you're in kernel mode, you can refer to any physical address by simply adding 0x80000000 to it.
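The kernel direct-mapped addressing just described is pure arithmetic, no TLB lookup involved. Here is a minimal standalone sketch of the idea; the function names are mine, though OS/161 itself provides a macro for this translation in its MIPS headers.

```c
#include <stdint.h>

#define MIPS_KSEG0 0x80000000u

/* Kernel direct-mapped translation: a physical address is visible to
 * kernel-mode code at paddr + 0x80000000. */
static inline uint32_t paddr_to_kvaddr(uint32_t paddr)
{
    return paddr + MIPS_KSEG0;
}

/* And the inverse: strip the kseg0 offset to recover the physical address. */
static inline uint32_t kvaddr_to_paddr(uint32_t kvaddr)
{
    return kvaddr - MIPS_KSEG0;
}
```

This is why those panic messages point at 0x80000000-and-up: they are kernel addresses, and the kernel sits at physical zero.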
So if the kernel is located at physical address zero and up, it appears at virtual address 0x80000000 and up, which is why you keep seeing errors referring to addresses that begin with 0x80000000: they're addresses in the kernel, because that's where it actually begins. All right, now, how does dumbvm actually work? We have alloc_kpages and free_kpages; let me pull the code up on the screen. [Fiddles with the projector.] Okay, sorry about that, but I think it's worth it. We were talking earlier about these functions, alloc_kpages and free_kpages. These are the two routines that get called by kmalloc and kfree when they need blocks of memory, and again, kmalloc and kfree request and release blocks of memory in 4K chunks. So let's look at what dumbvm does. Right now, alloc_kpages calls down to this getppages routine, which we'll look at in a second, and getppages calls a routine named ram_stealmem. ram_stealmem does just what it says: it steals memory. Remember the slide earlier, where we had the kernel in the low part of memory and a huge chunk of free memory above it? That's what ram_stealmem carves from. Ah, there it is.
ram_stealmem: if I say, give me 10 pages of physical memory, what it does is take this firstpaddr and push it up by 10 pages. It just keeps stealing memory, and firstpaddr gets closer and closer and closer to lastpaddr until it eventually hits it, and the kernel crashes because it's out of memory. That is how dumbvm gets you memory: it simply starts at firstpaddr and marches to the end. Now back to the code. Take a look at what free_kpages does. Nothing, which is why dumbvm leaks memory. So what you essentially need to do for assignment 3.1 is implement working versions of these two functions, alloc_kpages and free_kpages: free_kpages so that it actually frees memory, and alloc_kpages so that, instead of calling this janky ram_stealmem thing, it calls a working getppages routine that checks your coremap for the status of free pages and returns one appropriately. You can use dumbvm as a template for the shape of things, but these are the functions to know for assignment 3.1: free_kpages, alloc_kpages, and getppages right above it. Back to the slides. Physical memory management: what's going on in dumbvm, as you just saw, is that free_kpages does nothing, so it just keeps stealing RAM until it runs off the end of memory and, boom, crashes the system. Instead of just moving a pointer up, you're going to need a data structure, which we call the coremap, that tracks information about physical pages. Remember: your page table tracks information about virtual pages; the coremap tracks information about physical pages.
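The bump-and-leak behavior just described fits in a few lines. This is a toy reproduction of the pattern, not dumbvm's actual code: the memory bounds are invented, and I return 0 on exhaustion where the real path would panic.

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Toy version of dumbvm's allocation path: firstpaddr just marches toward
 * lastpaddr, and nothing is ever given back. Values below are made up. */
static uint32_t firstpaddr = 0x00080000;   /* first byte past the kernel */
static uint32_t lastpaddr  = 0x00100000;   /* pretend we have 1 MB of RAM */

/* Returns the physical address of the stolen block, or 0 when RAM runs out. */
uint32_t toy_stealmem(unsigned npages)
{
    uint32_t paddr = firstpaddr;
    if (firstpaddr + npages * PAGE_SIZE > lastpaddr)
        return 0;                          /* out of memory: game over */
    firstpaddr += npages * PAGE_SIZE;
    return paddr;
}

/* dumbvm's free path, faithfully reproduced: it leaks. */
void toy_freemem(uint32_t paddr)
{
    (void)paddr;   /* nothing tracked, nothing reclaimed */
}
```

Notice that "freeing" a block does not make the next allocation reuse it; the pointer only ever moves up, which is exactly the leak your coremap exists to fix.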
Part of the information the coremap needs to track is what's required to reclaim, to recycle, pages. You can already see where this is going: there's going to be an entry corresponding to each physical page saying whether it's in use. If it's not in use, we can say, here you go, here's a physical page, and when the caller is done, we mark it free again instead of permanently leaking memory. All right, dumbvm and the user address space: in other words, what's the problem and what do we need to do? Let's look at how dumbvm actually works; you can read the details in the code itself. Remember last Friday? Jeff proposed several potential memory models, one of which was simply to allocate the user one big base-and-bounds segment of memory. That didn't exactly work, so we tried the segmentation model: multiple segments, which is better than one big segment because you don't have to map the gaps between the virtual segments. But there's still a problem with that, and it's within the segments themselves. Remember, we came to the conclusion that the virtual addresses in each segment have to map to a contiguous area of physical memory. Does that ring a bell from last Friday? Anyway, it was discussed. That is exactly the implementation of dumbvm: a slightly janky version of the segmentation approach to virtual memory. So we do have multiple segments, and you can see the data structure for dumbvm here: virtual address base one, physical address base one, and the number of pages, so base and bounds, and the same thing for segment number two.
So that's the implementation. This is real working code, what you're using right now: the base-and-bounds approach to memory segmentation. The problems: right now we have a fixed number of segments. There it is: segment one, segment two, and a stack. That's your working code; there's no support for more than three segments. And not only that: two of the segments are used for text and data, so we have no room for a heap at all. As a matter of fact, if you look at your test code for assignment 2, you will see that malloc is never called, because frankly, malloc would bomb out right now. It's hard-coded: there are exactly three segments, segment one, segment two, and a stack. All right, so we're going to have to change things up a little. A real operating system, and what you're going to be implementing for assignment 3, has a variable number of segments. I can tell you right now that in practice, if you make it five or six, you'll be safe; you don't have to increase it much, but you definitely need to support, at the very least, a heap. The big thing, though, is that we also need variable-size segments. The stack and the heap need to be variably sized; that was the question right at the end of lecture about what changes as the program runs. Some segments, specifically the text segment, your executable code, are allocated when a program launches and stay put, but other segments, specifically the stack and the heap, do change dynamically, and your memory manager is going to need to support that. In other words, the stack and heap segments need to be able to change while your program is running.
So that's change number one. Does that make sense? Questions? To recap: dumbvm implements the segmentation approach to memory, and while that's better than one big segment, it's still not good enough. What we need to do is free up the number of segments, and free up the size of the segments to float dynamically as the program executes. Problem number two: address translation. Again, dumbvm uses the segmentation approach, which essentially uses base and bounds for multiple segments. Now, that actually has an upside: it's very quick. If we take a fault, how do we calculate the physical address? We take the virtual address, adjust it by an offset, and in dumbvm we're given our physical address, because, and here it is in the code, the physical and virtual addresses are assigned in lockstep when the segments are allocated. So all we really need to do is a quick addition or subtraction, and we're done; it's very easy. In one sense that looks good, and it looks like we're about to take a step backwards, because our replacement will be harder to implement. In dumbvm, each segment of contiguous virtual memory maps to a corresponding block of contiguous physical memory. The quick translation comes with exactly that downside: a block of virtual memory maps to a corresponding contiguous block of physical memory, and we don't want that, because on a real system under memory pressure, it's going to be very difficult to find, say, 10 contiguous blocks of physical memory. What we need on a real system is the ability to map scattered physical pages into one contiguous virtual memory block.
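Before leaving base and bounds, here is the quick translation dumbvm gets for cheap, as a standalone sketch. The struct and function names are mine, not dumbvm's, but the arithmetic, a bounds check plus a subtract and an add, is the whole trick.

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* dumbvm-style segment: one contiguous virtual range mapped to one
 * contiguous physical range. */
struct segment {
    uint32_t vbase;    /* first virtual address of the segment */
    uint32_t pbase;    /* first physical address it maps to    */
    uint32_t npages;   /* length, in pages                     */
};

/* Returns the physical address, or 0 if vaddr fails the bounds check. */
uint32_t seg_translate(const struct segment *seg, uint32_t vaddr)
{
    uint32_t top = seg->vbase + seg->npages * PAGE_SIZE;
    if (vaddr < seg->vbase || vaddr >= top)
        return 0;
    /* Offset into the segment is preserved across the translation. */
    return (vaddr - seg->vbase) + seg->pbase;
}
```

The speed comes from the lockstep assignment: because virtual offset equals physical offset, no per-page lookup is needed, and that is precisely what forces the physical block to be contiguous.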
Instead of requiring that a block of virtual memory map to a block of contiguous physical memory, we now want to be able to use multiple scattered areas of physical memory. Say I have a block of virtual memory here and a pool of physical memory there. I could have this virtual page map to this frame, that one map to that frame, another one map to, I don't know, some frame over there; you get the idea, while the remaining frames are used for other stuff. In other words, even though I'm using discontiguous frames in physical memory, they map to something that's contiguous in virtual memory. The downside, of course, is that it's slightly more complicated to implement, but again, that's the point of the memory manager. Questions so far? Next: page allocation. What do we have in dumbvm right now? This was also the question right at the end of lecture. With dumbvm, everything is allocated statically when the program launches: when it is created, or when you call exec to launch another program. What that means is that if a program is, say, 100K long, we immediately allocate 100K for the executable code, we immediately allocate however much we need for globals, and we immediately allocate a hard-coded 72K stack, all up front. That's not what we want. What we want is to support dynamic allocation. So, if you look at some of the old posts for OS/161 from previous classes, this doesn't apply; it changed as of last year. You need to support dynamic allocation: essentially, allocate pages on demand.
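Both ideas from this section, scattered physical frames behind one contiguous virtual range, and allocating only on demand, can be sketched with a tiny per-page table. Everything here is invented for illustration: the sizes, the "frame allocator" counter, and the convention that entry 0 means unmapped.

```c
#include <stdint.h>

#define PAGE_SIZE 4096u
#define NPAGES    8                 /* tiny address space: 8 virtual pages */

/* Per-page mapping: virtual page number -> physical frame number.
 * Frames need not be contiguous or in order. 0 means "not mapped yet"
 * (we pretend frame 0 is reserved so that works). */
static uint32_t page_table[NPAGES];
static uint32_t next_free_frame = 100;   /* stand-in for a real frame allocator */

/* Demand paging in miniature: if the page has no frame yet, grab one at
 * fault time instead of up front. */
uint32_t toy_translate(uint32_t vaddr)
{
    uint32_t vpn = vaddr / PAGE_SIZE;
    if (page_table[vpn] == 0)
        page_table[vpn] = next_free_frame++;   /* the "page fault" path */
    return page_table[vpn] * PAGE_SIZE + vaddr % PAGE_SIZE;
}
```

A page that is never touched never costs a frame, which is exactly how a sparse 16-megabyte array can live in four megabytes of RAM.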
On-demand allocation allows dynamic growth of the stack and the heap, and it also allows sparse segments and data structures. One of the first tests Jeff is going to throw at you declares, I think, a 16-megabyte array, and you're only going to have about four megabytes of memory, and you'd better be able to support it. You can, if you have on-demand allocation. So that's another rule: don't allocate memory until you actually need it. Questions on this? A lot of this, again, you won't have to worry about until you get to 3.2; just refer back to it later. Swapping is assignment 3.3: essentially, if you run out of physical memory, you page memory out to disk. Obviously, dumbvm does not support that. The user-space material I've been talking about here is relevant to assignment 3.2; you're going to need to know it, but it's not what you need for assignment 3.1. What you need for assignment 3.1 is the coremap and ram_stealmem material, so let's go on to the next batch of slides. By the way, I'm going to try to combine these two decks of slides. Last year they were split into two different recitations; this year, the way things are timing out, that makes no sense, so we need to cover the whole thing today. So, okay, here we go: the coremap. What you need to do to pass assignment 3.1 is this. Number one, reconfigure the kernel for assignment 3. That's going to pull out dumbvm and the code that's in it, and you're going to start replacing it with your own code.
You have to design a data structure. In the same way that you have, say, your process table, your file table, your file handles, you now need to think about a representation of physical memory, the coremap, and just as you initialize your page tables and your file tables, you're going to need to initialize your physical page table, a.k.a. the coremap. Then there are the two functions, alloc_kpages and free_kpages, which we looked at a few minutes ago. You can see examples of how they don't quite work in dumbvm, but essentially you want these two functions to consult the coremap: alloc_kpages searches the coremap for a free page, and free_kpages says, "I'm done with this page," updating the coremap to mark that frame free. The goal, in terms of points for the class, is to pass tests km1 through km5. By the way, these tests are run directly from the kernel menu; the tests for assignments 3.2 and 3.3 run from userland, but these run at the kernel menu. Questions about the overview? Yes: "Is our system going to crash the moment we boot up, for 3.1, because we don't have any of this?" As soon as you remove dumbvm, well, yes, exactly. You won't even be able to boot into the kernel, because there's no allocator. There are ways to make that transition a little smoother; talk to the course staff. There's going to be some careful arithmetic involved, and you'll want to do a lot of planning ahead of time.
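The coremap-backed alloc_kpages/free_kpages behavior described above can be sketched as a flat array with a contiguous-run search, since kmalloc sometimes wants more than one page at a time. This is a toy sketch under invented assumptions: the frame count, the entry fields, and the frame-index return convention are all mine, not the assignment's.

```c
#include <stddef.h>

#define NFRAMES 16   /* toy machine: 16 physical frames */

enum { FRAME_FREE = 0, FRAME_USED = 1 };

/* One entry per physical frame; the first frame of an allocation also
 * records the run length so the whole run can be freed later. */
struct cm_entry {
    int state;
    size_t run;
};
static struct cm_entry coremap[NFRAMES];

/* Search for npages contiguous free frames; return the first frame's
 * index, or -1 if no run is long enough. */
int toy_alloc_kpages(size_t npages)
{
    for (size_t i = 0; i + npages <= NFRAMES; i++) {
        size_t j;
        for (j = 0; j < npages; j++)
            if (coremap[i + j].state != FRAME_FREE)
                break;
        if (j == npages) {                       /* found a run: claim it */
            for (j = 0; j < npages; j++)
                coremap[i + j].state = FRAME_USED;
            coremap[i].run = npages;
            return (int)i;
        }
    }
    return -1;
}

/* Mark the whole run starting at 'frame' free again. */
void toy_free_kpages(int frame)
{
    size_t n = coremap[frame].run;
    for (size_t j = 0; j < n; j++)
        coremap[frame + j].state = FRAME_FREE;
    coremap[frame].run = 0;
}
```

Unlike ram_stealmem, freed frames really do come back: free a run and the next allocation can land right where the old one was.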
Probably one of the toughest parts of this assignment is that you won't even be able to do a lot of breakpoint debugging, because until the allocator is at least minimally functional, your kernel won't boot. Again, speak to us. We have some shortcuts so that you don't have to have it completely functional, and we can go over the absolute minimum needed to get to a booting kernel menu so you can start your debugging process. But yes, that is a known issue: as soon as you pull dumbvm out, you can't boot. Other questions? All righty. So, here we go. The coremap is a map of physical memory, tracking each frame, a.k.a. page frame, a.k.a. physical page, what have you. What we're keeping track of is the state of physical pages; the address space, again, keeps track of virtual pages. Now, problem number one is initialization. Remember, we said you need to initialize this data structure, and here's another chicken-and-egg problem: not only will you not be able to boot your kernel without it, but when can we actually initialize the thing? There are a couple of places. The reason we're discussing this is that a lot of old forum posts used to talk about initializing it in the bootstrap function; to make a long story short, that won't work. That was the original design from the folks at Harvard, but it's not going to work. Let's see why. The problem with initializing the coremap in vm_bootstrap is the order of initialization in main.c. Let's look at it: this is main.c, and this is the code that runs when OS/161 is initially booted. You're going to need to start paying attention to this code, because there are chicken-and-egg dependency issues in it.
Essentially, the first thing that gets called is ram_bootstrap. And then vm_bootstrap doesn't get called until all the way down here, and the reason that's a problem is, what's the coremap for? It's essentially used to keep track of physical memory, and we need to be able to support things like kmalloc requests as the kernel is booted. And the problem is that we're going to get our first kmalloc request in proc_bootstrap. Do we see a problem? Houston, we have a very big problem, because we need to be able to support kmalloc in proc_bootstrap, and how do we do that with our coremap if we don't set up our coremap until down here? We ain't even going to boot. So that's why I say that you're not going to be able to do your coremap initialization in vm_bootstrap, as counterintuitive as that is. You're going to essentially have to put it somewhere before proc_bootstrap, either in ram_bootstrap or in your own function. But essentially, you have to initialize the coremap, and this is the takeaway here, before proc_bootstrap runs. It is something that has to be done very, very early on. So let's go back here. Again, the upshot is that we need to support kmalloc requests, and before we initialize the coremap, we can't. What's suggested in some old forum posts is to use the dumbvm allocator during boot and then switch over. That will no longer work with the new tests. Here's why: if you use, let's say, dumbvm to support the first few kmalloc requests during booting, and then you switch over and put your coremap in, that area of stolen memory is never going to be reclaimed. You will leak memory permanently, and you will flunk tests. So, to make a long story short, that approach will not work. What you instead need to do is initialize the coremap before the first kmalloc call, which means, essentially, before proc_bootstrap.
So how are we actually going to make this work? What we want is, sometime before we get to proc_bootstrap, to install our coremap above the kernel, all right, so that it's already set up to handle all kmalloc requests by the time we get to proc_bootstrap. Questions so far? So essentially, takeaway number one is that we need to initialize this bugger really early on. Now let's take a look at some additional considerations. What's the purpose of the coremap? It's to hold the state of physical memory. So in terms of software design, what type of data structure do you think we might want? How about number one: is access speed going to be important? All right, yeah, very. This is on a hot path: I need to get information about a physical page quickly. Okay, how about another consideration: dynamic resizing. Are we ever going to change the size of our physical memory? Unless we're on a server and we hot-plug RAM or something like that, probably not, okay. So we know essentially that the number of entries in our coremap is going to be fixed, and that we need to access it quickly, which tells us what we probably want in terms of data structure. A hash map? I'm sorry? A hash map or something. That would definitely be very fast. How about keeping it even simpler? No? Huh? How about an array? Yeah, just a simple array, okay. So just to be clear, the key thing is that our size is never going to change. So we can declare an array and be done with it. We already talked about when to initialize it: unfortunately, very early on. Talk to the staff about this, but essentially, chicken-and-egg issues. Okay, now, where are we going to allocate it? Let's talk about this, another chicken-and-egg issue. How about allocating it on the heap with a kmalloc? What's our problem with that?
We're trying to support kmalloc. The point of the coremap is so that we can have kmalloc. So we can't use kmalloc to set up the coremap. That's a non-starter. Well, can we have a global array? That would work very nicely, except for one small fly in the ointment, which is: how big an array? It depends on the amount of physical memory. And how much physical memory are we going to have in our system? You tell me. That's in the sys161.conf file, and the staff will be testing your kernel configured with various amounts of memory. And indeed, this is true of a real operating system. When you buy Windows on a computer, or in a box in, let's say, a computer store, it doesn't say this version of Windows is suitable for 16 gigabytes and this one is suitable for 32 gigabytes. You don't buy, let's say, a Mac OS X that says this version will only run on 64 megabytes. You expect it to be able to use however much memory you have. So the problem is that we can't declare a global array either. Essentially, what we're going to have to do is manually create this data structure ourselves. It's a bit of a pain, but this is actually what you have to do. A question? Could you set a minimum, though? I'm sorry? Could you go with a minimum? I'm not sure what you mean. So, like, with your example, a lot of the time those systems will say you need a minimum of this amount of memory for it to work. So couldn't you just go that route? You definitely could, and that will allow you to boot, but then how are you going to support the rest of the memory, right? So you could. I think what you're proposing is kind of a bootstrap approach: declare a minimum to get it started, and then once you're started, maybe expand it. That actually is ingenious, okay? I frankly had not thought about that. I bet that probably would work.
In other words, declare a minimum, get booting, and then you could swap the pointer over to something kmalloc'd. Yeah. Sweet. Okay. The coremap can be changed later? I'm sorry? The coremap can be changed later, right? It depends on how you set up the coremap. The other thing is that you can just create it manually, okay? Either way, I will say, I promise, the arithmetic is conceptually fairly straightforward, but it's mechanically intricate. It's like exec, okay? Conceptually straightforward: you import the arguments and you export them. Mechanically, you know what it is: it's a pain, okay? It's kind of the same thing with the coremap. But yeah, that's actually cool. Excellent. Yes? I think the issue with that, though, is that if you size it for the minimum initially, then you won't be able to change that later. Well, probably not. I mean, I think what you're talking about is guessing the absolute minimum we need to boot the system and get kmalloc supported, and then, I don't know, I would have to think about that too, but I think it's an intriguing approach. But you can also set it up manually. Yeah, because you don't necessarily need to create the coremap all the way through right away; you need enough to just get it going, and then you can fully initialize it later on. Well, I will say you do have enough information to be able to calculate it manually. But, you know, take it, try it, see how it works. Definitely. Absolutely. Aside from that, we'll be using the coremap a lot. As I said earlier, it's shared, so you're going to have to take care of the usual synchronization issues. That's all she wrote. Yeah. Oh, initialization: mechanically intricate. Oh, yeah, it should be page-aligned. Oh, yeah, in terms of what actually goes into the data structure, remember, it's going to be an array of thingamabobs.
Essentially, what you need to keep track of right now is, for a particular page frame, whether or not it's allocated, and the size of the allocation. Because remember, kmalloc could come in and say, you know what, give me ten contiguous pages, and the coremap is going to have to keep track of the number of pages in each particular request. Right now, that's essentially all you need to keep track of in your coremap. You're probably going to have to add more goo later on, but for the purposes of assignment 3.1, that's really all you need. What's this? That's assignment 3.1, okay. So, to recap, what do you need for the coremap? You need to design a data structure that represents physical memory, you need to initialize the coremap, and you need to write a couple of functions to support kmalloc and kfree. Take a look at dumbvm for kind of how it's done right now; again, that's not a good design going forward. Most of the stuff in dumbvm relates to assignment 3.2 and following, but for the stuff vis-à-vis assignment 3.1, what you want to do is take a look at the functions that relate to 3.1 and think about what you actually need to do to implement them. In terms of when you do the initialization, it's going to have to be very early on in the boot process, at least for part of it, okay? Think about the nature of the data structure: probably it's going to be an array, possibly in combination with something else. In terms of where you actually want to put it, again, you can't simply do a kmalloc allocation or a straight-up global declaration, so you're going to have to either create it manually or maybe use some hybrid. That's actually cool how that works out. And I think that should be about it. All right, any final questions, people?