Jeff is actually at a CS education conference today out in Coffee City, aka Seattle, so I'm going to be continuing the series on virtual memory. A couple of reminders in terms of administrative trivia. The second part of assignment two is due one week from today, so hopefully you are working apace on that. I would suggest that you should be mostly done, definitely done, with fork and waitpid, and you should be at the very least nibbling away at exec, because that is probably going to take you several days to get through. I assume you do have other classes, so allow time for those and for midterms. To pace yourself and finish on time, you should be starting exec right around now, because you're probably going to need a couple of days for things like handling bad calls, the inevitable bugs, and other problems. You want to allow enough time to speak to the course staff and get any remaining problems nailed down. Looking ahead, the week after that it should be off to Tahiti for most people, I assume, for frolicking in the sun for a full week, and once you get back from that it's going to be midterms for this class. Jeff has tentatively scheduled the midterm for March 31st, which I believe is a Friday. The TA staff is going to try to put together something like a review session the week of the midterm. Then the week after that, the first part of assignment three, what we call the coremap, is going to be due that Friday. So next Friday is assignment 2.2; the Friday after that you will be rubbing your toes in the white sand on Tahiti; the week after that you'll be taking the midterm; and the week after that is, if you will, checkpoint 3.1, which is the coremap.
3.1 is not a big, big assignment, but it is the first part of virtual memory, and you're essentially only going to have a week to pull it off, not counting the midterm. We're going to go over how to start the coremap assignment in next week's recitation. I know that sounds a little weird, touching on assignment three already, but simply put, we are not going to have time to talk about it otherwise: the week after next is spring break, then the midterm, and then the week after that the thing is actually due. So I do want to make sure you have some tools to get started with the coremap assignment. The week after that begins page tables, and as you may have heard from last year's class, page tables is a whale of an assignment; it's a big one. Officially you only have two weeks for it, so I would say the earlier you get done with the coremap, the better, because it's going to make the big jagged pill of, if you will, checkpoint 3.2 go down a lot easier. It's a big assignment, but I can also promise you that, like most of the assignments in operating systems, it's a fun assignment and you can actually learn a lot from it. It will also help you prepare, just in terms of the way things fit together, for the exams. So a couple of quick remarks about the exams; we will certainly talk more about them later. Exams tend to be, if you will, conceptual in nature. He wants to force you to think; in other words, it's not a memorization exam. Certainly you need to know what virtual memory is, but you also need to know, for example, if you make such-and-such a change to the hardware, how it would impact the performance of a system: it will get better in this way and worse in that way. Or what kinds of policies might we implement at the operating-system layer that would favor one thing and disfavor another? He wants you to think a little bit and show that you know the material and can actually apply it and think beyond
what was simply talked about in class itself. His exams from the last ten years are posted online; definitely make use of them. There are really no surprises there, and you should go in prepared, so make sure you do know, if you will, the classes. For all of you out there in video land, I do see some orange seats here, and I know there's sometimes the thought that, hey, I don't have to come to class because I can watch the lectures online. That's certainly an option you have, but bear in mind that it's three hours per week, and that adds up pretty quickly. If you think you're going to get through about six weeks' worth of classes, well, that's 24 hours, so good luck pulling an all-nighter and retaining any material from it. So if you haven't been coming to class, now is the time to start going over the material so you don't get overwhelmed the week of the midterm. You people who are here, you can polish your halos; you're already on your way to sainthood.

So let's talk about what's going on in this class today. We're going to be talking more about translation. In other words, we have this thing called virtual memory, with virtual addresses, and we've got physical memory, with physical addresses, and today's lesson, if you will, is how we can shuttle between the two. We have some proposed methods for linking the two up: base and bounds, segmentation, and paging, and we're going to see why we eventually settle on the model that is adopted by most modern production operating systems. A couple of things first. I believe you went over this slide in the last class, is that correct? Okay, quick version. I know this looks somewhat mechanical, but you'll definitely need this slide; take the mechanics in it to heart, because you're going to need to know the guts of it to pull off and code assignment three. The TL;DR: given a virtual address on MIPS, they make it easy for you. Look at the topmost hexadecimal digit. If it begins with 0 through 7, the chip treats it as addressing user space, or at least we hope it addresses user space, and by the way it also goes through what we like to call the TLB, which does, if you will, the address translation. If the first digit is 8 or 9, or a or b, it's what we call direct-mapped: a one-to-one translation where the address does not go through the TLB caching mechanism. There are pros and cons to that. It's obviously very simple, and it's also how we get around the question of how to set up and bootstrap the memory management system in the first place, because if we're going to have this translation thingamabob that I'm going to be talking about later, well, how do we get it set up when we turn on
the computer in the first place. We have to take advantage of the areas of memory that do not use that translation, and hint: that's what you're going to have to use in a lot of parts of assignment three. So essentially, if you are in kernel mode on OS/161, OS/161 always uses direct-mapped addressing for kernel addresses, which makes sense. The reason I mention it is that MIPS as a chip does have provisions for kernel addressing that goes through the TLB, address translation, and the cache. You're not going to be using that in OS/161, unless you want to support it yourself; that's probably assignment number 13. So what I would suggest is: stick with the policy that all kernel addressing is direct-mapped, but user addressing has to go through the TLB, and that's essentially the guts of assignment three. These mechanics may not make a whole lot of sense right now, but as you dig into assignment three, come back to this slide; it's a good jumping-off point. Any basic questions about MIPS address translation at a high level? All right. Okay, here is a graphical representation. All the low stuff is user addresses, and again these are virtual addresses: we don't know what they map to in terms of physical memory, because virtual address 1000 could map to who knows where in physical memory, and user address 7xxx could map to anywhere in physical memory as well. Contrast that with the direct-mapped region: we know for sure that the block beginning with 8 and 9 maps to, if you will, the lower block of physical memory. It's always set, well, not in stone, but in silicon. Kernel virtual addresses are the same kind of thing. The direct-mapped uncached region is used primarily for hardware and device-driver attachment. Mechanism versus policy: that's a tried-and-true theme that Jeff talks about in this class, how we do something versus why we want to do it and what we're trying to accomplish. All right.
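To make the mechanics above concrete, here is a minimal sketch in C of those MIPS address-region rules. The region names, helper functions, and the sentinel return value are my own illustration, not OS/161 or MIPS API; the boundary constants are the standard MIPS ones just described (top digit 0 through 7 is user space, 8 and 9 is direct-mapped cached, a and b is direct-mapped uncached, c and up is TLB-mapped kernel space).

```c
#include <stdint.h>
#include <assert.h>

typedef enum { USEG, KSEG0, KSEG1, KSEG2 } mips_region;

/* Classify a 32-bit MIPS virtual address by its top hex digit. */
mips_region classify(uint32_t vaddr) {
    if (vaddr < 0x80000000u) return USEG;  /* 0x0-0x7: user, goes through the TLB */
    if (vaddr < 0xa0000000u) return KSEG0; /* 0x8-0x9: direct-mapped, cached */
    if (vaddr < 0xc0000000u) return KSEG1; /* 0xa-0xb: direct-mapped, uncached */
    return KSEG2;                          /* 0xc and up: kernel, TLB-mapped */
}

/* Direct-map translation is pure arithmetic: subtract the segment base.
 * Returns an all-ones sentinel for regions that need the TLB instead. */
uint32_t direct_map_paddr(uint32_t vaddr) {
    mips_region r = classify(vaddr);
    if (r == KSEG0) return vaddr - 0x80000000u;
    if (r == KSEG1) return vaddr - 0xa0000000u;
    return 0xffffffffu; /* not direct-mapped; hardware would consult the TLB */
}
```

In real hardware the chip does this classification itself; the point of the sketch is just that kseg0/kseg1 translation involves no table lookup at all, which is why the kernel can run before any translation machinery is set up.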
Basically, memory management here means, if you will, the MMU, which if you think about it is also known as the TLB; it's the thing that actually does the translation for us. That is the mechanism, and it is something that is for the most part implemented in hardware; the translation is the how. As opposed to the policies: who manages the policies? That's the whole purpose of the operating system. The operating system sets up the policies. Remember, the operating system is the policeman, the cop, that makes sure one process doesn't munch on another process, and that one process doesn't exceed its allotted amount of memory. These are policies, and that's one of the main reasons we have an operating system: it's the cop that enforces things. The operating system could in theory also do the translation itself, but we have some problems there. Why can't we do that? What is the operating system? Remember back to day number one: the operating system is, exactly, i.e.,
software. All right, so it's very flexible, but there are sometimes downsides. Why would we possibly want to burn something into silicon and put it into hardware? Hardware is generally blank compared to software: faster, yeah. And what's a downside? Hardware is generally blank compared to software: less flexible, yeah, okay. So we're going to have to make some trade-offs, and that's the goal of today's class. The operating system establishes the policies, but the operating system doesn't want to get involved with actually implementing the translation, for a reason we're already hinting at here. Questions on this? Going once, going twice; you just bought yourself a 19th-century Elizabethan vase, or whatever it is, well, not Elizabethan, Victorian, okay. Ah, okay: the kernel, remember, is a program, and the kernel is too slow. In other words, the kernel does know what it wants to do, and the kernel in theory could do the translation, but in practice it is just going to be too sluggish; we want a machine that's actually responsible for this. So again, the kernel sets the, what were we just talking about, policies, exactly, and the hardware then provides the mechanism. The kernel, the operating system, tells the hardware what to do, and it's the hardware that actually provides the enforcement mechanism. I know there was a question after Monday's class about how we actually prevent one process from touching another process's space, and that is what this class is going to go over in a lot more detail. Let's take a look at this how-not-to-do-it example. Let's say a process says: hey, you know what, I want to use virtual address ten thousand; tell me what physical address this maps to, thanks, bye. In theory that works, because what is the operating system doing? The operating system keeps track of, if you will,
the mappings, and the operating system had better know that virtual address ten thousand maps to whatever physical address it is. So far so good. What's the fly in the ointment here? The user is asking for information, and the user is asking for information about virtual address ten thousand. Whose virtual address is ten thousand? Which user? Do we know? Mm-hmm, there's a problem here. And by the way, do we trust the user? No. So simply because a user says give me the physical address for ten thousand: not a chance. This is the problem: we don't trust giving the user the mapping between virtual and physical addresses. We could, but then we get into, if you will, cooperative multitasking, which is beyond the scope of this class. So basically it works in theory, but in practice, here's the keyword, it is unsafe. All addresses must be translated, but it's not so much the translation as who does the translation. We don't let the user do the translation; that's the key thing. Everyone with me on this distinction? In other words, the operating system has the mapping, the translation, but the operating system can't let the user do that translation, or even really get access to the mapping, because that gets into security and stability leaks. So essentially, all your addresses are belong to us, if you will. What was that video game? Remember, all your base? I'm dating myself. The upshot is that the kernel is going to be piggy about keeping this mapping to itself. So the process says: okay, machine, I want to store to address ten thousand. So far so good, and this is what actually does happen on a real system. Take a look: it goes not directly to the operating system but to the memory management unit, a piece of hardware that is typically next to the CPU or with the CPU, and the memory management
translator thingamabob says: hey, I need to know what ten thousand maps to. And who knows? The kernel. So the MMU asks the kernel. So far so good. How do we do that? Exceptions. We try to keep these exceptions to a minimum, but at least to a certain extent we're going to have to deal with them, because that's how we get information from the kernel. And the kernel says: hey, this is the mapping. But whom does the kernel tell? Yeah, the MMU; that's the key thing. The kernel tells the MMU this is the mapping; the kernel does not tell the user. So the MMU does the translation, does the store, and the process moves on its merry way. The key distinction between this slide and the last slide, the takeaway here, is that the operating system still keeps the mappings, but it's the hardware, the MMU slash TLB, that does the translation, and the user never gets to touch or see those translations. That's the enforcement mechanism. Questions, comments? Yes: in other words, how do we know whether or not a user is allowed to access that? Good question; as a matter of fact, can you hold on about 30 seconds? We're going to get to that. Other questions? Okay, let's answer your question now. Let's say, here, ten thousand. Think of it like the user is over here, and the user needs to access location ten thousand. What happens? It gets intercepted by the MMU, the MMU asks the kernel, and the kernel comes back. At this point the kernel decides, and the kernel responds positively: yes, this is okay, ten thousand maps to some physical location or other, and then the translation actually happens. So far so good. The key part is that the kernel responds right here with a positive affirmation and says: yes, here is the translation. All right, now let's say the process tries to access location ten thousand again. Remember what we were saying earlier: we get information from the kernel via exceptions, but we want to try to
keep these as few as possible because they take time. We've already populated the MMU, which is a cache, if you will, with this translation. So the second time we access this address, it goes to the MMU, the MMU already knows, and bingo bingo bongo, right over to the physical address. So let's look at this again. The first time, the MMU has to ask the kernel for help because it doesn't know what ten thousand maps to; the kernel responds, here it is, and the access goes through to physical memory. The second time, the MMU already has the mapping, so it doesn't have to ask the kernel for help; it can go straight through. Now let's say we want to access location twenty thousand, and let's say the operating system knows that this process is not supposed to access twenty thousand. It goes to the MMU; the MMU says, do I have a translation for location twenty thousand? I don't. So what is the MMU going to do? It's going to ask the kernel, exactly. Now, we've already stipulated that the kernel knows this process is not supposed to access that, so what's going to happen? That answers your question. Okay, shrewd question here; did everyone hear that? Right now the TLB has a mapping of ten thousand to twenty-ten, and this is for, let's say, process A. What if process B tries to access that memory location? What will actually happen? You're right that in the abstract we have the potential for a leak. Here's the quick TL;DR. Roughly speaking, the MMU is per CPU; in other words, each CPU has its own translation unit, for a very good reason. If another CPU over there is running, say, process B, and process B goes through its own memory management unit, that unit is not going to have this mapping. So that's a partial answer to your question. But let's say we have a timer fire and we now switch,
that is, have a context switch to another process. This is actually really good, and it's something we're going to be dealing with in assignment three. We still have a translation sitting in the MMU, and now we're switching to the next process. Do we want that next process to be able to use that translation? So what are we going to have to do as part of a context switch? Dump it, exactly. As a matter of fact, take a look at our OS/161 code; consider it a mini homework assignment to see how this is actually implemented, in other words, how we prevent another process from using the cached entries of a previous process. Side note: when you get to writing sbrk, maybe jot this down, you're going to need to do some manual work like this for related reasons later on. Does that answer your question? Okay, other questions? Yes: every single memory access, that is correct. Every time any process wants to access any memory, and remember, with MIPS that's going to be a load word or a store word or what have you, that's exactly what happens. The way it's set up on a modern computer, the address lines coming out of the CPU go to the MMU, aka the TLB, and from there to the memory location. Oh, okay, I'm not sure what you're driving at; remember, the stack pointer itself is a register, and registers are not part of mainboard memory, so they don't get translated. I can move the stack pointer up and down; that's not something that gets translated. What does? Yes: every time you access memory outside of the CPU. You can think of registers as being in-CPU memory; that's why we like them, they're extremely fast, hyperfast. But if we want mainboard memory, yes, we have to go through this translation. That is exactly right. Which, to digress a bit, is kind of why there's this holy war between RISC and CISC, because
you have fewer instructions that need to be translated. For example, if you have a machine opcode that says add register A plus register B and store the result in register C, did we access memory? No, so no translation is needed. Essentially the only machine opcodes that have to get translated are the loads and stores. Does that answer your question? Okay. And now do you also see why we need hardware to do the translation? Because every single blooming memory access does indeed have to get translated. If we did this in software: as a matter of fact, I alluded to this in a previous lecture. When you get to virtualization, when VMware first came out with practical virtualization around 2000 or so, there were certain portions of code that actually did need to be translated in software, for security and stability reasons, and even though that's a small fraction, it's one of the main reasons why classic virtualization is slow as frozen molasses: you're translating on the fly, in software, byte for byte. That's why in practice we need hardware's help. I remember when I took this class, as Jeff said: hardware help. The user asks for help from the operating system, and the operating system has to ask for help from the hardware. Other questions? Okay, and in that bad-access case we just nuke the process. All right, let's talk about one possible way we can actually put this into practice. How can we translate virtual to physical addresses? The simplest way is what we like to call base and bounds. Essentially each process gets a base physical address and a bound; in other words, you begin at who-knows physical RAM location x and you get y megabytes, yada yada. The checking and the translation are very, very simple. This does work in theory, and to a certain extent in practice, but let's look at how it
works out in real practice. Let's say we have our ten thousand again; it goes to the MMU, the MMU does not have an entry, so it asks the kernel for help, and look at what it gets back from the kernel: this is this process's virtual memory, that's it, very simple. In other words, this process gets all memory beginning at physical location 40,000, and you get 30,000 bytes' worth of physical memory. Piece of cake: two variables, very easy to code in software, easy to conceptualize. So in terms of what's going on: this is the block of physical memory we were actually given, and look, virtual address 0 maps to 40,000, virtual address 10,000 maps to 50,000, et cetera. That's why the translation is so easy. You can keep going: virtual address 20,000 would map to 60,000, exactly. Let's keep going: how about virtual address 30,000? How about virtual address 40,000? Right, that's one thing we do have to check. So 24,000, there we go, that works. And here we go, 45,000: you see it is bigger than the bound, and guess what's going to happen on the next slide? Boom. There we go, piece of cake. So let's see: simple, easy. Take a look at that computation, addi or whatever it is, all right, you get the idea. Now the cons: what are the flies in the ointment here, do you think? Hmm, yes, okay, I think you're getting at something there. The process is limited to the amount of memory it's given; in the abstract that's good, because we don't want the process to go beyond the amount of physical memory we give it, but you're getting at something more. What about multiplexing? What's the problem with giving it, again, what was this again, right, certainly. Okay, let's look at this in terms of bounds. This is a base of 40,000 in physical
memory, with a bound on top of that, so this is its allocation. What's the problem with this? Yes, yeah, right, I think that's what you're driving at: we're allocating one big chunk, and that's going to cause problems. The first time, when we boot a fresh machine, we've got all this free memory, but as the machine grinds along, it's what we talked about on Monday: we're going to have all these fragmentation problems. So that's the big thing here. Let me actually skip ahead for just a second. Did Jeff go over this slide already, the one with the typical address space in user space? Okay. For those of you in the back, if you put on your opera glasses, or a telescope, you can see that in the virtual address space a typical user process is given in practice, down at the bottom we have a bunch of stuff that is executable code, somewhere in the middle we've got the heap, and way up here we have one or more stacks. This is typical, and it's done for very good reasons that are beyond the scope of this class. In other words, this is what we want to give to the process. But how does this play out with that previous slide, where we have one big blob of memory? In order to accommodate this layout in virtual memory, what do I have to allocate in physical memory? The same thing, right: I've got to bracket this entire range, which leads to, what's this going to be, exactly: unused space. This is a waste. Everyone with me? In other words, the fact that we have, in effect, discontinuous virtual addresses means that, if we only support one big blob of physical memory, we're going to be wasting a lot of physical
memory. So: fragmentation, fragmentation. We've got internal fragmentation here, which means that in order to cope, processes are very likely to say, you know what, instead of giving me 10 megabytes, give me 100 megabytes, which in turn puts pressure on, if you will, external fragmentation, because the bigger the request per process, the less you're going to be able to satisfy requests from other processes. So far so good? Okay, base and bounds: waste. Going back to this: this is what we would like to have support for, the typical in-practice production virtual memory layout. Instead of supporting this whole bugger as one huge allocation, let's instead have several sub-allocations, and that is what we're talking about with segmentation. For each one of those regions we're going to have a segment: a code segment, a heap segment, stack segments. So it's still base and bounds, but with, if you will, multiple segments. This also leads to something OS/161 doesn't get too much into, but it's definitely fair game for the exam in terms of policies, and that is permissions. We've been talking about whether an address is valid or not, in other words all or nothing; but you can have several sub-flavors of permissions. For example, you've probably talked about data execution prevention as a security-hardening measure: you typically want to make sure that in certain areas of memory that store data, you can't run code, and conversely, parts of memory that store executable code, you don't want to be able to write to. So in other words, we can have several sub-flavors: not just can a particular process access a particular segment, but can it
read but not write? Can it read but not execute? I can be more finely grained with that, as opposed to one big segment where I can't drill down like this. All right, so once again each segment has a base and a bound, and the math is essentially the same, except that where before we had one big base and bound for the entire process, this time we're allowing the process to have multiple bases and bounds. So let's see: 10,000, let's say a process wants to access that. The MMU doesn't have an entry, so it asks the kernel, and the kernel says, okay, this falls within this particular segment; in other words, virtual address 10,000 maps to physical address 43,000 with a bound of 1,000. So 10,000 is the base address and it maps to 43,000, and if I wanted, say, 11,000, it would map to 44,000, yada yada yada. All right, let's try 400. Is 400 within 10,000 to 10,000 plus 1,000? No, it's not, so we need to ask the kernel for help, and the kernel could reply yea or nay. It turns out that yes indeed, this process is allowed to access that memory address, because there is a segment; the process has multiple segments, if you will, that have been assigned to it by the kernel, and you can see that virtual address 400 fits within the segment at 100 with bound 500. In other words, everything from virtual address 100 to virtual address 600 is within that segment. So there we go, and you can see it actually maps to physical address 16,300. And now let's say we try, well, you've seen deadbeef already, or you will with assignment three, right? By the way, they're not trying to be cute; it's actually a debugging tool. If you access an address of hex deadbeef, it means it's simply invalid; it's
done deliberately. Sorry, not so much an address: whether you see it as an address or a value, it means there's something bad with it. So you know what comes next: yeah, boom, we're done. Questions on segmentation? How do we set the bounds in the first place? Okay, and by the way, you're going to be doing this in assignment three. A couple of things; the question is how we come up with these bounds, these segments, in the first place. Roughly speaking, there are five segments that each process has; it can have more, but the five classical segments are stack, heap, and, if you will, text, data, and BSS. Text, by the way, which is executable code, is just called text; don't ask me, don't blame me. Text, data, and BSS are the static, or global, segments. They are determined at compile time, and they're actually read from disk when you run a program; take a look at load_elf, ELF being the Executable and Linkable Format, and that's what sets up the static segments. As for the two other big ones, the stack and the heap: remember, the stack is something that exists dynamically, as a call stack, as subroutines get called, and the stack grows up and down, so we just give it a potential range of memory and it uses as much as it happens to need. The heap is user-managed; the user simply makes a syscall and says give me a block of memory, and the kernel comes back and says here you go, this is your new segment. Does that answer your question? Quick version: where do these segments come from? The static segments you get from disk, from the ELF file; the stack kind of takes care of itself; and the heap is something that the user itself requests. Now this leads to interesting questions, because the user could ask for, you know what, give me 10
trillion gigabytes, and the kernel will come back and say, ha, you think? And so you'll get an error return; there's definitely some sanity checking that goes on. By the way, does this explain something? You've been on timberlake, or you've been programming something, and what's the name of that error when you dereference a null pointer? Segmentation fault. Everyone see why now? Because these are segments: if I try to access, say, deadbeef right here, it's outside of every one of these segments, so I'm faulting on the lack of a segment. That's where the term comes from, and in assignment three you get to enforce segmentation faults and kill off the bad user processes. Okay, so this really goes a long way toward solving our problems. Translation is still pretty easy, we can now have a whole bunch of segments, and we can also have more granular permissions. It's also a much, much better fit for the way we typically organize virtual memory in a typical process. So we're really a long way toward solving our problems, but we're going to have to be a little bit pickier here in terms of cons. What was our biggest con with the one big honking segment? Waste of space. Now we've gone a long way toward addressing that, right? Where was that slide, this one here: with multiple segments we don't have to worry about wasting those red areas anymore, because we can simply say give me a segment that maps to this, and to this, and it takes care of that very nicely. So what's still our problem with the current approach to memory management? Have we gone as far as we possibly can in terms of the big honking block of memory? We've improved it, but we still have mini honking blocks. There's wasted space, kind of, well, it's not so much waste, well, you know what, it is a type of waste, in that these green
What are they? They're contiguous in virtual memory, which means they have to be contiguous in physical memory. In other words, with the segmentation approach we can map each of these things just fine, but each of these mappings still has to map to something underneath, in physical memory, that is essentially exactly the same size. And that still doesn't address our problem: on a production system under memory pressure, good luck finding more than essentially one contiguous physical page at a time. In other words, we might need, for the code segment, say, a hundred contiguous virtual pages, and we're not going to be able to find a hundred contiguous physical pages. We're going to find one here, one there; it's going to be all over the place, because physical memory is fragmented. Maybe not as badly as if we had to request this entire thing, but we still have that as a problem. Okay, questions on this so far?

Okay, this is the ideal situation. We've talked about improvements; ideally we're trying to get rid of all this fragmentation and have things as granular as possible. So let's go to the other extreme. This is our wish list: map any virtual byte to any physical byte. Okay, yes, the operating system can't do this, but hardware certainly can, in theory. And as a matter of fact, here we go: TLBs. Let's throw a cache at it — essentially, throw hardware at it. This is a type of memory that does... well, let's look at a specific example. Think of it as a key-value store. It's already been pre-populated with some examples here: you tell me the virtual address, and I tell you the physical address, for any given byte on the system. So, using that, in constant time: you tell me 800, I say look in physical address 306 for what you're looking for; you tell me 110, it's in 354. If it's not on this list, presumably you get a segmentation fault.
So, let's say I want to map 800. There we go: it's going to look at all of these and, in constant time, immediately return my lookup. If you've taken 495 90 with Chris Schindler, he talks a lot about this. Essentially, what's going on behind the scenes is that there's a bunch of simultaneous comparators, and one of them, in effect, raises its finger — "I've got a match" — and the others do not. Then this one maps a one out of here, those map zeros out of here, and so it determines, again in constant time, exactly what mapping you want.

Okay, this works in theory. What's going to be the problem? Hardware: it's fast, but it is... yeah. Well, it's not necessarily limited — if I have the budget of, who knows, Fort Knox, I could probably do exactly what I just described — but it's limited in size in practice. We can't make them arbitrarily large, certainly not at a reasonable price. So here we go: segments are too large, but these CAMs — if you will, TLBs — if we make a TLB big enough to map the entire blooming memory, it works great in theory, but in practice it's just going to be unwieldy and impractical. So what we're trying to get at is some sort of middle ground between the two. We can't map individual bytes, but can we get something that is bigger than a byte and smaller than a segment? That's what we're driving at, and that's where Jeff is going to pick up on Monday. Questions I can answer so far?

Okay, so a quick recap. At the beginning of the class, I went over a little bit of, if you will, the mechanics of the MIPS address space as it appears to the MIPS chip. For now, the takeaway is essentially this: 0 through 7 is user space, and it goes through the TLB for translation; 8 and 9 are kernel direct-mapped, and that's what OS/161 uses for all kernel addresses.
Then we talked a little bit about how we can actually do some of this translation, and what the enforcement mechanism is: we can't let user space directly have the mapping to physical space. Remember, if a user says "I need to access whatever memory maps to virtual address 1000," the operating system does not tell the user the mapping; the operating system only tells the memory management unit, and the MMU implicitly takes care of the translation. So the user always has to go through the MMU, and the MMU does not leak, if you will, the actual mapping to the user. That's also the enforcement mechanism, because if there is no entry, the MMU goes back to the kernel, and the kernel is just going to kill the process.

Then we looked at how to apply this. We said, let's start off with one simple, big, huge honkin' piece of memory, with a base address and a bound. That works in theory, but the problem is we have to map all of a process's virtual space into that one big honkin' segment, and that leads to fragmentation really, really quickly. So let's scale it down a bit and come up with segments: rather than one big segment for the entire user process, we have one segment for each of the segments, if you will — stack, heap, what have you. Much more granular, and we no longer waste the space inside the virtual space of a particular process, but we still need contiguous physical memory corresponding to each segment. So we've got to do better than that. We looked at whether we can map on a byte-by-byte basis: great in theory, but way too impractical. So we're homing in on the solution, which is what Jeff is going to be talking about on Monday. Questions, comments, complaints? Okay: a hundred and sixty-eight — well, a hundred and seventy — hours to assignment 2.2.
Okay. So use us — that's what we're here for; that's what the office hours are for; that's what this course is for. Good luck, people. Enjoy Tahiti!