Thank you again for joining session 29. So, welcome, all of you. Let us see what we will be covering today. Today's focus will be on the MMU — basically, how virtual memory is supported by ARM. The MMU is the hardware module that can be integrated into the SoC if the designer would like to have virtual memory support in the system. And if we buy an MMU module from ARM and need to configure it based on our needs, we should have a good knowledge of how this MMU controls the virtual memory support: how it provides the support and how it can be configured, similar to how we learnt about the cache memory controller. So, this discussion will focus on what support features are available with the MMU and how they are to be configured. There is a complete program, with a sample example, in one of the references I have mentioned, the ARM System Developer's Guide. What I will be doing is to help you understand what features are supported by the MMU and which coprocessor registers help us in achieving the configuration of the MMU. Then, given a system requirement, how we go and change them — a good example is there in the book, and I think with the knowledge I am going to share about the support provided by the MMU, you should be able to understand that example system in the book. Maybe we can take it as an assignment, so that we go through that piece of software for a better understanding. Very good. In this class we will also talk about the differences between the MMU and the MPU, which I touched upon in the last session when I explained the MPU in detail; and page tables you are also now familiar with because of the last session.
So, let us see what support is provided by the MMUs given by ARM, what page table formats are supported and how we can configure them, what control registers we need to be programming, and then we will see an example system which is given in the slides; I will briefly talk about it here to complete this discussion on memory. This will be the last session in which we talk at length about memory, because after this we will move over to tools and other parts of ARM as well. So, what is a memory management unit? You are already familiar with this, so I will just summarize it here. The MMU simplifies the programming of applications because it provides the resources needed to enable virtual memory. The MMU supports using virtual memory in building the application as well as running it on a processor with limited physical memory — an additional memory space that is independent of the physical memory attached to the system. So, we can assume almost any amount of virtual memory; you are free to assume the existence of a huge memory in the system. You do not have to worry whether the physical memory in the system is 1 MB or 10 MB or whatever it may be. As long as an MMU is there, we can realize what we want; but if you have very little physical memory, then the performance will be impacted — the program can still run in your system. There is some minimum, of course: you cannot have a tiny memory and expect it to run well, but it is always possible with limited physical memory available, if the MMU is there in the system and if you are configuring the MMU properly. Configuring the MMU also includes configuring the cache and configuring the write buffer.
And then, if you have chosen the right memory map for your application, we will be able to run it in the system. Of course, on the board you have, along with the ARM processor, the SoC, and the memory, you should make sure that what you assume to be the physical memory configuration is actually there in the system. The board should be designed accordingly, or you have to modify the memory map of the physical memory based on the board design — which addresses it is referring to — so that you know where the physical memory is residing and can configure the MMU in a way that your code runs effectively on the board. So, let us see. The MMU acts as a translator, which converts the addresses of programs and data that are compiled to run in virtual memory. When I say compiled to run in virtual memory: the tools — the linker and the loader — assume some locations in virtual memory. Say some code resides starting at some address like 0x40000, and the data is at addresses around 0x80000. Accordingly, the code would have been generated to access these locations. What do I mean by "code is generated"? Suppose there is an LDR instruction, and we have to access the address pointed to by R2 and load the value into R1. Before we run this instruction, we should have made sure that R2 is loaded with the proper address. So I could have a move-immediate of 0x80000. (Sorry, I am writing from bottom to top, but I think you will be able to read it.) So let me write: MOV R2, #0x80000.
Then LDR R1, [R2] — that means we are assuming that data is located at this address, and we are trying to move the data which is at that location into R1. So, this is the address being used by the program: this is the virtual address. But in the actual system where this program is running — where ARM is executing this instruction — the processor assumes it is accessing address 0x80000, while we know the physical memory view is different: there may be no address like 0x80000 in the physical memory; it may run only from 0 to, say, 0x20000. So, this is the memory we have, but our program is compiled, the executable is already built, and it assumes the data is at that location. Now, how can this program, assuming that address is where the data is stored, run on this system which has memory only up to 0x20000? It is still possible, because we have an MMU, and as a system programmer you have taken care of configuring the MMU properly: whenever the processor generates 0x80000, you map it to, say, 0x10000, assuming the data actually starts at 0x10000. It is a relative mapping to that base, so 0x80004 gets converted to 0x10004. Effectively, during runtime this address keeps on being translated — it is going to be translated for every access, not done once and then forgotten; every access where an LDR or STR is involved, this translation happens. But remember that in the MOV instruction itself the translation is not happening, because that is only moving a constant into register R2: the constant 0x80000 is stored as part of the code and simply gets copied into R2.
But when the LDR instruction is executed, whatever is the content of R2 is going to be put on the address bus. So 0x80000 is going to come out when this instruction is executed, and at that time the MMU acts: it looks at the address and translates it. It translates not only the addresses generated by LDR and STR instructions; it also translates instruction fetches. Suppose this code resides, in the original program, at, say, 0x45000. Then R15 will be initialized with 0x45000, and it will generate this address to fetch the instruction. At that time too the MMU comes into play: if the code is actually loaded at, say, 0x5000, it does not matter, as long as the mapping matches where that particular page starts — then this 0x45000 generated to fetch the instruction also gets modified. So, every address generated to fetch an instruction, as well as every address generated by an LDR or STR instruction, gets modified by the MMU in real time, dynamically, based on the configuration of the MMU, so that a proper address is generated, goes to the physical memory which is in the system, and gets accessed. Effectively, where the programmer thought the code is and where the data is going to reside is completely different from reality: the addresses are modified in real time to access the different locations in physical memory. This is what is happening for every access. So, you should understand this particular translation.
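To make the arithmetic of this translation concrete, here is a minimal sketch in Python — not ARM's actual hardware mechanism, just the relative mapping described above. The virtual base 0x80000 is the lecture's example; the physical base 0x10000 is the assumed location of the data in RAM.

```python
# Minimal sketch of the relative translation described above:
# every virtual address in the data region is mapped by replacing
# the virtual base (0x80000) with the physical base (0x10000),
# so the offset within the region is preserved.

VIRT_BASE = 0x80000   # address the program was linked to use
PHYS_BASE = 0x10000   # where the data actually sits in physical memory

def translate(virtual_addr):
    """Translate one virtual address to its physical address."""
    offset = virtual_addr - VIRT_BASE   # position inside the region
    return PHYS_BASE + offset

print(hex(translate(0x80000)))  # 0x10000
print(hex(translate(0x80004)))  # 0x10004 -- the 0x80004 -> 0x10004 example
```

The real MMU does this per page, not per region, but the offset-preserving arithmetic is the same.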
So, the MMU translates to the actual physical address where the programs are stored in physical memory while they are executed. This translation process allows programs to run with the same virtual addresses. The processor, when it generates the addresses for accessing the code or accessing the data, still generates the original virtual addresses the programmer assumed — there is no change, because the executable was built assuming 0x80000 and 0x40000 for the data and the code. You cannot go and change the code unless you build it again with the compiler and linker, and a whole lot of other things that I will be talking about in the tools lectures. So, unless we change the code generation to accommodate different addresses, the executable will continue to assume those addresses, and the processor executing it will also generate them. We have to take care of that in the MMU. Similarly, if the programmer assumes the code resides at a particular location, R15 will accordingly generate the original virtual addresses. Suppose within the code there is a branch: the relative addresses it computes will be based on that assumption, and if it is an absolute long jump, it will generate the same virtual address — it will replace the value in R15 with the original virtual address where the program was thought to reside. But because we are moving the code to some location which is different from this, we have to take care of the translation at runtime. That is the job of the OS — or I assume that, as a programmer, you may be developing OS code on this.
In that case we should be aware of all these things; or, as an application developer, you should be aware that these things do happen underneath. Very good. So, while being held in different locations in physical memory, execution proceeds with virtual addresses. Now, let me give you a quick view of the differences between the MPU and the MMU. The primary difference between the MPU and the MMU is the addition of hardware to support virtual memory. The MMU has virtual memory support, and everything you saw supported by the MPU is also there in the MMU — it is all comprehensive, done in one module. The MMU hardware also expands the number of available regions. One more thing: if you recall the MPU, I said there can be only eight regions. That limitation came because the MPU maintained the region information in the region registers of coprocessor 15: only eight register sets were there, and each held information about one region — the access permissions, the write buffer setting, and, in companion registers, the cache behaviour. All of these together controlled one region of memory. So the regions were numbered 0 to 7 — eight regions — and each had different attributes; based on the number, the higher-numbered regions had higher priority, so their attributes would take over ownership when regions overlapped while a task was running. These were only eight regions; if we continued to have only eight regions with MMU support also, it would not be possible to support many tasks.
I said multiple tasks can be run — we are bringing in the MMU because we want to run multiple processes, and multiple processes will each have multiple pages: code, stack, data inside each of them. Because they can be large, there will be many pages for each of them. So, based on the process as well as on the type of memory, the regions will be different, and with just eight regions supported we will not be able to achieve what we want for a multiprocessing system. What happens is that multiprocessing actually becomes possible with the help of the MMU: the MMU manages multiple regions, not limited to eight, because it maintains them in software — that means it has a page table, and the page table itself is maintained in main memory, so there is no limit imposed by a number of registers. Whenever a task runs, the registers in the MMU will be configured so that that particular region set is active. So it is not restricted as in the MPU — keep that in mind. The MMU expands the number of available regions: the MPU had a limit of eight regions since it was maintaining them in registers; the MMU maintains the region attributes in page tables, which are held in main memory.
So, once the page table is in main memory, the number of page table entries can be many, so it does not matter how many regions you are defining. For every process — I told you that every process has its own page table: P1 will have its own page table, P2 will have its own page table — each entry will correspond to one region. It could be a page meant for code; another entry may be a page for the data; if multiple code pages are there, there will be many entries. There is no limit, in the sense that there is some physical limit — once you know the number of entries in the page table, you know how many pages you can have; it is not infinite, since no system in the physical world has infinite resources — but it is not as low as eight regions. Very good. Now, the region attributes we saw in the MPU: we said regions can have multiple sizes and different start addresses — sizes of 2^(n+1) with n running from 11 to 31, i.e., regions from 4 KB up to 4 GB; access permissions can be read, write, or no access; the cache can be write-back or write-through; and the write buffer can be enabled. All of these are possible with the MMU too. The MPU had a limit of eight regions, whereas in the MMU we can have any number of entries: the maximum limit is the number of page table entries that can be supported, which is much more than eight. There you go. Now let us go into virtual memory. This picture should be familiar to you, because in the last session we talked about virtual memory and physical memory, and how a page maps to a page frame in the physical memory. So this is the virtual memory, and this is the physical memory,
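As a quick check of the size range just quoted, here is a small sketch computing the region sizes 2^(n+1) at the two ends of the range; the formula and the bounds n = 11 to 31 are from the lecture, the helper name is mine.

```python
# MPU region size formula from the lecture: size = 2 ** (n + 1),
# with n running from 11 to 31.

def region_size(n):
    return 2 ** (n + 1)

print(region_size(11))  # 4096       = 4 KB, the smallest region
print(region_size(31))  # 4294967296 = 4 GB, the largest region
```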
that is, the main memory. Now, take a task — I am calling it a task or a process; both can be used synonymously. We assume the task starts at this virtual address, and it has all its regions: code, stack, initialized data — whatever it is, all of them are in this area. Now, whenever this task is running on the ARM processor, the addresses generated for accessing the code as well as the data will be coming in this range only, and they will all have a base of, say, 0x400000. Now suppose you are maintaining, in the MMU, a register we will call the relocation register — we call it that because we are relocating this address to some other address; it is basically a page table entry. Whenever the base 0x400000 appears, this relocation register replaces it with, say, 0x800000. Now let me tell you what happens on the board: ARM is there, physical memory is there, and there will be some address decoder in the system. The address now comes to the decoder from the MMU, because the MMU has translated it — whatever address comes from the ARM processor goes to the MMU, and the MMU translates it to the 0x800000-based address. The address decoder looks at this address, and, based on the memory chips and the address ranges they occupy, it enables the chip containing that memory. So if it in fact maps to this address range, then in the physical memory we can keep the code and data meant for task 1, so that ARM can run it. This is what is happening: whenever the MMU sees this 0x400000-based address coming out, it replaces it with the address where the memory actually is. So before we program
the MMU, we should be aware of where the physical memory in the system is, what its addresses are, and which addresses are not valid. Only then can the MMU be configured — whether it is done by the OS, or, if you are writing a small microkernel or have the responsibility of initializing the MMU yourself, then you had better know where the main memory is in the system, at which addresses it is actually located, and where the programs are located, and accordingly initialize the MMU so that we get the right conversion. Very important; I hope this is clear to you. Now let us see another task compiled to run at the same virtual address. I am bringing in another task: see, task 1 may be done by a person A, and task 2 is given to another company — it could be another company, or another person in the same company writing another application. They do not know of each other's existence; they do not even know that the applications they write are going to run in the same system. So how can we expect them to coordinate so that the address spaces they assume do not overlap? We cannot assume that, and we do not want that restriction either. So what happens is that each developer assumes whatever addresses they want: the whole address space is available to me as an application developer; I decide where my program is going to be, and I inform the OS vendor — not that we send a mail about it; it is all captured in the executable itself. When the executable is built, it has all the information about what the assumptions are. The OS, when it is loading the executable, will do the job of configuring the MMU. It is not done manually; it is all done by the OS — otherwise, if you are writing a kernel, maybe you should do that yourself.
So then it does not matter: task 1 is assuming this address, and task 2 is also assuming the same address — no problem, as long as only one processor is there and either T1 or T2 will be running at a time. Whenever T1 is running, the translation is one mapping; whenever T2 is running, I configure another translation — I configure the relocation entry in the page table in the MMU so that the proper address is generated. So both can run at the same virtual address: we simply change the value in the relocation register, mapping each task to a different location in the physical memory. Now suppose T1 and T2 are not running together, but both reside in memory: say I want to keep task 2 at, say, 0x760000. Task 2 was also originally built assuming the same virtual address — that does not matter: whenever T2 is running, this relocation register will be loaded with that value, so whenever task 2 generates an address, it will be converted to the 0x760000-based address and it will start accessing from there. Effectively, both tasks' memory — code and data — can reside in the physical memory at the same time, and based on who is running at a particular time, the translation is set accordingly: you modify this entry, so that the translation now points at the other area. This is done by the OS — this is what happens on a context switch. Very good. Whatever a page in virtual memory is converted into in the physical memory is a page frame; that is the convention used by all OSes, so we should be aware of that. Now, why do we need multiple relocation registers? As I said, multiple processes may assume different layouts, so we need to have different mappings from virtual
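What the OS does on a context switch can be sketched as follows. The numbers are illustrative values consistent with the figure's example, not exact values from the board: both tasks are assumed linked at virtual base 0x400000, with task 1's memory at physical 0x800000 and task 2's at 0x760000; switching tasks just means reloading the relocation value.

```python
# Sketch (illustrative values): each task is linked at the same virtual
# base but resides at a different physical base. The "relocation
# register" holds the mapping for whichever task is currently running;
# a context switch simply reloads it.

VIRT_BASE = 0x400000                          # both tasks linked here
PHYS_BASE = {"T1": 0x800000, "T2": 0x760000}  # where each really lives

relocation = None  # the single active mapping, set on each context switch

def context_switch(task):
    global relocation
    relocation = PHYS_BASE[task]   # OS reloads the relocation value

def translate(virtual_addr):
    return relocation + (virtual_addr - VIRT_BASE)

context_switch("T1")
print(hex(translate(0x400010)))  # 0x800010 -- T1's copy is accessed

context_switch("T2")
print(hex(translate(0x400010)))  # 0x760010 -- same VA, now T2's copy
```

The same virtual address reaches different physical memory depending on which task is active — which is exactly why the two developers never needed to coordinate.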
to physical. An area of physical memory pointed to by the translation process is known as a page frame, and the ARM MMU hardware has multiple relocation registers supporting the translation of virtual memory to physical memory. It is not restricted to one register — I told you we modify the entry there; we do not have the limitation of a fixed number of registers. We can have multiple entries, and if task 2 is running, it will use another entry and run. The MMU system must be able to translate many pages to many page frames; that is the requirement of the MMU. Otherwise, if it also had some limitation of only 8 pages, or only 100 pages, it would be limited in enabling many processes and tasks to run in the system. So basically, we are able to enable many processes to run in the system, even though there is some physical limit. Good. Now, I briefly explained this already, and I think we should have good clarity on it, but let me tell you once more about pages, page frames, and the translation cache. It is a cache, but it is different from the data cache and instruction cache that we talked about: it is totally different hardware in the system, and it comes into play only when there is an MMU. The MMU does a translation from virtual to physical, and for that translation any 32-bit address coming in will be translated to another 32-bit address — that means any page number here will be replaced with a frame number. Now you can imagine, if there are so many processes and so many pages, there will be many entries for this conversion. To perform this, we do not want to go to the page table every time, because the page table itself is in memory — the page table is what holds this translation information. We saw the relocation register: the page table is nothing but a set of relocation
registers, in effect, held in memory. So every time, we would need to access the page table, find out the frame number, and then go to that particular frame to complete the access — but that would be expensive, because two memory accesses would be involved. To avoid that extra access (the first time you have to do it anyway; you cannot escape that), note that once the first access is done, we know the mapping: for this particular task, this particular page number maps to this frame. This mapping is a 4-byte entry — the page table may have very many entries, but each entry is just 4 bytes, one word, as in most processors. So the page table entry is just a word. What do we do? We keep it in a high-speed cache. Whenever any address is generated, the hardware checks whether the translation that was in the page table is cached here or not. If the page was not accessed before, its entry will not be there; if the lookup misses, it goes and reads the entry from the actual page table and then makes an entry here, so by default it becomes cached. The next time the processor generates subsequent addresses to the same page, it is served from here, and the translation happens without touching the page table — that is the advantage. The first access goes: check the cache, walk the page table, fill the entry; from the next time onwards, only the cache lookup is needed. And going through this cache happens in a single cycle — not a memory cycle, which would take multiple cycles; by single cycle I mean within the same clock, please remember. Good. So the TLB caches translations: it caches the translation of recently accessed pages.
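The lookup order just described — TLB first, page-table walk only on a miss, then fill the TLB — can be sketched like this. The 4 KB page size, the table contents, and the names are illustrative assumptions, not ARM's actual descriptor walk.

```python
# Sketch of a TLB in front of a page table: translations are looked up
# in the TLB first; only on a miss do we "walk" the in-memory page
# table, and the result is then cached so subsequent accesses to the
# same page avoid the extra memory access.

PAGE_SHIFT = 12                        # assume 4 KB pages
page_table = {0x80: 0x10, 0x45: 0x05}  # page number -> frame number
tlb = {}                               # the small, fast translation cache

def translate(vaddr):
    page, offset = vaddr >> PAGE_SHIFT, vaddr & 0xFFF
    if page in tlb:                    # TLB hit: no page-table access
        frame = tlb[page]
    else:                              # TLB miss: walk the page table...
        frame = page_table[page]
        tlb[page] = frame              # ...and cache the translation
    return (frame << PAGE_SHIFT) | offset

print(hex(translate(0x80004)))  # miss, walks the table -> 0x10004
print(hex(translate(0x80008)))  # hit, served from the TLB -> 0x10008
```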
It is a cache, but it is not caching data or instructions, as the caches we usually speak of do — it is caching the translation of addresses. It serves both instruction accesses and data accesses: whether we want to access data or fetch an instruction, the processor generates a virtual address. Now, a whole set of addresses will have the same translation. Why? Suppose you have 4 KB pages: within the 4 KB offset there is no change in the mapping. The page's starting address is changed to the frame's starting address; if you add 4 on one side, you effectively add the same 4 on the other. Only the offset portion changes; the base address does not. So within a 4 KB page the translation does not change, and most addresses, whether instruction or data, will fall within the current page. So every time another instruction is fetched or another data item is accessed, we do not have to newly go and find out the translation: once we know the translation for a page, any access within that page uses the same translation. That is why it is the translation of recently accessed pages — recently accessed, because the number of entries in the TLB is limited. It could be 16, or 32, or maybe more, but it is itself not very many, so only recently accessed pages will be there. And the replacement policy is least recently used: we will replace the entry whose translation was not used for quite some time — that one will be thrown out — and we keep the
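The replacement behaviour just described — a small, fixed number of entries, evicting the least recently used translation — can be sketched with an ordered dictionary. The 4-entry capacity is only for illustration; real TLBs hold, say, 16 or 64 entries.

```python
from collections import OrderedDict

# Sketch of the TLB's LRU replacement: capacity is fixed (4 here, for
# illustration), and when the TLB is full, the translation that was
# used least recently is evicted to make room for the new one.

class TinyTLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # page -> frame, oldest first

    def lookup(self, page):
        if page in self.entries:
            self.entries.move_to_end(page)  # mark as recently used
            return self.entries[page]
        return None                         # miss

    def insert(self, page, frame):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[page] = frame

tlb = TinyTLB()
for page in (1, 2, 3, 4):
    tlb.insert(page, page + 0x100)
tlb.lookup(1)         # touch page 1 so it counts as recently used
tlb.insert(5, 0x105)  # page 2, now the least recently used, is evicted
print(tlb.lookup(2))  # None -- evicted
print(tlb.lookup(1))  # 257  -- survived because it was touched
```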
latest entries, so that the translation is available for all the recently accessed pages. It is really a fully associative cache — of, say, 64 entries in the implementation we will be looking at; there are other implementations with more entries, but here we manage with 64. Why is it fully associative? Because whatever page number comes in, taken from the upper portion of the address, a content-addressable search is done so that we can find out whether that entry exists or not. If it does, we look at that entry, get the page frame number, replace the page number with it, and use that to perform the access. So it is a fully associative cache. What does it contain? Page table entries — and please remember, a page table entry is 4 bytes. So there are 64 entries of 4 bytes each, and each of them is maintaining a page table entry. That means we can have, per entry, the access permissions, whether the write buffer has to be used, and whether the cache has to be write-back or write-through — those decisions can be taken at the page level. They are assigned to the page: whenever an address uses a particular page table entry, the access permissions belonging to that page table entry will be looked at. The cache and write buffer configuration for the page is also used here. This means that the access permissions and the cache and write buffer behaviour are controlled at the granularity of a page. This provides finer control over the use of memory.
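Since each page table entry is a single 4-byte word whose bits carry these per-page attributes alongside the frame address, a decoding sketch might look like this. Note the bit positions below are invented for illustration only; the real ARM first- and second-level descriptor formats lay the fields out differently and are documented in the architecture manual.

```python
# Illustrative sketch only: a 32-bit page table entry packs the frame
# base address plus per-page attributes. These bit positions are made
# up for the example; real ARM descriptors differ.

FRAME_MASK = 0xFFFFF000     # upper 20 bits: 4 KB-aligned frame base
AP_SHIFT, AP_MASK = 4, 0x3  # assumed: 2 access-permission bits
C_BIT = 1 << 3              # assumed: cacheable bit
B_BIT = 1 << 2              # assumed: bufferable (write buffer) bit

def decode_pte(pte):
    return {
        "frame_base": pte & FRAME_MASK,
        "access":     (pte >> AP_SHIFT) & AP_MASK,
        "cacheable":  bool(pte & C_BIT),
        "bufferable": bool(pte & B_BIT),
    }

pte = 0x00010000 | (0x2 << AP_SHIFT) | C_BIT  # frame 0x10000, AP=2, cacheable
print(decode_pte(pte))
```

The point is simply that one word per page is enough to carry the frame number and all the attribute decisions mentioned above.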
So, once we say that a page size of 1 KB or 4 KB can be there, we have finer control: how a particular page is accessed and what access permissions are to be given are all controlled at the page level. Regions in the MMU are created in software by grouping blocks of pages: a region can be a group of pages, and we can have different access permissions at each page table entry. So, you see here a region layout: we have regions 1, 2, and 3, and the individual regions have different pages. Access permissions could differ per page, but the pages can all fall under one region. The stack can be at a different location, and each region will have its own page table entries; the page sizes can also be configured based on the MMU configuration that I will be talking about. You see here that flash is used for the region which is code — "text", by the way, is another name used in the literature to mean code. Code can be in flash; the data generally cannot stay in flash, because writing to flash is very time-consuming, and the number of erases on flash is also limited — flashing contents into it takes a lot of time, while reading is fine. So while running the program you cannot use flash for storing writable data, but code can be kept there; that is why the code region points to this flash location, and code runs from there. In some systems we might even copy the code into RAM and run it from there for faster access; in this case it is shown as if the code is mapped to the flash.
So, flash is part of the physical memory space in this situation, and this other part is RAM. The stack and data are mapped to this location in the physical RAM — it could be a DRAM in the system. This is how the translation happens. Now, multi-tasking with the MMU — let me explain this; we already talked about it when we discussed the MPU. Each task assumes the same starting address in memory; it could even assume the same size — it does not matter — but each task assumes it is sitting at this location. If you look at the actual memory, though, the tasks are placed at different locations in physical memory. At this moment task 1 is running: the translation takes its address to this entry, generates the physical memory address, and the access happens. When this task is running on the processor, the OS takes care of making sure that this task's entries are the ones used for translation. Apart from this, you can see that the grey portion — the other tasks' areas — gets protected, like the background regions we talked about for the MPU, so I do not want to repeat those things. Because the page table entries of the running task carry its access permissions and region values, the other tasks' areas will not be accessible while this task is running. Similarly, when task 3 is running, it will access only its own area and will be blocked from accessing the other tasks' areas. So this one is active and the grey ones are dormant — this was explained earlier, so I am just refreshing our memory here. A different set of page tables is activated to execute each task, because in their own view all the tasks' virtual addresses overlap each other.
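The "same virtual address, different physical placement" idea can be modelled with a toy one-region map per task. The struct, addresses and sizes here are all hypothetical; the point is only that switching the active table re-routes an identical virtual address:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy per-task mapping: each task is linked at the same virtual
   base but owns a different physical region. */
typedef struct { uint32_t vbase, pbase, size; } toy_map_t;

static toy_map_t task1  = { 0x00400000u, 0x08000000u, 0x100000u };
static toy_map_t task3  = { 0x00400000u, 0x08300000u, 0x100000u };
static toy_map_t *active;   /* set by the "OS" on a task switch */

/* Translate under the active task's map. Addresses outside the
   task's own region (the dormant grey areas) return 0: blocked. */
uint32_t translate(uint32_t vaddr)
{
    if (active == NULL ||
        vaddr <  active->vbase ||
        vaddr >= active->vbase + active->size)
        return 0;
    return active->pbase + (vaddr - active->vbase);
}
```

Task 1 and task 3 both fetch from virtual 0x00401234, yet end up in different physical RAM, and neither can reach the other's area.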
But when a particular task is running, only its own region is visible; the others effectively do not exist as far as that task is concerned — it does not see their existence; only the OS knows that they are all there in the system. That is multi-tasking with the MMU. Now, please remember: page tables reside in main memory, but they are not mapped through the MMU hardware. Why is that? The page tables can reside in main memory, yet they are not translated by the MMU. Can we think of a reason? Let me explain. The processor is here — and for ARM, the MMU is controlled through co-processor registers — the MMU is here, and then we have main memory. What I am saying is that the page tables reside in main memory. The page table, you know, holds the information about how to translate an address; there are multiple page table entries, and they sit in main memory because they cannot live in registers — the number of entries is too large, so they have to be in main memory. Now, these entries hold the translation information. But if, even to access the page table, I needed the help of the MMU, it would be like expecting something that is not possible: we want to read these values in order to configure the MMU, but at that point the MMU is raw — it is not even configured. Assume it is like a chip sitting there without any information about what to do when a particular address comes: it does not know what to generate; it is still uninitialised.
Now, to initialise it I need the page table entries, because the processor does not by itself know how to configure the MMU — it has to execute instructions, the OS code. The OS will already have generated the PTEs, and it knows where the page tables are residing in memory, so it can say: here are the entries, from this address, now configure the MMU accordingly. But if, to do that, we needed the MMU to reach those entries, it would be impossible to configure the MMU. That is why the page tables are always mapped from virtual space to main memory with a one-to-one correspondence. That means a virtual address of, say, 1000 translates to 1000 in main memory — there is no translation of this address. And if there is no translation, then we do not need the MMU for that access, so we are free to access the page table directly. Very good: now I am able to access the page table without the help of the MMU, and using the page table contents I can configure the MMU, after which it will translate generated addresses properly. So we need this information to configure the MMU, and therefore we cannot use the MMU to fetch it — that is why the page tables are never mapped through the MMU hardware. It is like saying: since there is no difference between the virtual address and the physical address of the page table — they are the same — we can short-circuit the MMU; we do not need it there. As soon as those addresses are generated, they go to memory directly without any translation. That is also part of the default behaviour of the system.
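The flat (one-to-one) mapping of the page-table area can itself be expressed as level-1 section entries whose base equals their own index. A minimal sketch, assuming the classic ARM section-descriptor shape (base in bits [31:20], type bits [1:0] = 0b10); the region bounds are hypothetical:

```c
#include <stdint.h>

#define SECTION_SHIFT 20        /* 1 MB sections        */
#define L1_ENTRIES    4096      /* 4 GB / 1 MB          */

static uint32_t l1_table[L1_ENTRIES];

/* Identity-map every 1 MB section covering [start, end): the
   section base equals index << 20, so VA == PA there. The OS and
   the page tables themselves live in such a region, readable with
   or without the MMU enabled. */
void identity_map_sections(uint32_t start, uint32_t end)
{
    for (uint32_t s = start >> SECTION_SHIFT;
         s <= (end - 1) >> SECTION_SHIFT; s++) {
        l1_table[s] = ((uint32_t)s << SECTION_SHIFT) | 0x2u;
    }
}
```

With such entries installed, turning the MMU on does not move the tables: the translated address and the untranslated address coincide, which is exactly what breaks the chicken-and-egg problem described above.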
So, a multi-tasking system is created using a separate set of page tables for each task, each mapping the virtual memory space onto that task's portion of system memory — I told you this already. To activate a task, the set of page tables of that specific task is made the one used by the MMU. Whenever task 1 is running, its page table will be in use; when task 2 is running, another page table will be in use, and the MMU will be configured accordingly by the OS. The OS itself resides in an area where the MMU need not translate, because that area is mapped one-to-one from virtual to physical — including the page table entries, everything there is one-to-one. That is why the OS runs in this special area, where we do not need the MMU to access it. The other, inactive sets of page tables belong to the dormant tasks. Now, what are the steps in a context switch? Again: there is the active task context to be saved, and there is the incoming task's page table to be installed — plus the caches, and the caches part is very important. We must possibly clean the data cache if a write-back policy is used. As I mentioned in the last class, flush is different from clean for a cache. Flush means the cache entries are made invalid. But if a data cache is there and dirty bits are set, those entries hold modifications not yet in memory, so before making them invalid we must make sure they are written into main memory — that is cleaning. For the instruction cache we do not have to write back: the instructions in the cache are read-only copies, so we can just flush the entries. Similarly the TLB entries — the translation lookaside buffer entries — are basically page table entries: the page table entries are already there in main memory; only for speed of access have we copied them into the TLB and been using them from there.
So, these PTE copies are not modified — there is no dirty bit associated with the TLB entries, because a TLB entry is not something we modify; it just tells us what translation is to be done. Since page table entries are not modified while they sit in the TLB, we do not have to write them back into main memory; we can just flush them. That is why we say the TLB has to be flushed, the data cache has to be cleaned, and the instruction cache has to be flushed — normally we do not write back instructions. I hope this is clear to you; let me change the colour, since we have used this colour already. So: flush the TLB entries — we do not clean the TLB, we flush it — whereas the data cache has to be cleaned, that is, written back into main memory; for the instruction cache we do not write back, we just make the entries in the cache invalid. Next, configure the MMU to use the new page tables, translating the virtual memory execution area and data of the awakening process — whichever new process is coming into the active context, the MMU should be using that task's page table entries. Then restore the context of the awakening task. Restoring the context means restoring the register values that were there — the values of R13, R14, R15 and all the rest have to be restored. Where are they restored from? From the OS: the OS maintains a TCB, the task control block. It records the register values at the moment the task went out, keeps them in the task control block, and writes them back into the registers; then, after the cleaning and flushing, the new process comes onto the processor and starts running.
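The ordering of those steps can be captured in a toy sketch. The hardware operations here (`clean_dcache`, `flush_icache`, `flush_tlb`, `set_ttb`, and the TCB save/restore) are hypothetical stand-ins that merely record their names, so the sequence itself can be checked; on real ARM hardware each would be a CP15 operation or assembly stub:

```c
#include <string.h>

static const char *log_steps[8];
static int n_steps;
static void step(const char *name) { log_steps[n_steps++] = name; }

static void save_context(void)    { step("save_context"); }    /* regs -> old task's TCB */
static void clean_dcache(void)    { step("clean_dcache"); }    /* write back dirty lines */
static void flush_icache(void)    { step("flush_icache"); }    /* invalidate only        */
static void flush_tlb(void)       { step("flush_tlb"); }       /* PTE copies are clean:
                                                                  flush, never clean     */
static void set_ttb(void)         { step("set_ttb"); }         /* point MMU at the new
                                                                  task's L1 table        */
static void restore_context(void) { step("restore_context"); } /* new task's TCB -> regs */

/* The context-switch sequence as described in the lecture. */
void context_switch(void)
{
    save_context();     /* old task's R0-R15, status into its TCB   */
    clean_dcache();     /* dirty data must reach memory first       */
    flush_icache();     /* instructions are never written back      */
    flush_tlb();        /* stale translations invalidated           */
    set_ttb();          /* MMU now uses the new task's page tables  */
    restore_context();  /* new task resumes at its restored R15     */
}
```

Note the invariant the lecture stresses: cleaning (write-back) appears only for the data cache, while the I-cache and TLB are merely flushed.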
So, this is the job done in a context switch, and as you gain more knowledge of the MMU and caches you will be able to appreciate these points much better. It is very good to know what happens when a context switch happens: once the context of the restored task is in place, whatever value R15 of the new task holds will start generating addresses, and the MMU will translate them faithfully. Very good. Now we have come to the main topic of this session — the MMU page tables — and do not get put off by it. In the ARM approach we have two levels of page tables — only two levels — and I will explain the need for that. The first is called L1 and the second L2; L1 is the master page table. What does the master page table do? Let me tell you. You know that overall there is 4 GB of address space, nothing more than that, virtual or physical — it cannot be more than 4 GB, because 2^32 is the maximum address we can generate, so we cannot access beyond it. Now, if I tell you this 4 GB address space is sliced into pieces of 1 MB each, how many pieces will there be? 4 GB divided by 1 MB. Check the arithmetic: 4 GB = 4 × 2^30 bytes and 1 MB = 2^20 bytes, so the count is 4 × 2^10 = 4096 — the entries will run from 0 to 4095. Conversely, 4096 × 1 MB = 2^32 bytes = 4 GB. So if I decide to slice the 4 GB of virtual memory into 1 MB pieces — and if I call each such piece a section — then I will have 4096 entries; for simplicity we say 4K entries.
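The section arithmetic above can be checked in a few lines of plain C:

```c
#include <stdint.h>

/* 4 GB of address space carved into 1 MB sections, one 4-byte
   L1 entry per section: how big is the master page table? */
uint64_t l1_entry_count(void)
{
    uint64_t addr_space = 1ull << 32;   /* 2^32 bytes = 4 GB  */
    uint64_t section    = 1ull << 20;   /* 2^20 bytes = 1 MB  */
    return addr_space / section;        /* 4096 sections      */
}

uint64_t l1_table_bytes(void)
{
    return l1_entry_count() * 4;        /* 4 bytes per entry  */
}
```

4096 entries of 4 bytes each comes to 16 KB — the figure the next part of the lecture arrives at.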
Sorry — it is 4096; that many entries will be there. Now, how many entries does the master page table have? 4K entries, one for each 1 MB section. So how much space do we need for the page table? I told you each page table entry holds 4 bytes of content — 4 bytes only. Since I slice this 4 GB space into 1 MB sections and keep one entry for every 1 MB in the page table, there will be 4096 (4K) entries of 4 bytes each, so in total it occupies 16 kilobytes of space. These 16 kilobytes for the master page table have to be kept in memory; there is nothing less than this. This much space has to be reserved — you cannot have a system with MMU support without keeping these entries. So this is the space allocated, and we have the master table in memory, one-to-one mapped directly: to access the table itself we do not use the MMU, but after reading an entry from it the MMU will be used, based on what the entry says. An L1 entry can either describe the whole section, or it can point to another table — one more table, the L2 table; let us not worry about it for the moment. This second level exists because, within one 1 MB section, I may want to give different access permissions. In that case I have to split that 1 MB into finer pages, and if I want different access permissions then I need different page table entries — one for each page. So the L1 entry will point to an L2 table whose entries map, say, 1 kilobyte or 4 kilobyte pages within that 1 MB; one part of that
1 MB will be governed by this L2 entry and the next part by the next entry, so there will be an entry for each page within the 1 MB region. Let me summarise. Physically or in virtual space, only 4 GB of space is possible. That virtual memory is split into pieces of 1 MB each, so there will be 4096 sections of 1 MB each. We need one table — called the master table — which will translate where each particular 1 MB piece of the virtual space sits in main memory, so we need one entry for each of them. But please remember, not all of these sections are physically present: we may assume all 4096 one-MB sections exist in the virtual memory, but the physical memory may support only, say, 10 such sections. It does not matter — as I told you, we can always bring in content from secondary memory, so we can manage this many entries. But the master table itself must exist, occupying 16 kilobytes of space, because each entry needs 4 bytes: 4K entries into 4 bytes is 16 kilobytes. So this page table needs to be physically present in memory, and it is not mapped through the MMU — it is one-to-one mapped, so wherever this page table is, it can be accessed without using the MMU. I hope all my words are well understood, but let me reinforce it once I show you one more example. Now, page tables in the MMU — let me summarise. Level 1: let us talk only about level 1 now; do not bother about level 2 for the moment. So, I need to