Welcome to today's lecture; we shall continue our discussion on virtual memory. In the last lecture, I discussed various aspects of virtual memory: virtual-to-physical address mapping techniques, what a page fault is and how it is handled, the basic concept of page replacement, and the various entries of a page table. Today, we shall focus on four important aspects of virtual memory: first, the translation lookaside buffer (TLB), which is used to enhance the performance of virtual memory; second, memory management, that is, what we really mean by memory management; third, page table organization — apart from the straightforward linear organization of the page table, there are other ways of organizing it, which I shall discuss; and fourth, the various replacement algorithms commonly used in the context of virtual memory. Let us start with the translation lookaside buffer. As we know, the page table is stored in main memory, and if the page table is stored in main memory, each memory access requires two memory reads, which takes a long time: first you have to read the page frame number from the page table, and only then can you use that frame number to find where the page is physically located and access the actual data. Obviously this is slow because it involves several memory accesses. How can it be speeded up? One simple way is to use a small cache memory; let us see how it can be done.
Suppose this is your processor, which generates the virtual page number (VPN), and here is your page table. As you know, a page table entry has several fields: a valid bit, a dirty bit, various access (protection) bits, and, in addition, the physical frame number. The table is indexed by the virtual page number, and the frame number stored in the entry points to the physical memory. In addition to physical memory we have the hard disk: whenever a page is not present in physical memory, it is available on the disk, as we know. If the valid bit is 1, the page is present in physical memory; if it is 0, the entry points to the disk, as I discussed in detail in my last lecture. Now, since the page table is stored in main memory, instead of always going there we keep a small cache memory, known as the translation lookaside buffer (TLB). The TLB has fields similar to those of a page table entry — the valid bit, the dirty bit, and so on — and in addition it has two more fields: a tag field and the physical page address, that is, the frame number.
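The page table entry fields just listed can be pictured as bit fields packed into one word. Here is a minimal sketch; the specific bit positions are my own illustrative choice, not those of any particular processor:

```python
# Hypothetical 32-bit page table entry layout (bit positions are an
# assumption for illustration): bit 31 = valid, bit 30 = dirty,
# bits 29-28 = access/protection bits, bits 19-0 = page frame number.
def decode_pte(pte):
    """Unpack the fields of a packed page table entry."""
    return {
        "valid":  (pte >> 31) & 0x1,
        "dirty":  (pte >> 30) & 0x1,
        "access": (pte >> 28) & 0x3,
        "frame":  pte & 0xFFFFF,
    }
```

For example, the entry `0xE000001A` decodes to a valid, dirty page with access bits `0b10` mapped to frame `0x1A`.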
The TLB is indexed by the same virtual page number; however, it is checked first, and if the entry is available there, the translation is obtained from the TLB, so the access time is reduced: you do not have to read the page table in main memory. The TLB thus holds a small portion of the page table — those entries that are currently being used. Since it is usually a fully associative cache, the lookup is fast: if the entry is present, you get it from the TLB. However, since only a subset of the page table is stored there, you may have to fall back on the page table, incurring the usual delay. And of course, if the page is not present in main memory at all, the data must be fetched from the disk. How the translation lookaside buffer is used is illustrated with the help of this diagram. The virtual address has two fields, the page number and the offset; the page number is used to index both the TLB and the page table, which is stored in main memory. The TLB is a cache present as part of the processor, so it is much faster. It is searched in parallel — indicating that it is a fully associative cache — and if the entry is not present we call it a TLB miss, in which case the page table is indexed to provide the frame number. So the frame number may come from the TLB on a TLB hit, or from the page table on a TLB miss.
From the page table you get the frame number, which tells you where the page resides in physical memory, and the offset is concatenated with it to form the real address of main memory: this is the offset part, and this is where the frame number starts. Now, the entry may not be valid in the page table either; in that case a page fault occurs, and the page fault forces you to fetch the information from secondary memory. From the secondary memory you load the page, and of course you have to update the various tables. So the page table is walked for every TLB miss, but not every TLB miss indicates a page fault: if a TLB miss occurs, the entry may still be present in the page table, which is quite natural. This is how the translation lookaside buffer works. Let me show you another diagram where the TLB and the cache are put together. The processor generates a virtual address, giving the page number and offset. First the TLB is checked; on a TLB hit you get the page frame number, and that frame number together with the offset is used to generate the physical address. Then you have the cache memory, where instructions and data are stored; as you already know, a cache entry has a tag field, and of course it also has a valid bit and other bits. The real address is used to access the cache: on a cache hit you get the value from the cache memory where the instruction or data is stored, and on a miss you have to fetch it from main memory.
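The lookup path described above — TLB first, page table on a TLB miss, page fault if neither has the entry — can be sketched as follows. The page size, table contents, and frame numbers here are made-up illustrative values:

```python
PAGE_SIZE = 4096          # assume 4 KB pages -> 12-bit offset
OFFSET_BITS = 12

# Hypothetical contents: virtual page number -> page frame number
tlb = {0x00042: 0x1A}                          # small, fully associative in hardware
page_table = {0x00042: 0x1A, 0x00043: 0x2B}    # resides in main memory

def translate(virtual_addr):
    """Return (physical_addr, source of translation); raise on a page fault."""
    vpn = virtual_addr >> OFFSET_BITS
    offset = virtual_addr & (PAGE_SIZE - 1)
    if vpn in tlb:                     # TLB hit: one fast on-chip lookup
        return (tlb[vpn] << OFFSET_BITS) | offset, "TLB"
    if vpn in page_table:              # TLB miss: walk the page table in memory
        tlb[vpn] = page_table[vpn]     # refill the TLB for next time
        return (page_table[vpn] << OFFSET_BITS) | offset, "page table"
    raise LookupError("page fault: fetch page from disk")
```

For instance, `translate(0x42ABC)` hits in the TLB and yields physical address `0x1AABC`, while `translate(0x43123)` misses the TLB, finds the entry in the page table, and refills the TLB.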
This cache memory corresponds to the data or instruction cache — it may be either of the two — while the other cache corresponds to the TLB, where a part of the page table is stored. So we have two cache memories as part of the CPU, one the TLB and the other the data/instruction cache; both are usually resident on-chip, as part of the CPU. This is how the TLB and cache memory operate together. Now, this flowchart shows how page faults and TLB misses are handled. First the virtual address is generated by the processor, then we perform the TLB access. On a TLB hit we generate the physical address; a TLB miss raises an exception, in which case, as we have already discussed, the page table is read from main memory, and this is done by software — by the operating system — not by hardware. Once the physical address is available, the operation can be a read or a write. If it is a write, the write-access bits are checked: as I already mentioned, there are protection bits that may allow read-only access or may allow read-write access. Depending on those flag bits, it is decided whether the write access is allowed; if the write-access bit is on — that is, the write is allowed — it will try to write the data to the cache, and of course the data may or may not be present in the cache.
In that case the cache is checked: if the block is present in the cache, the data is written into the cache, the dirty bit is updated, and the data and address are put into the write buffer. As we have already discussed, instead of using a pure write-through to memory, the data is written into the write buffer and subsequently written to main memory. If, on the other hand, the block is not in the cache, a cache-miss stall occurs while the block is read in. On the other side of the flowchart, if it is not a write operation, the processor tries to read the data from the cache; on a cache hit the data — instruction or data, whatever it may be — is delivered to the CPU, and on a miss a cache-miss stall is generated while the block is read from main memory, after which the cache access is retried. So this is the flowchart for handling page faults and TLB misses, all shown together. Now let us consider the different cases: the TLB can hit or miss, the page table entry may be present or not (hit or miss), and the cache can hit or miss. The various situations that can occur are shown in this table. For example, can a TLB hit, page table hit, and cache miss happen together? Yes — this is possible because on a TLB hit the page table is never actually checked, yet the translation is valid, and the data may still be absent from the cache. Next, consider TLB miss, page table hit, and cache hit.
Here the TLB misses, but the entry is found in the page table: the entry is not present in the TLB, but it is available in the page table. After the retry — that is, after the TLB search fails, the page table is searched and the address is obtained from it — the data may be found in the cache, which is what the cache hit signifies. So this is also possible. Then TLB miss, page table hit, cache miss: the TLB misses, the entry is found in the page table after retry, but the data misses in the cache. This can also happen. Now, all three misses — TLB miss, page table miss, and cache miss — can occur together, and in that case a page fault results: the TLB miss is followed by a page fault, and after retry the data must miss in the cache, because the data was never brought from disk storage into main memory or the cache. Now consider the other three cases. TLB hit with page table miss and cache miss: this cannot happen, because you cannot have a translation in the TLB if the page is not present in memory. Moreover, by the inclusion property we know that if the data is not present in main memory, there is no possibility that it is present in the cache. Similarly, TLB hit, page table miss, cache hit is also impossible: there cannot be a translation in the TLB when the page is not in memory, and if it is not in memory, how can it be in the cache?
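The whole table of combinations follows from just two inclusion constraints, which can be checked with a small sketch of my own (not from the lecture slides): a TLB entry is a copy of a page table entry, so a TLB hit implies the page is present; and cached data must have come through main memory, so a cache hit implies the page is present.

```python
from itertools import product

def possible(tlb_hit, pt_hit, cache_hit):
    """Is this (TLB, page table, cache) hit/miss combination possible?"""
    if tlb_hit and not pt_hit:      # no translation in TLB for an absent page
        return False
    if cache_hit and not pt_hit:    # inclusion: cache data requires page in memory
        return False
    return True

# Enumerate all eight combinations
table = {(t, p, c): possible(t, p, c)
         for t, p, c in product([True, False], repeat=3)}
```

Running this marks exactly the three combinations with a page table miss but a TLB hit or cache hit as impossible, matching the table discussed.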
The last situation — TLB miss, page table miss, but cache hit — is likewise impossible: data cannot be allowed in the cache if the page is not in memory, as that would violate the inclusion property. So the various situations, which are possible and which are not, are summarized in this table. Now let us switch to memory management. After discussing the TLB, let us focus on memory management. The basic purpose of memory management is to allow sharing of a single memory by multiple processes. As we already know, in a multiprogramming or multitasking environment, processes of different users are present in memory, which means more than one process is active. In such a situation, to enable the operating system to implement protection, the hardware must provide three basic kinds of support. Number one: two modes of operation, user mode and supervisor mode. What do we really mean by user mode and supervisor mode? The CPU can be either under the control of the operating system, which is known as supervisor mode, or under the control of a user's program, which is known as user mode. When you turn the power on, the CPU starts under the control of the operating system — in supervisor mode, not user mode. So two separate modes are provided. Number two: a portion of the CPU state that the user can read but not write. Some information, such as the page table, can be modified only when the processor is in supervisor mode.
When the processor is in user mode, the page tables cannot be modified — neither the page tables nor the TLB. The reason is that no user should be able to modify those tables and thereby gain control of pages belonging to other users or other processes. This is why a portion of the CPU state can be read but not written: you can read the TLB or page table in user mode, but you cannot modify them. This provision is given in processors that support user and supervisor modes. Third, there must be a mechanism to switch from one mode to the other, and to perform context switching. As I said at the beginning, when you turn the power on, the processor is under the control of the operating system, in supervisor mode, so there must be a way to switch over to user mode. That means the program counter has to be loaded with the address of the user's program, and some other data structures must be modified whenever you perform the switch for a particular user. This mechanism is similar to the context switching that happens in a multiprogramming environment, as you know: the processor works in a time-division-multiplexed manner, and there is a context switch from one user program to another, and so on. This too is supported as part of memory management. These three basic features are provided by the system so that memory management can support controlled sharing as well as protection of processes. Now let us look at page table organization. What we have discussed so far is a single-level table, which we may call direct mapping.
That is, a linear organization of the page table. In the early years, when page tables were very small, the linear page table was maintained entirely in hardware; subsequently, as page table sizes grew, it was moved to main memory, which is the present situation. Then, as we know, to speed up the lookup we use the TLB: the TLB is searched first, and only when a TLB miss occurs do we walk the page table to get the page frame number. So how can we minimize this overhead by a suitable organization of the page table? There are several approaches, which can be divided into two categories. The first is the forward-mapped or hierarchical page table, which is indexed by the virtual page number. The second is the inverse-mapped or inverted page table, which is indexed by the page frame number. First we shall discuss the forward-mapped or hierarchical page table, indexed by the virtual page number. The basic idea is that a large data array can be mapped by a smaller array. Here a two-level hierarchical page table is shown. What is done is that we divide the lookup into several steps: initially there is what is known as the root page table — say, a 4 KB root page table. So instead of a single linear page table, we proceed in two steps: first the root page table, and the root page table points to the other page tables.
So each root entry points to another table; how this is done I shall explain. The second level is called the user page table. In this case, each entry of the 4 KB root page table points to a 4 KB user page table, so together we have 4 MB of user page tables — but arranged in two steps instead of one — and from these you reach the 4 GB user address space. Each page is 4 KB in size, and the 4 GB address space corresponds to 2^20 pages. You may recall that earlier the 32-bit address was divided into two parts, 20 bits and 12 bits: the 12-bit offset and the 20-bit virtual page number (VPN). The 2^20 pages correspond to that 20-bit VPN, and the 4 GB address space follows from it with 4 KB pages. Here we perform the translation in two steps, and how it is done will be clear from the next diagram. In fact, we are not restricted to two levels: you can go to three levels, and the DEC Alpha supports a four-tier page table, which I shall discuss later. How it is done is shown in this particular diagram. The faulting virtual address has 32 bits, now divided into three parts instead of two: the 20-bit virtual page number is split into two 10-bit fields, and the 12-bit page offset remains. With the help of the higher-order 10 bits, the root page table is searched; the base of the root page table is stored in a physical register — that is, the processor provides a hardware register from which the base address of the root page table is available.
Starting from that register, the root page table is indexed with those 10 bits, which selects a page table entry from the root page table. That entry provides the base physical address of a user page table; with respect to that base, the second 10-bit index field is used to get the page frame number of the user page from the user page table. The frame number is then concatenated with the 12-bit page offset to form the complete physical address, which is used to access the physical page. The way this is searched is known as the top-down access method, and as you can see it requires three memory accesses: one to read the entry from the root page table, a second to read the user page table, and a third to fetch the actual data from physical memory. Now, you may be asking: what is the benefit here? The benefit is that you are not storing the entire page table in main memory; you store only the root page table and those second-level page tables that have been recently accessed, so the total size of the page tables resident in main memory is significantly reduced. This is the top-down traversal of the hierarchical page table organization. Another approach exists, known as bottom-up traversal. We have seen that top-down traversal requires as many memory references as there are table levels — three accesses in the previous case — and this can be reduced with bottom-up traversal: sometimes, though not always, a single memory access suffices. So what is done?
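The 10/10/12 split and the two table lookups just described can be sketched as follows. The table contents and frame numbers are hypothetical; the third memory access (the actual data fetch) is not modeled:

```python
OFFSET_BITS = 12
INDEX_BITS = 10           # 10 + 10 + 12 = 32-bit virtual address

# Hypothetical two-level tables: root entries point to user page tables,
# and user page table entries hold page frame numbers.
user_pt_7 = {5: 0x3C}     # user-table index 5 -> frame 0x3C
root_pt = {7: user_pt_7}  # root index 7 -> that user page table

def walk(vaddr):
    """Top-down walk: root index, then user-table index, then offset."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    vpn = vaddr >> OFFSET_BITS
    root_idx = vpn >> INDEX_BITS               # high 10 bits of the VPN
    user_idx = vpn & ((1 << INDEX_BITS) - 1)   # low 10 bits of the VPN
    user_pt = root_pt[root_idx]                # memory access 1: root page table
    frame = user_pt[user_idx]                  # memory access 2: user page table
    return (frame << OFFSET_BITS) | offset     # access 3 would fetch the data
```

For example, the virtual address `(7 << 22) | (5 << 12) | 0xABC` walks root entry 7, user entry 5, and yields physical address `0x3CABC`.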
The top 20 bits of the virtual address are used as an offset into the 4 MB user page table, as shown. So in step one, the 20-bit virtual page number serves as an offset into the 4 MB user page table (which is itself mapped in virtual space); the low-order bits of the computed address must be 0 because the offset is specified in units of page table entries. If this access succeeds, then with just one memory access you can generate the physical address. However, step one may not always succeed, and in that case you go to step two: the operating system generates a second address when the access to the virtual page table entry fails. This fallback is somewhat similar to the top-down approach: the top 10 bits of the virtual page number are used along with the base address of the root page table, obtained by the operating system, and that is how the physical address is generated in the bottom-up traversal. So it involves at most two steps, but in most situations a single step is sufficient to generate the physical address, and that is why the bottom-up approach is more convenient and faster than the top-down approach. Now, here are the advantages of the two-level page table. First, only the page directory and the active page tables need to be in main memory. As I already mentioned, with a single linear organization the entire page table must be stored; with the two-level hierarchical technique you store only the page directory and the active page tables, those currently being accessed, and page tables can be created on demand.
There is no need to keep the entire page table resident: whenever the CPU generates an address, the required page tables are created on demand in main memory. Moreover, each page table can be one page in size, giving a uniform paging mechanism for both virtual memory management and actual memory contents. Another advantage is more sophisticated protection: each process can have its own page directory, so you do not have to check the owner of a page table against a process ID, and it also facilitates controlled sharing of pages — one of the objectives of virtual memory, which is very much satisfied by this two-level scheme. This concept can be scaled up: you can go to a three-level scheme and keep the page directory small. For example, the DEC Alpha processor has 8 KB pages, and one page can hold 1 K directory entries. With one level of hierarchy — 1 K page table entries, each mapping an 8 KB page — you can map 8 MB. With a two-level hierarchy — a 1 K-entry page directory times 1 K-entry page tables — you can map 8 GB; but in practice very few of these tables are actually stored in main memory, because unlike a flat table, where most entries sit empty, here unused tables simply need not exist. So an 8 GB page table can be maintained with only a small page directory. Going to three levels — a 1 K page directory, 1 K intermediate directories, and 1 K page tables — you can map 8 TB, which corresponds to a 43-bit virtual address space with 8 KB pages. This is what is used by the DEC Alpha. So you see that the hierarchical page table organization is very advantageous.
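The DEC Alpha arithmetic above can be checked directly — 1 K entries per 8 KB table (assuming 8-byte entries), multiplied through each level:

```python
PAGE = 8 * 1024     # 8 KB pages on the DEC Alpha
ENTRIES = 1024      # 1 K entries fit in one 8 KB table (8-byte PTEs assumed)

one_level   = ENTRIES * PAGE         # 1 K pages of 8 KB each  -> 8 MB
two_level   = ENTRIES**2 * PAGE      # 1 K directories x 1 K   -> 8 GB
three_level = ENTRIES**3 * PAGE      # three multiplications   -> 8 TB

# 8 TB = 2^43 bytes, i.e. a 43-bit virtual address space
bits = three_level.bit_length() - 1
```

Each added level multiplies the mappable space by 1024 while the resident directory stays one page in size.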
So you can use a two-level or three-level hierarchy. Now there is another technique, known as the inverted page table. As we have seen, the size of the page table is very large, but very few of the pages are actually present in main memory. Can we arrange things so that the table contains entries only for those pages that are present in main memory — that is, only for the physical page frames in use, instead of all virtual pages? That idea leads to what is known as the inverted page table. Obviously, the size of such a table will be quite small. The inverted page table has one entry for every page frame present in main memory, and it is called inverted because it indexes page table entries by page frame number rather than by virtual page number: earlier, indexing was done by the virtual page number, but now the organization follows the physical frames. So it is compact in size and a good candidate for hardware-managed mechanisms: since the table is small, there is scope for handling it in hardware rather than in software, as is commonly done with a conventional page table. It also leads to very few memory references, as we shall see. And of course, a collision-chain mechanism is required for multiple mappings, which I shall explain shortly. First, let us have a look at the inverted page table. Here you have the virtual address.
A hash function is applied to the virtual page number, and the hash value is used to index the hardware page table, whose entries hold the page frame numbers. As I have already mentioned, this table contains entries only for pages that are present in main memory: these are the page frame numbers of the frames resident in main memory. The question naturally arises: since you are hashing, the same hash value can correspond to multiple entries — multiple virtual pages may map to the same index. How is that resolved? In each entry, the stored virtual page number is compared against the one being searched for; if it does not correspond to what is being looked up, a pointer in the entry is followed to another entry — this is known as the collision-chain mechanism. In this way the chain is followed from entry to entry until the match is found, and that entry's frame number is used to generate the physical address. In this case, as you can see, the total number of entries in the table equals the total number of page frames in main memory, and as a consequence it is much smaller; that is the reason why the inverted page table is very suitable and preferable for storing page table information.
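The hashing and collision chain just described can be sketched as follows. The table size, hash function, and chaining policy here are simplified illustrative choices (a real design would also keep per-entry protection bits and a smarter free-slot policy):

```python
NUM_FRAMES = 8            # hypothetical: 8 frames of physical memory
OFFSET_BITS = 12

# One slot per page frame; each slot holds (vpn_tag, frame, next_index) or None.
table = [None] * NUM_FRAMES

def h(vpn):
    return vpn % NUM_FRAMES           # toy hash function

def insert(vpn, frame):
    """Install a mapping, chaining to a free slot on a hash collision."""
    idx = h(vpn)
    if table[idx] is None:
        table[idx] = (vpn, frame, None)
        return
    while table[idx][2] is not None:  # walk to the end of the collision chain
        idx = table[idx][2]
    free = table.index(None)          # take any free slot for the new entry
    table[free] = (vpn, frame, None)
    table[idx] = (table[idx][0], table[idx][1], free)  # link it into the chain

def lookup(vaddr):
    """Hash the VPN, compare tags along the chain, build the physical address."""
    vpn = vaddr >> OFFSET_BITS
    idx = h(vpn)
    while idx is not None and table[idx] is not None:
        tag, frame, nxt = table[idx]
        if tag == vpn:                # tag match: translation found
            return (frame << OFFSET_BITS) | (vaddr & 0xFFF)
        idx = nxt                     # follow the collision chain
    raise LookupError("page fault")
```

With `NUM_FRAMES = 8`, virtual pages 3 and 11 hash to the same slot; the second insertion is chained, and both lookups still resolve correctly.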
After discussing the inverted page table, we now focus on another technique, called segmentation. So far we have discussed paging: a technique in which virtual memory is divided into a large number of pages, physical memory is likewise divided into page frames, and a user process, consisting of many pages, has those pages scattered through main memory — each page placed wherever a frame happens to be available at replacement time. So in physical memory the pages are stored in a scattered manner, with no correspondence to the structure of the user's program: the user does not know which pages correspond to which parts of the program, and the program does not occupy contiguous memory. This creates a gap between the user's view and the way storage is actually arranged. An alternative is segmentation. A segment is a set of logically related instructions or data elements, of variable size, associated with a given name — that name may correspond, for example, to a user's program. With segmentation, main memory can hold one segment for one user and another segment, of a different size, for another user: this is segment 1 (S1) for user 1, this is segment 2 (S2) for user 2, and each segment occupies a contiguous region of memory.
The advantage in this case is that it simplifies the handling of growing data structures — you do not have to maintain a large page table — and it allows programs to be altered and recompiled independently: since you know the starting address and the length, an entire program can be altered and recompiled independently whenever necessary, which you cannot easily do with the paging technique. Segmentation also lends itself to sharing among processes. So paging gives you efficient memory management, while segmentation gives you more user convenience. How can we combine the advantages of both — the best of both worlds — that is, the user convenience of segmentation together with the good memory management achievable with the paging scheme? We can combine segmentation with paging, and I shall explain briefly how it is done. This slide shows the segment table: one entry corresponds to each segment present in main memory. Just like the page table, we must also maintain a segment table; however, its size is quite small, because it essentially corresponds to the number of segments of the processes running in the system. Each entry contains the length of the segment, a bit to indicate whether the segment is already in main memory, and another bit to indicate whether the segment has been modified since it was loaded into main memory.
So, here also the various segments are originally created on the disk memory; they are stored on the disk, and from the disk they are brought to the main memory whenever they are used by the processor during execution. For that purpose you require a segment table, and the way it is accessed is explained here. The processor generates a virtual address providing a segment number and an offset. The segment table will have these entries: first of all the segment base, then the length of the segment, and then two flag bits, p and m. The p bit indicates whether a particular segment is present in the main memory, that is, whether it has been transferred from the hard disk to the main memory or not, and the m bit indicates whether it has been modified after being transferred; it is very similar to the dirty bit used in the paging scheme. The way it works is shown in this diagram. The processor generates a virtual address consisting of a segment number and an offset. The segment number is applied to the segment table: a segment table pointer, a hardware register, holds the base address of the segment table, and together with the segment number it is used to index into the segment table. The segment table, as I have already shown, has different fields, particularly the base and the length. With the help of the base and the offset d, which comes from the virtual address generated by the processor, the address is generated, and that address corresponds to the main memory address.
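The lookup just described can be sketched as follows. This is a minimal illustration, assuming a segment table whose entries hold (base, length, p, m); the table contents are invented for the example.

```python
# Sketch of segmentation address translation (illustrative data).
# Each segment table entry: (base, length, p, m), where p = present bit
# and m = modified bit, analogous to the dirty bit in paging.
SEGMENT_TABLE = [
    (0x1000, 0x0400, 1, 0),   # segment 0: base 0x1000, length 0x400, present
    (0x5000, 0x0200, 0, 0),   # segment 1: not present in main memory
]

def translate(segment_number, d):
    """Return the physical address for virtual address (segment_number, d)."""
    base, length, present, modified = SEGMENT_TABLE[segment_number]
    if not present:
        raise RuntimeError("segment fault: bring segment in from disk")
    if d >= length:
        raise RuntimeError("length violation: offset exceeds segment size")
    return base + d               # segment is contiguous, so base + offset

print(hex(translate(0, 0x10)))    # 0x1010
```

The length check here is the violation test mentioned below: an offset at or beyond the segment length is rejected before any memory access.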
So, as you can see, this is the segment stored in the main memory; as I told you, the entire segment is stored in contiguous memory, and d is the offset with respect to the base address. This is the size of the segment, the length, which is provided in the segment table entry. So length violations can be checked: if the offset exceeds the length of the segment, a violation is detected. From the main memory you can then get the data or instruction in this way. This is how address translation is done in the segmentation scheme. Now, you can use combined paging and segmentation. Paging is transparent to the programmer because it is done by the hardware; on the other hand, segmentation is visible to the programmer, as I have already mentioned. Each segment can now be broken into fixed-size pages to get the advantage of paging along with segmentation. So this is the virtual address: here you have got a segment number, a page number, and an offset. Whenever we use segmentation along with paging, we combine the segment number and the page number along with the offset. The segment table entry will have the segment base as usual, the length, and the various control bits that I have already mentioned. And the page table entry will have the frame number, various control bits, and the p and m bits: p stands for the present bit and m corresponds to the modified bit. How this is done is shown in this diagram. This is the virtual address generated by the processor, comprising segment number, page number, and offset. The segment number is used for indexing into the segment table.
So, you can see the segment table pointer and the segment number together are used to index the segment table, and here you get the entry; that entry is then used along with the page number for the purpose of indexing the page table. The page table, as you know, stores the page frame number. That page frame number and the offset are then combined: the offset is concatenated with the page frame number to generate the physical address. And that physical address, as you can see, points into a page frame; here we are showing a page, not a segment. Earlier the entire segment was present here; now you have got a page, and within that page the offset is applied, and you can access the main memory. So you can see we have combined segmentation and paging together in this way, and how the translation occurs is shown with the help of this diagram. In particular, the Pentium II memory address translation mechanism uses this segmentation with paging, and it gives you several alternatives. You can have unsegmented unpaged memory, where the virtual address is the same as the physical address. Or you can have unsegmented paged memory, where memory is used as a paged linear address space; you have got a 32-bit address, so you can have 4 GB of virtual memory. You can have segmented unpaged memory, where memory is considered as a collection of logical address spaces, as is done whenever you use only segmentation. And finally, you can have segmented paged memory, where the combined segmentation and paging is done; the virtual address size can be 46 bits, giving 64 terabytes, while the physical address is 32 bits, giving 4 GB.
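The two-step indexing described above can be sketched as follows. This is an illustrative model only, assuming one page table per segment, a 4 KB page (12-bit offset), and invented table contents; the real hardware path also involves present bits and protection checks, which are omitted here.

```python
# Sketch of combined segmentation + paging (all structures illustrative).
# Virtual address = (segment number, page number, offset).
# The segment table entry leads to that segment's page table; the page
# table entry gives the frame number, which is concatenated with the offset.
PAGE_SIZE = 4096  # 4 KB pages, i.e. a 12-bit offset

PAGE_TABLES = {
    0: [5, 9],        # segment 0: page 0 -> frame 5, page 1 -> frame 9
}
SEGMENT_TABLE = {0: PAGE_TABLES[0]}   # segment entry -> its page table

def translate(seg, page, offset):
    page_table = SEGMENT_TABLE[seg]       # index the segment table
    frame = page_table[page]              # index that segment's page table
    return frame * PAGE_SIZE + offset     # frame number concatenated with offset

print(translate(0, 1, 0x2A))  # frame 9 * 4096 + 42 = 36906
```

Note the final address falls inside a single page frame, not a whole segment, exactly as the diagram shows.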
So, you can see the virtual address and physical address sizes differ whenever we use segmentation and paging in this combined manner. These different alternatives, this flexibility, are provided in the Pentium memory address translation. And how it is done is shown here: first the virtual address is generated, consisting of segment and offset, and from the segment table you get the segment base, which is combined with the offset to get the linear address. Then you have got the page directory and the page table, and with the help of the page directory you get the page table information. Actually, this has to be combined with the page number; this particular connection is missing in the diagram, so there is an error here: this has to be combined with the page number to get the entry from the page table, and that entry is then concatenated with the offset to get the physical address, and we can access the main memory in this way. So you can see this part corresponds to segmentation, where the segment table is used, and in this part you have got the paging. In the paging part you have got two levels, as I already mentioned, the page directory and the user page table; each of them has 1 K entries. Together they generate the physical address, and you can access it from the main memory. This is how it is done in the Pentium. I think let us stop here today. In my next lecture I shall discuss the fetching policy, and then the various other policies used for page replacement, along with the way the disk memory is organized. So disk memory organization, along with those things, I shall discuss in my next lecture. Thank you.
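The two-level walk from the linear address can be sketched as follows. This is a simplified model, assuming the classic 32-bit split into a 10-bit page directory index, a 10-bit page table index, and a 12-bit offset (hence 1 K entries at each level); the table contents are invented, and real page directory and page table entries carry additional control bits omitted here.

```python
# Sketch of Pentium-style two-level paging (illustrative tables).
# A 32-bit linear address splits 10/10/12: page directory index,
# page table index, and byte offset. Each level holds 1 K (2**10) entries.
PAGE_TABLE = {0x001: 0x00ABC}             # table index -> page frame number
PAGE_DIRECTORY = {0x000: PAGE_TABLE}      # directory index -> page table

def translate(linear):
    dir_idx = (linear >> 22) & 0x3FF      # top 10 bits select the directory entry
    tbl_idx = (linear >> 12) & 0x3FF      # middle 10 bits select the table entry
    offset = linear & 0xFFF               # low 12 bits are the byte offset
    frame = PAGE_DIRECTORY[dir_idx][tbl_idx]
    return (frame << 12) | offset         # frame number concatenated with offset

print(hex(translate(0x00001234)))         # 0xabc234
```

Splitting the 20-bit page number across two levels is what keeps each table down to 1 K entries, so sparsely used address spaces need only a few page tables.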