Hello, and welcome to this session on the direct address mapping technique. This is Dayanan Patil, working as an assistant professor in the CAC department at the Institute of Technology. At the end of this session, we will be able to explain the direct mapping technique and its advantages and disadvantages.

Before going to the direct address mapping technique, what is address mapping? It is the smallest unit of addressed data that can be mapped independently to an area of virtual address space. In the previous video lecture, we saw that while running an application, the program is loaded from main memory into cache memory. We know that the cache memory size is smaller than the main memory size, yet all the blocks of main memory must be mapped into the cache memory. For mapping a main memory address to the cache memory, there are usually three techniques: direct mapping, associative mapping, and set-associative mapping. In this video lecture, we are discussing the direct mapping technique.

Before discussing the technique, let us set up an example. We know that the line size of the cache memory is equal to the block size of the main memory. We usually consider the block size to be four words, where one word is one byte. Since 4 = 2^2, we need two bits to identify a particular word within a block. We saw this in the previous video: the two least significant bits identify the word within the block, and this is called the displacement address, denoted d. The cache memory size is assumed to be 16 words, so each cache line holds four words.
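The field widths described above follow directly from the sizes. A minimal sketch in Python, assuming the parameters of the running example (4-word blocks, a 16-word cache):

```python
import math

block_size_words = 4    # words per block (= words per cache line)
cache_size_words = 16   # total cache size in words

d = int(math.log2(block_size_words))              # displacement bits
num_lines = cache_size_words // block_size_words  # cache lines
r = int(math.log2(num_lines))                     # line (index) bits

print(d, num_lines, r)  # → 2 4 2
```

The same arithmetic works for any power-of-two block and cache size.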
Then 16 words divided by 4 gives four cache lines. Those four cache lines can be addressed using a two-bit binary number, so r = 2 bits. We will see in the next slide what r and d are in detail; for the time being we take r = 2.

The main memory size is assumed to be 64 bytes. Since one word is one byte, there are 64 words in main memory, and 64 = 2^6, so to uniquely identify each of those 64 bytes the total address length of the main memory is 6 bits. Of those 6 bits, the two least significant bits identify the word, and the remaining four most significant bits identify the block in main memory. Since each block contains four bytes, that is four words, the total number of blocks in main memory is 64 / 4 = 16.

Here we can see how the direct mapping takes place. These are the cache lines 0, 1, 2, 3, and these are the 16 blocks of main memory, holding the 64 memory words we assumed in the previous example. Which block of main memory is placed into which cache line is what the mapping technique decides. In the direct mapping technique, the blocks of main memory are mapped in a round-robin manner. There are only four cache lines, 0 to 3, but 16 blocks in main memory. So block 0 is mapped to line 0 of the cache, block 1 to line 1, block 2 to line 2, and block 3 to line 3. Once we reach line 3, we wrap around.
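The round-robin placement just described is simply the block number taken modulo the number of cache lines. A short sketch, assuming the four-line cache of the example:

```python
# Direct-mapping rule: a main-memory block always lands in
# cache line (block_number mod number_of_lines).
NUM_LINES = 4  # four cache lines, as in the example

def cache_line_for(block):
    return block % NUM_LINES

# Blocks 0..15 of the 64-byte main memory:
mapping = {b: cache_line_for(b) for b in range(16)}
print([b for b in range(16) if mapping[b] == 0])  # → [0, 4, 8, 12]
```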
The next block of main memory, block 4, is mapped to line 0 again, block 5 to line 1, and so on, in a round-robin manner: blocks 0 to 3 fill the lines, then 4 to 7, then 8 to 11. In this way the main memory blocks are mapped to cache lines in the direct mapping technique. So blocks 0, 4, 8, and 12 are mapped to line 0; blocks 1, 5, 9, and 13 are mapped to line 1; and so on.

Here we can see one thing. If block 0 already exists in the cache and the process needs to access block 8, 12, or 4, it will overwrite line 0 only. Even if lines 1, 2, and 3 are free, it cannot use them, because each block of main memory can be copied into one particular cache line only. That is the round-robin arrangement.

Now consider the address format. For the previous example, the total length of the address is 6 bits. The two least significant bits identify the word within the block; that is the displacement. The remaining four most significant bits represent the block number: there are 16 blocks in main memory, numbered 0 to 15, so these four bits identify the block. In the direct mapping technique, the next r bits above the displacement identify the cache line; since the cache has four lines, r = 2. (If there were eight lines, r would be 3; it depends on the cache size.) Writing the block numbers in binary, the two least significant bits of the block number give the cache line. So block 0 of main memory is mapped to line 0 of the cache, block 1 to line 1, block 2 to line 2, and block 3 to line 3. Then block 4 is mapped to line 0 again, and block 5 to line 1.
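The overwrite behaviour described above, where blocks 0, 4, 8, and 12 all compete for line 0 while the other lines sit empty, can be sketched with a tiny simulation (a sketch for illustration, not the lecture's own example code):

```python
# Conflict in a direct-mapped cache: blocks that share a line
# evict one another even when every other line is free.
NUM_LINES = 4
lines = [None] * NUM_LINES  # block currently held in each cache line

def access(block):
    line = block % NUM_LINES
    hit = lines[line] == block
    if not hit:
        lines[line] = block  # overwrite whatever was in that line
    return hit

for b in (0, 8, 0, 12):
    print(b, access(b))
# Every access misses, while lines 1, 2, and 3 stay empty throughout.
```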
Block 6 is mapped to line 2, and so on. To summarise the address format: the two least significant bits are the displacement, identifying the word within the block; the next r bits, two in this example, identify which cache line the block maps to; and the remaining most significant bits, again two in this example, form the tag, which identifies which block is currently present in that cache line, whether block 0, block 4, block 8, or block 12. So for this example the format is two bits for the tag, two bits for the line, and two bits for the displacement.

Here we can see the direct mapping in the figure. This is main memory: there are 16 blocks, numbered 0 to 15, each block containing four memory words, covering words 0 to 63. Block 0 contains memory words 0, 1, 2, 3; the next block contains words 4, 5, 6, 7; and so on. These blocks are mapped into the cache in round-robin fashion: the first block to line 0, the second to the next line, then the third and the fourth, so blocks 4, 8, and 12 will also be mapped to line 0, just as we saw 0, 4, 8, 12 share a line in the previous slide. It continues like that.

This is time to reflect. Now pause the video and answer the question. The question: the cache size is 64 kilobytes, the main memory size is 16 megabytes, and the block size is four words. We need to calculate the lengths of the address mapping fields. The block size is 4 words, so the displacement d = 2 bits. The main memory size is 16 megabytes, and 16 MB = 2^24 bytes, so the total address length is 24 bits. Within those 24 bits, the cache size is 64 kilobytes; 64 KB divided by the 4-byte line size gives 2^14 cache lines.
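The pause-question calculation can be checked with the same arithmetic as before. A sketch, assuming the stated sizes (16 MB main memory, 64 KB cache, 4-byte blocks with 1-byte words):

```python
import math

main_memory = 16 * 2**20  # 16 MB
cache_size = 64 * 2**10   # 64 KB
block_size = 4            # bytes per block (4 one-byte words)

address_bits = int(math.log2(main_memory))    # total address length
d = int(math.log2(block_size))                # displacement bits
r = int(math.log2(cache_size // block_size))  # line (index) bits
tag = address_bits - r - d                    # remaining tag bits

print(address_bits, d, r, tag)  # → 24 2 14 8
```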
So r = 14; the line field is 14 bits in this example. The tag field is the total length minus the displacement and the line field: 24 − 2 = 22 bits remain after the displacement, and 22 − 14 = 8. So for this example the tag field is 8 bits, the line field r = 14 bits, and the displacement d = 2 bits.

Now the advantages and disadvantages. Direct mapping is easy to implement because it does not require any searching technique to find a block in the cache. If we need to check whether block 8 is present in the cache, we go directly to cache line 0 and check there, because block 8 of main memory can only ever be mapped to line 0 of the cache; that is the round-robin arrangement we already know. So the hardware goes directly to that particular cache line and finds whether the block is present or not.

The disadvantage is that there is a fixed cache location for any given block: we must map blocks 0, 4, 8, and so on to line 0 of the cache memory only. Even if cache lines 1, 2, and 3 are free, we cannot map block 8 of main memory to any line other than line 0. So we are not utilizing the cache memory fully. This is the disadvantage of the direct mapping technique.

These are the references I have used. Thank you.