Hello. Today we are going to discuss memory management policies. Before we get into the policies themselves, let us look at the learning outcome of this topic. At the end of this session, students will be able to understand different approaches to memory management, including swapping and demand paging. In the last lecture we discussed swapping; now we will discuss the swapping algorithm and demand paging.

First of all, what is swapping? Swapping is the process of moving some pages out of main memory and moving others in. It consists of bringing each process into physical memory in its entirety and running it. When a process is no longer in use, it is terminated or swapped out to the disk. This we have already seen in the previous lecture.

Now let us look at the swap-in algorithm. In this algorithm, process 0, called the swapper, is the only process that swaps processes into main memory from the swap device. At the conclusion of system initialization, the swapper goes into an infinite loop in which its only task is to do process swapping: it attempts to swap processes in from the swap device, and it swaps processes out when it needs space in main memory. The kernel schedules the swapper to execute just as it schedules other processes, but the swapper executes only in kernel mode. The swapper makes no system calls; it uses internal kernel functions to do swapping. When the swapper wakes up to swap processes in, it examines all processes that are in the state "ready to run but swapped out" and selects the one that has been swapped out the longest. If there is enough free memory available, the swapper swaps the process in, reversing the operation done for swapping out: it allocates physical memory, reads the process in from the swap device, and frees the swap space.
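The selection rule just described, namely pick the ready-to-run-but-swapped-out process that has been on the swap device the longest, can be sketched in a few lines. This is only an illustrative simulation, not kernel code; the `Process` class, its field names, and the state label are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    state: str            # e.g. "ready-swapped", "running", "sleeping"
    swapped_out_at: int   # time the process was swapped out (lower = out longer)

def pick_swap_in(processes):
    """Return the ready-to-run-but-swapped-out process that has been
    swapped out the longest, or None if no such process exists."""
    candidates = [p for p in processes if p.state == "ready-swapped"]
    if not candidates:
        return None
    # Smallest swap-out timestamp = longest resident on the swap device.
    return min(candidates, key=lambda p: p.swapped_out_at)

procs = [
    Process(pid=1, state="running",       swapped_out_at=0),
    Process(pid=2, state="ready-swapped", swapped_out_at=10),
    Process(pid=3, state="ready-swapped", swapped_out_at=5),
]
```

Here `pick_swap_in(procs)` chooses process 3, since it was swapped out earlier than process 2 and process 1 is not a candidate.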
If the swapper successfully swaps in a process, it searches the set of ready-to-run but swapped-out processes for others to swap in, and repeats the above procedure, as shown in this algorithm.

Now we come to another case, the fork swap. The description of the fork system call so far assumed that the parent process found enough memory to create the child's context; that is the normal meaning of fork. To create a child process, the fork system call is invoked, and the child process is created while the parent is processing the fork call. In case of a memory shortage, however, the child process is placed in the ready-to-run state on the swap device. When the swap is complete, the child process exists on the swap device; the parent places the child in the ready-to-run state and returns to user mode. Since the child is in the ready-to-run state, the swapper will eventually swap it into memory, where the kernel will schedule it. The child will then complete its part of the fork system call and return to user mode. This is how fork swap works in memory management.

Let us see an example related to fork: copy-on-write in fork. What is copy-on-write in fork? When a fork system call is issued, a copy of all the pages belonging to the parent process would have to be created and loaded into a separate memory location by the OS for the child process. To avoid this cost, a technique called copy-on-write (COW) is used. With this technique, when a fork occurs, the parent process's pages are not copied for the child process; instead, the pages are shared between the child and the parent. Whenever the parent or the child modifies a page, a separate copy of that particular page alone is made for the process, parent or child, which performed the modification. That process will then use the newly copied page rather than the shared one in all future references.
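This behavior, where a write by one process after fork does not affect the other, can be observed from user space. The sketch below is a hedged demonstration on a POSIX system (it relies on `os.fork`, which is Unix-only); the function name is invented, and the child reports its view of the variable through its exit status.

```python
import os

def fork_cow_demo():
    """Fork a child; the child modifies its copy of x (shared
    copy-on-write with the parent until the write) and reports it
    via its exit status. The parent's copy remains unchanged."""
    x = 1
    pid = os.fork()
    if pid == 0:
        x = 2            # the child's modification stays private to the child
        os._exit(x)      # return the child's view of x as its exit status
    _, status = os.waitpid(pid, 0)
    child_x = os.WEXITSTATUS(status)
    return x, child_x    # parent still sees 1; the child saw 2
```

Calling `fork_cow_demo()` returns `(1, 2)`: the child's write produced a private copy, while the parent continued to see its original value.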
The other process, the one which did not modify the shared page, continues to use the original copy of the page, which is now no longer shared. This technique is called copy-on-write because the page is copied only when some process writes to it. It is an important optimization related to fork swap, and the kernel manages all of this while executing copy-on-write in the fork system call.

Now we come to another case, called expansion swap. What is expansion swap? When a process requires more memory than it is currently allocated, the kernel performs an expansion swap. To do this, the kernel first reserves enough space on the swap device. Then the address translation mapping is adjusted for the new virtual address space, but the physical memory is not yet allocated. Finally, the kernel swaps the process out to the assigned space on the swap device in a normal swapping operation, zeroing out the newly allocated space. Later, when the kernel swaps the process back into main memory, it assigns memory according to the new address translation mapping.

Now, based on what we have seen so far, there are some questions. The first question: in which mode does the swapper operate? The answer is that the swapper operates only in kernel mode, because the swapper always resides in the kernel and takes care of tasks such as fork swap. Second question: does the swapper use system calls? The answer is that it does not use system calls; instead, it uses internal kernel functions for swapping. Again there are questions based on the previous contents. What are the requirements for the swapper to work? Since the swapper always resides inside the kernel, what requirements does it have? The answer is that the swapper runs at the highest scheduling priority: it works with the scheduler and executes only at that high scheduling priority.
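The expansion swap steps, reserving backing store, extending the address translation map without allocating physical memory, and zeroing the newly allocated space, can be modeled with a small simulation. This is a hedged sketch of the bookkeeping only; the map representation and function name are invented for the illustration.

```python
def expansion_swap(page_map, extra_pages):
    """Extend a process's address translation map by extra_pages pages.
    New entries have no physical frame yet (frame=None) and are marked
    zero-filled, mirroring the kernel zeroing the newly allocated space.
    Returns the old size of the map."""
    old_size = len(page_map)
    page_map.extend(
        {"frame": None, "zero_filled": True} for _ in range(extra_pages)
    )
    return old_size

# A process with one resident page asks for two more pages.
page_map = [{"frame": 7, "zero_filled": False}]
old_size = expansion_swap(page_map, 2)
```

After the call the map has three entries; the two new pages have `frame=None` because physical memory is assigned only later, when the process is swapped back in under the new mapping.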
The next question: what is the criterion for choosing a process to swap into memory from the swap device? The answer is the resident time of the process on the swap device. This matters because the choice depends on the priority of the process and the amount of time the process has been swapped out. That is why the main criterion for choosing a process to swap into memory from the swap device is its resident time on the swap device.

Now we move to demand paging, which is a very important concept in the Unix system. Demand paging is similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into main memory; instead, it just begins executing the new program after loading its first page, and fetches that program's pages as they are referenced. While executing a program, if the program references a page which is not available in main memory (because it was swapped out earlier), the hardware treats this as an invalid memory reference: a page fault occurs, and control transfers from the program to the operating system, which demands the page back into memory. Not all pages of a process need to reside in main memory. Here we see the principle of locality and the working set, page faults, page hits, and page replacement strategies such as LRU (least recently used) and optimal page replacement. The kernel suspends the execution of the process until it reads the page into memory and makes it accessible to the process.

So what are the advantages of demand paging? The advantages are large virtual memory, more efficient use of memory, and no limit on the degree of multiprogramming. And what are the disadvantages?
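To make the page replacement idea mentioned above concrete, here is a small simulation of LRU replacement that counts page faults for a reference string. It is an illustrative sketch, not a kernel algorithm; the function name, the frame count, and the reference string are invented for the example.

```python
from collections import OrderedDict

def lru_faults(references, num_frames):
    """Count page faults for a reference string under LRU replacement."""
    frames = OrderedDict()   # keys = resident pages, ordered least-recent first
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # page hit: mark most recently used
        else:
            faults += 1                     # page fault: page must be brought in
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults
```

For example, with three frames the reference string 1, 2, 3, 1, 4, 2 incurs five faults: three cold-start faults, a hit on page 1, then two more faults that evict pages 2 and 3 in LRU order.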
The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paging management techniques.

So what are the data structures required for demand paging? They are: first, the page table entry; second, the disk block descriptor; third, the page frame data table (pfdata); and fourth, the swap-use table. We can see these in the figure, which shows the data structures for demand paging, in particular the page table entry and the disk block descriptor. A region points to a page table in which the page table entries are made, and all the page information is stored in that page table. A page table entry contains the physical address of the page, followed by these fields: age, copy on write, modify, reference, valid, and protection. The disk block descriptor contains the swap device number, the block number, and the type, describing the disk copy of the virtual page.

To summarize the page table entry bits: valid means the page contents are a legal reference; reference means the page was referenced recently; modify means the page contents have been modified; copy on write, as we just discussed, means the kernel must create a new copy when a process modifies the page's contents, which is required for the fork system call; age is the age of the page; and protection records the read and write permissions. These are the references for this topic. Thank you.
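The page table entry fields just summarized can be sketched as a simple structure, together with what happens on a write to a copy-on-write page. This is an illustrative model only, not an actual kernel layout; the class name, field types, and the `write_page`/`alloc_frame` helpers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    page_address: int        # physical address of the page
    age: int = 0             # how long the page has been in memory
    copy_on_write: bool = False
    modify: bool = False     # set when the page contents are changed
    reference: bool = False  # set when the page is referenced
    valid: bool = False      # contents are a legal reference
    protection: str = "rw"   # read/write permissions

def write_page(entry, alloc_frame):
    """Model a store to the page: on a copy-on-write page the kernel
    first gives the writer a private copy, then sets the bits."""
    if entry.copy_on_write:
        entry.page_address = alloc_frame()  # private copy for the writer
        entry.copy_on_write = False
    entry.modify = True
    entry.reference = True
    return entry
```

A write to a COW page thus moves the entry to a fresh frame and clears the copy-on-write bit, while an ordinary write only sets the modify and reference bits.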