Hello friends, I am Rohit Kumar R. Vakhtarikar, working as an assistant professor in the Computer Science and Engineering department at Walsh Institute of Technology, Swalapur. Today we are going to learn about multiprocessor scheduling.

Learning outcomes: at the end of this session, students will understand how multiprocessor scheduling is used for load balancing to reduce the load on any one CPU.

Now let's see the basics of multiprocessor scheduling. The name tells us everything: in multiprocessor scheduling, the system has multiple processors available for execution, and there are n processes waiting to run. If all n processes come for execution on a single processor, that processor takes a heavy load, and while executing all of these processes its throughput becomes very poor. To improve this, the load balancing concept was introduced, and to perform load balancing we use multiprocessor scheduling: multiple processors sharing multiple processes. This makes scheduling more complex than on a single-processor system.

Here we consider homogeneous multiprocessor systems. Homogeneous means all the processors have the same configuration, so we can easily move a process from one processor to another. While moving a process between processors in this way, the process must be in the ready queue.

So let's see the different approaches to multiprocessor scheduling. There are two approaches: the first is asymmetric multiprocessing, and the second is symmetric multiprocessing. In asymmetric multiprocessing we have two types of processors: one master processor and the remaining slave processors.
The master processor is the one that decides everything, such as scheduling and I/O processing, while the other processors just execute user code. That arrangement is asymmetric multiprocessing. It is simple, and it reduces the need for data sharing, because only the master accesses the system data structures. Let's see an example. In this diagram we have four processes: P1, P2, P3 and P4, and one master processor that manages the slave processors. While the different processes execute, the master processor first schedules the work onto each slave, and only then do the processes run.

Next, symmetric multiprocessing (SMP). In symmetric multiprocessing, all processes may be in one common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler on each processor examine its ready queue and select a process to execute. So in SMP each processor is self-scheduling: it may have its own private ready queue, and its own scheduler takes charge of scheduling its processes. Let's see its diagram. Here we have CPU1, CPU2, CPU3 and CPU4, all connected through one bus to a shared memory, and the different processes P1, P2, P3 and P4. While executing these different processes, the CPUs can share data with each other through that common memory. That is symmetric multiprocessing. In asymmetric multiprocessing the slave CPUs make no scheduling decisions of their own, but in symmetric multiprocessing every CPU schedules itself and memory is shared among all the processors.

Now let's see processor affinity. The system tries to avoid the migration of a process from one processor to another and tries to keep a process running on the same processor.
That is processor affinity. The reason is cost: while a process runs on a processor, it builds up state there (for example, cached data), and if the process migrates to another processor that state must be rebuilt. So processor affinity means avoiding migration and trying to keep a process running on the same processor.

There are two types of processor affinity: the first is soft affinity and the second is hard affinity. Soft affinity: the operating system has a policy of attempting to keep a process running on the same processor, but does not guarantee that it will do so. The system tries to avoid migration, yet the process may still be moved to another processor. Hard affinity: some systems, such as Linux, also provide system calls that support hard affinity, which allow a process to specify the set of processors on which it may run, so the kernel will not migrate it outside that set.

Now, what does load balancing mean? Load balancing is the mechanism that keeps the workload evenly distributed across the different processors in a symmetric multiprocessing system. Load balancing is necessary only on systems where each processor has its own private queue of processes eligible to execute; with a single common ready queue, an idle processor simply takes the next ready process.

For load balancing there are two approaches: the first is push migration and the second is pull migration. Let's see push migration first. In push migration, a specific task periodically checks the load on each processor, and if it finds an imbalance it evenly distributes the load by moving (pushing) processes from the overloaded processor to an idle or less busy processor.
That is push migration: a balancer task pushes processes from one processor to another. The other approach is pull migration, which occurs when an idle processor pulls a waiting task from a busy processor for its own execution. In other words, the idle processor takes another processor's process and executes it itself. That is pull migration.

Now let us see this question: how is the workload equally distributed across all processors in an SMP system? Dear friends, pause this video and write your answer. Let's see the answer: by using the load balancing concept. In load balancing, work is divided equally among the processors by using push migration and pull migration.

Now let's see a basic introduction to multicore processors. In a multicore processor, multiple processor cores are placed on the same physical chip, so a single physical chip contains more than one processor. Each core has its own register set to maintain its architectural state, and thus appears to the operating system as a separate physical processor.

Next, the memory stall: when a processor accesses memory, it may spend a significant amount of time waiting for the data to become available. This situation is called a memory stall. Basically it occurs because of a cache miss: when a process needs data, the hardware first looks in the cache, and if a cache miss happens the data must be fetched from main memory (or even secondary storage), so the processor has to wait. That waiting situation is the memory stall.
To solve this problem, recent hardware designs implement multithreaded processor cores, in which two or more hardware threads are assigned to each core. Therefore, if one thread stalls while waiting for memory, the core can switch to another thread. In this way hardware threads are used to hide memory stalls.

Now let us see the references: Operating System Concepts by Silberschatz, Galvin and Gagne, and Systems Programming and Operating Systems by Dhamdhere. Thank you.