Hello and welcome to the session on Mutual Exclusion in a Distributed Computing Environment. At the end of the session, students will be able to identify mutual exclusion in a distributed computing environment. There are several resources in a system that must not be used simultaneously by multiple processes if program operation is to be correct. For example, a file must not be simultaneously updated by multiple processes. Similarly, use of unit-record peripherals such as tape drives or printers must be restricted to a single process at a time. Therefore, exclusive access to such a shared resource by a process must be ensured. This exclusive access is called mutual exclusion between processes. The sections of a program that need exclusive access to shared resources are referred to as critical sections. For mutual exclusion, means are introduced to prevent processes from executing concurrently within their associated critical sections. An algorithm for implementing mutual exclusion must satisfy the following requirements. First is mutual exclusion: given a shared resource accessed by multiple concurrent processes, at any time only one process should access the resource. That is, a process that has been granted the resource must release it before it can be granted to another process. Second is no starvation: if every process that is granted the resource eventually releases it, every request must eventually be granted. Think and answer: when can a process enter its critical section? The answer is (c), when it receives a reply message from all other processes in the system. In the centralized approach, one of the processes in the system is elected as the coordinator and coordinates entry to the critical sections. Each process that wants to enter a critical section must first seek permission from the coordinator. 
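To make the idea of a critical section concrete, here is a minimal sketch in Python using a lock on a single machine. The counter and worker names are illustrative; the point is only that the read-modify-write on the shared variable is a critical section, and the lock enforces mutual exclusion over it.

```python
import threading

# A shared counter updated by multiple threads; the update is the
# critical section and must not be executed concurrently.
counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10_000):
        with lock:        # only one thread at a time may hold the lock
            counter += 1  # critical section: read-modify-write

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no increment is lost
```

Without the lock, two threads could read the same old value and both write back the same new value, losing an increment; in a distributed system there is no shared lock, which is why the message-based algorithms below are needed.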
If no other process is currently in that critical section, the coordinator can immediately grant permission to the requesting process. However, if two or more processes concurrently ask for permission to enter the same critical section, the coordinator grants permission to only one process at a time in accordance with some scheduling algorithm. When a process exits its critical section after executing it, it must notify the coordinator so that the coordinator can grant permission to another process, if any, that has also asked for permission to enter the same critical section. An algorithm for mutual exclusion that uses the centralized approach is shown here with the help of an example. As shown in the figure, suppose that there is a coordinator process PC and three other processes P1, P2, and P3 in the system. Also assume that requests are granted in first-come, first-served order, for which the coordinator maintains a request queue. Suppose P1 wants to enter a critical section, for which it sends a request message to PC. On receiving the request message, PC checks to see whether some other process is currently in that critical section. Since no other process is in the critical section, PC immediately sends back a reply message granting permission to P1. When the reply arrives, P1 enters the critical section. Now suppose that while P1 is in the critical section, P2 asks for permission to enter the same critical section by sending a request to PC. Since P1 is already in the critical section, P2 cannot be granted permission. The exact method used to deny permission varies from one algorithm to another. In this algorithm, let us assume that the coordinator does not return any reply and the process that made the request remains blocked until it receives the reply from the coordinator. Therefore PC does not send a reply to P2 immediately and enters its request in the request queue. 
Again suppose that while P1 is in the critical section, P3 also sends a request to PC asking for permission to enter the same critical section. Obviously P3 cannot be granted permission, so no reply is sent immediately to P3 by PC and its request is entered in the request queue. Now suppose P1 exits the critical section and sends a release message to PC, releasing its exclusive access to the critical section. On receiving the release message, PC takes the first request from the queue of deferred requests and sends a reply message to the corresponding process, granting it permission to enter the critical section. Therefore in this case PC sends a reply message to P2. On receiving the reply message, P2 enters the critical section, and when it exits the critical section it sends a release message to PC. Again PC takes the first request from the request queue, in this case the request of P3, and sends a reply message to the corresponding process P3. On receiving the reply message, P3 enters the critical section, and when it exits the critical section it sends a release message to PC. Now, since there are no more requests, PC keeps waiting for the next request message. Regarding the advantages of the centralized approach: this algorithm ensures mutual exclusion because at any time the coordinator allows only one process to enter a critical section. The algorithm also ensures that no starvation will occur because of the use of the first-come, first-served scheduling policy. The main advantage of this algorithm is that it is simple to implement and requires only three messages per critical section entry: a request message, a reply message, and a release message. However, it suffers from the usual drawbacks of centralized schemes, that is, a single coordinator is subject to a single point of failure and can become a performance bottleneck in a large system. In the distributed approach, the decision making for mutual exclusion is distributed across the entire system. 
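The coordinator's behavior in the P1, P2, P3 walkthrough above can be sketched as a small state machine. This is an illustrative sketch, not the lecture's own code: the Coordinator class and method names are assumptions, and real message passing is replaced by direct calls, but the grant/defer/release logic and the first-come, first-served queue follow the description.

```python
from collections import deque

class Coordinator:
    """Sketch of the centralized mutual exclusion coordinator (PC)."""

    def __init__(self):
        self.holder = None      # process currently in the critical section
        self.pending = deque()  # deferred requests, first-come, first-served

    def request(self, pid):
        """A process asks for permission; True means granted immediately."""
        if self.holder is None:
            self.holder = pid       # reply at once: grant entry
            return True
        self.pending.append(pid)    # defer the reply; requester stays blocked
        return False

    def release(self, pid):
        """The holder exits its critical section; grant the next waiter."""
        assert self.holder == pid, "only the holder may release"
        if self.pending:
            self.holder = self.pending.popleft()  # reply to first waiter
            return self.holder
        self.holder = None          # queue empty: wait for the next request
        return None

pc = Coordinator()
print(pc.request("P1"))  # True  - P1 enters immediately
print(pc.request("P2"))  # False - P1 holds the section, P2 is queued
print(pc.request("P3"))  # False - P3 is queued behind P2
print(pc.release("P1"))  # P2    - first deferred request is granted
print(pc.release("P2"))  # P3
print(pc.release("P3"))  # None  - no more requests; PC waits
```

Each entry costs exactly the three messages the lecture counts: one request, one reply, one release, here modeled as the call, its return value, and the release call.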
That is, all processes that want to enter the same critical section cooperate with each other before reaching a decision on which process will enter the critical section next. The first such algorithm was presented by Lamport, based on his event ordering scheme. Later, Ricart and Agrawala proposed a more efficient algorithm that also requires a total ordering of all events in the system. An example of a distributed algorithm for mutual exclusion, the Ricart-Agrawala algorithm, is shown here. Here we assume that Lamport's event ordering scheme is used to generate a unique timestamp for each event in the system. When a process wants to enter a critical section, it sends a request message to all other processes. The message contains the following information: the process identifier of the process, the name of the critical section that the process wants to enter, and a unique timestamp generated by the process for the request message. On receiving a request message, a process either immediately sends back a reply message to the sender or defers sending a reply, based on the following rules. If the receiver process is itself currently executing in the critical section, it simply queues the request message and defers sending a reply. If the receiver process is currently not executing in the critical section but is waiting for its turn to enter the critical section, it compares the timestamp in the received request message with the timestamp in its own request message that it has sent to other processes. If the timestamp of the received request message is lower, it means that the sender process made its request before the receiver process did, so the receiver process immediately sends back a reply message to the sender; otherwise it queues the request and defers its reply. If the receiver process is neither in the critical section nor waiting for its turn to enter the critical section, it immediately sends back a reply message. Thank you.
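The three receive-side rules just described can be sketched as a single decision function. This is an assumed, simplified rendering, not the algorithm's official code: the on_request function and the state dictionary are illustrative, and timestamp ties are broken by process identifier, which is the standard way Lamport timestamps are made totally ordered.

```python
def on_request(state, req_ts, req_pid):
    """Ricart-Agrawala receive-side rule: reply now or defer the reply.

    state: dict with keys 'in_cs' (bool), 'waiting' (bool),
           'own_ts' (timestamp of our own pending request, if any),
           'pid' (our process identifier),
           'deferred' (list of queued requests).
    Returns 'reply' or 'defer'.
    """
    if state['in_cs']:
        # Rule 1: executing in the critical section -> queue and defer.
        state['deferred'].append((req_ts, req_pid))
        return 'defer'
    if state['waiting']:
        # Rule 2: waiting for our turn -> compare timestamps,
        # breaking ties by process identifier for a total order.
        if (req_ts, req_pid) < (state['own_ts'], state['pid']):
            return 'reply'          # sender asked first: reply immediately
        state['deferred'].append((req_ts, req_pid))
        return 'defer'              # our request is older: defer the reply
    # Rule 3: neither in the section nor waiting -> reply immediately.
    return 'reply'

s = {'in_cs': False, 'waiting': True, 'own_ts': 5, 'pid': 2, 'deferred': []}
print(on_request(s, 3, 1))  # 'reply' - timestamp 3 beats our timestamp 5
print(on_request(s, 7, 3))  # 'defer' - our pending request is older
```

A process enters the critical section only after every other process has replied, which is exactly the quiz answer given earlier in the session; on exit it sends the deferred replies from its queue.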