Hello and welcome to this session on threads in a distributed computing environment. At the end of this session, students will be able to explain the thread mechanism in a distributed computing environment.

Threads are a popular way to improve application performance through parallelism. In a traditional operating system, the basic unit of CPU utilization is a process. Each process has its own program counter, its own register states, its own stack and its own address space. On the other hand, in an operating system with a threads facility, the basic unit of CPU utilization is a thread. In such operating systems, a process consists of an address space and one or more threads of control. Each thread of a process has its own program counter, its own register state and its own stack, but all threads of a process share the same address space.

Threads share a CPU in the same way as processes do. That is, on a uniprocessor, threads run in a time-sharing mode, whereas on a shared-memory multiprocessor, as many threads can run simultaneously as there are processors, as shown in figure A and figure B. Moreover, like traditional processes, threads can create child threads, can block waiting for system calls to complete, and can change states during the course of execution. At a particular instant of time, a thread can be in any one of several states: running, blocked, ready or terminated. Due to these similarities, threads are often viewed as mini-processes. Hence, threads are often referred to as lightweight processes, and traditional processes are referred to as heavyweight processes.

The main motivations for using a multi-threaded process instead of multiple single-threaded processes for performing some computation activities are as follows. First, the overheads involved in creating a new process are in general considerably greater than those of creating a new thread within a process.
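The point that all threads of a process share one address space can be seen in a small sketch using Python's standard threading module (the variable names here are illustrative, not from the lecture): every thread reads and writes the same `counter` variable directly, which is exactly why access must be synchronized with a lock.

```python
import threading

counter = 0                      # lives in the process's shared address space
lock = threading.Lock()          # guards the shared variable

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:               # all threads touch the same memory, so synchronize
            counter += 1

# Creating a thread is much cheaper than creating a whole new process,
# since no new address space has to be set up.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                   # 4 threads x 1000 increments on shared memory
```

Had these been four separate processes, each would have had its own copy of `counter`, and combining the results would have required explicit interprocess communication.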
Second, switching between threads sharing the same address space is considerably cheaper than switching between processes that each have their own address space. Third, threads allow parallelism to be combined with sequential execution and blocking system calls: parallelism improves performance, and blocking system calls make programming easier. Fourth, resource sharing can be achieved more efficiently and naturally between the threads of a process than between processes, because all threads of a process share the same address space.

One of the following three models may be used to construct a server process. First, as a single-threaded process. This model uses blocking system calls but provides no parallelism. In this method, the file server gets a client's file-access request from the request queue, checks the request for access permission and, if access is allowed, checks whether a disk access is needed to service the request. If a disk access is not needed, the request is serviced immediately and a reply is sent to the client process. Otherwise, the file server sends a disk-access request to the disk server and waits for a reply. After receiving the disk server's reply, it services the client's request, sends the reply to the client process, and goes back to get the next request from the request queue. Here, the programming of the server process is simple because of the use of blocking system calls: after sending its request, the file server simply blocks until a reply is received from the disk server. However, if a dedicated machine is used for the file server, the CPU remains idle while the file server is waiting for a reply from the disk server.

Second, as a finite-state machine. This model supports parallelism but uses non-blocking system calls. In this method, the server is implemented as a single-threaded process that operates like a finite-state machine. An event queue is maintained in which both client request messages and reply messages from the disk server are queued.
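The single-threaded server loop described above can be sketched as a toy simulation. This is only an illustration under assumed names: `disk_read` stands in for the blocking call to the disk server, and the request dictionaries stand in for client messages; none of these names come from the lecture.

```python
from collections import deque

def disk_read(block):
    # Stands in for the blocking disk-server call: the single thread
    # can do nothing else while it waits for this to return.
    return f"data-{block}"

def serve(request_queue):
    replies = []
    while request_queue:
        req = request_queue.popleft()          # get next client request
        if not req.get("allowed", True):       # check access permission
            replies.append((req["client"], "denied"))
            continue
        if "block" in req:                     # disk access needed: server blocks here
            data = disk_read(req["block"])
            replies.append((req["client"], data))
        else:                                  # no disk access: service immediately
            replies.append((req["client"], "served"))
    return replies

queue = deque([{"client": 1, "block": 7}, {"client": 2}])
print(serve(queue))
```

Because there is only one thread, requests are strictly serialized: client 2 is not served until the (blocking) disk access for client 1 has completed.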
Whenever the thread becomes idle, it takes the next message from the event queue. If it is a client request message, a check is made for access permission and for the need for a disk access. If a disk access is needed to service the request, the file server sends a disk-access request message to the disk server. However, instead of blocking, it records the current state of the client's request in a table and then goes to get the next message from the event queue. This message may either be a request from a new client or a reply from the disk server for a previous disk-access request. If it is a new client request, it is processed as discussed above. On the other hand, if it is a reply from the disk server, the state of the client request that corresponds to the reply is retrieved from the table, and the client's request is processed further.

Third, as a group of threads. This model supports parallelism with blocking system calls. In this method, the server process consists of a single dispatcher thread and multiple worker threads. Either the worker threads can be created dynamically whenever a request comes in, or a pool of threads can be created at startup time to handle simultaneous requests. The dispatcher thread keeps waiting in a loop for requests from clients. When a client request arrives, it checks the request for access permission. If permission is allowed, it either creates a new worker thread or chooses an idle worker thread from the pool and hands over the request to that worker thread. Control is then passed on to the worker thread, and the dispatcher thread's state changes from running to ready. Now the worker thread checks to see whether a disk access is needed for the request or whether it can be satisfied from the block cache. This method achieves parallelism while retaining the idea of sequential processes that make blocking system calls.

Pause the video, think and answer. Here, the answer is A: a process having multiple threads of control implies that it can do more than one task at a time.
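The dispatcher-and-workers arrangement above can be sketched with a pool of worker threads fed through a shared queue. This is a minimal sketch under assumed names (`requests`, `results`, the `None` shutdown sentinel are illustrative choices, not part of the lecture): the dispatcher hands each accepted request to the pool, and each worker uses a simple blocking `get` while other workers run in parallel.

```python
import queue
import threading

requests = queue.Queue()         # dispatcher puts requests here; workers take them
results = []
lock = threading.Lock()

def worker():
    while True:
        req = requests.get()     # blocking call: worker sleeps until work arrives
        if req is None:          # sentinel value tells the worker to shut down
            break
        with lock:
            results.append(f"served {req}")

# Pool of worker threads created at startup time.
pool = [threading.Thread(target=worker) for _ in range(3)]
for t in pool:
    t.start()

# Dispatcher loop: accept each client request and hand it to the pool.
for client in ["a", "b", "c", "d"]:
    requests.put(client)

for _ in pool:
    requests.put(None)           # one shutdown sentinel per worker
for t in pool:
    t.join()

print(sorted(results))
```

Each worker is written as straightforward sequential code with blocking calls, yet several requests can be in service at once, which is precisely the appeal of this model over the finite-state-machine approach.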
There are three commonly used ways to organize the threads of a process. First, the dispatcher-workers model. In this model, the process consists of a single dispatcher thread and multiple worker threads. The dispatcher thread accepts requests from clients and, after examining each request, dispatches it to one of the free worker threads for further processing. Each worker thread works on a different client request, so multiple client requests can be processed in parallel. The model is shown in the diagram.

Second, the team model. In this model, all threads behave as equals, in the sense that there is no dispatcher-worker relationship for processing client requests; each thread gets and processes client requests on its own. This model is often used for implementing specialized threads within a process, where each thread of the process is specialized in servicing a specific type of request. Therefore, multiple types of requests can be handled simultaneously by the process. An example of this model is shown in the diagram.

Third, the pipeline model. This model is useful for applications based on the producer-consumer model, in which the output data generated by one part of the application is used as input by another part of the application. In this model, the threads of the process are organized as a pipeline, so that the output data generated by the first thread is used for processing by the second thread, the output of the second thread is used for processing by the third thread, and so on. The output of the last thread in the pipeline is the final output of the process to which the threads belong. An example of this model is shown in the diagram.

Here are my references. Thank you.