Hello and welcome to the session on the client-server paradigm. At the end of this session, students will be able to describe the client-server paradigm and to discuss the peer-to-peer paradigm and its applications.

The purpose of a network, or an internetwork, is to provide services to users. A user at a local site wants to receive a service from a computer at a remote site. One way to achieve this is to run two programs: the local computer runs a program to request a service from a remote computer, and the remote computer runs a program to provide the service to the requesting program. This means that two computers connected by an internet must each run a program, one to provide a service and one to request it.

At first glance it looks simple to enable communication between two application programs, one running at the local site and the other at the remote site. But many questions arise when we want to implement this approach, and some of them are as follows.

First, should both application programs be able to request services and provide services, or should each application program do only one or the other? One solution is to have an application program, called the client, running on the local machine, request a service from another application program, called the server, running on the remote machine. In other words, the tasks of requesting a service and providing a service are separated from each other.

The second question: should a server provide services only to one specific client, or should the server be able to provide services to any client that requests the type of service it provides? The most common solution is a server that provides a service to any client needing that type of service, not to one particular client; the server-client relationship is one-to-many.

The third question: should a computer run only one program, either a client or a server?
The solution is that a computer connected to the internet should be able to run any program, client or server, as long as the appropriate software is available.

The next question: when should an application program be running, all the time or just when there is a need for the service? Generally, a client program, which requests a service, should run only when it is needed. But a server program, which provides a service, should run all the time, because it does not know when its service will be needed.

Another question: should there be only one universal application program that can provide any type of service a user wants, or should there be one application program for each type of service? In TCP/IP, services that are needed frequently and by many users have specific client-server application programs.

Server. A server is a program running on the remote machine, providing services to the clients. When it starts, it opens the door for incoming requests from clients, but it never initiates a service until it is requested to do so. A server program is an infinite program: when it starts, it runs infinitely unless a problem arises. It waits for incoming requests from clients, and when a request arrives, it responds to the request either iteratively or concurrently.

Client. A client is a program running on the local machine requesting a service from a server. A client program is finite, which means it is started by the user and terminates when the service is complete. A client opens the communication channel using the IP address of the remote host and the well-known port address of the specific server program running on that machine. After a channel of communication is opened, the client sends its request and receives a response. Even if the request-response part is repeated several times, the whole process is finite and eventually comes to an end.

Then, concurrency. Both clients and servers can run in concurrent mode. First, concurrency in clients.
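The two roles just described can be sketched with Berkeley sockets in Python. This is a minimal illustration, not code from the lecture: the port is chosen by the operating system as a stand-in for a well-known port, and the request and reply texts are invented.

```python
import socket
import threading

# Create the listening socket first, so the client cannot connect
# before the server's "door" is open. Port 0 asks the OS for a free
# port; it stands in for a well-known port in this sketch.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen()
PORT = srv.getsockname()[1]

def server():
    """Server role: waits at its port and never initiates a service;
    it only responds when a request arrives."""
    conn, _ = srv.accept()              # wait for an incoming request
    request = conn.recv(1024)
    conn.sendall(b"SERVED: " + request) # respond to the request
    conn.close()

def client(port, request):
    """Client role: finite -- started on demand, opens the channel
    using the server's address and port, then terminates."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.sendall(request)
    reply = cli.recv(1024)
    cli.close()
    return reply

t = threading.Thread(target=server)
t.start()
reply = client(PORT, b"hello")
t.join()
srv.close()
print(reply.decode())   # SERVED: hello
```

Note how the server only ever reacts: it would run forever in a real deployment, while the client runs once and exits.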
Clients can be run on a machine either iteratively or concurrently. Running clients iteratively means running them one by one: one client must start, run, and terminate before the machine can start another client. Most computers today, however, allow concurrent clients, meaning that two or more clients can run at the same time.

Concurrency in servers. An iterative server can process only one request at a time: it receives a request, processes it, and sends the response to the requester before it handles another request. A concurrent server, on the other hand, can process many requests at the same time and share its time between many requests.

Servers use either UDP, a connectionless transport-layer protocol, or TCP or SCTP, which are connection-oriented transport-layer protocols. Server operation therefore depends on two factors: the transport-layer protocol and the service method. Theoretically, we can have four types of servers: connectionless iterative, connectionless concurrent, connection-oriented iterative, and connection-oriented concurrent, as you can see in this figure.

Now, the connectionless iterative server. Servers that use UDP are normally iterative, which means that the server processes one request at a time. The server gets the request received in a datagram from UDP, processes the request, and gives the response to UDP to send to the client. While serving one request, the server pays no attention to the other datagrams; those datagrams are stored in a queue, waiting for service. The server uses one single port for this purpose, the well-known port, and all the datagrams arriving at this port wait in line to be served, as shown in this diagram. You can see here that client 1 sends one request, client 2 sends two requests, and client 3 sends two requests. The first request, made by client 1, goes into the incoming queue of the server.
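A connectionless iterative server of the kind just described can be sketched in Python with a UDP socket. This is an illustrative sketch, not lecture code: the port is OS-chosen, the payloads are invented, and the server handles a fixed number of datagrams so the example terminates instead of running forever.

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))   # OS-chosen port stands in for the well-known port
PORT = srv.getsockname()[1]

def iterative_server(n_requests):
    # One socket, one queue: UDP buffers arriving datagrams, and the
    # server takes them strictly one at a time, finishing each
    # response before looking at the next datagram in line.
    for _ in range(n_requests):
        data, addr = srv.recvfrom(1024)
        srv.sendto(b"done:" + data, addr)

t = threading.Thread(target=iterative_server, args=(3,))
t.start()

# Three clients, each sending one datagram to the same port.
replies = []
for msg in (b"c1-r1", b"c2-r1", b"c3-r1"):
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.sendto(msg, ("127.0.0.1", PORT))
    replies.append(cli.recvfrom(1024)[0])
    cli.close()

t.join()
srv.close()
print(replies)
```

All three datagrams arrive at the single well-known port and are served in order, which is exactly the queue behavior the diagram illustrates.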
Then the first request from client 2 goes into the incoming queue of the server for processing. Then the first request of client 3 goes into the incoming queue, after that the second request of client 2, and after that the second request of client 3. All these requests sit in the incoming queue, waiting for the server's time to process them. Here the server uses the well-known UDP port: all the incoming requests from different clients are put into the incoming queue and then processed one by one. So that is the connectionless iterative server.

Now, the connection-oriented concurrent server. Servers that use TCP or SCTP are normally concurrent, which means that the server can serve many clients at the same time. The communication is connection-oriented, which means that a request is a stream of bytes that can arrive in several segments, and the response can also occupy several segments. A connection is established between the server and each client, and the connection remains open until the entire stream is processed and the connection is terminated.

This type of server cannot use only one port, because each connection needs a port and many connections may be open at the same time. Many ports are needed, but the server has only one well-known port. The solution is to have one well-known port and many ephemeral ports. The server accepts connection requests at the well-known port; a client makes its initial approach to this port to establish a connection. After the connection is established, the server assigns a temporary port to this connection to free the well-known port. Data transfer now takes place between these two temporary ports, one at the client side and the other at the server side, and the well-known port is free for another client to make a connection.
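It is worth noting that in the Berkeley sockets API the bookkeeping differs slightly from this conceptual model: `accept()` returns a new socket for the connection, but that socket keeps the server's well-known local port, and connections are told apart by the client's ephemeral port in the socket pair. The sketch below (assumed ports and names, not lecture code) shows the client's ephemeral port as seen from both ends.

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # OS-chosen port stands in for the well-known port
srv.listen()
WELL_KNOWN = srv.getsockname()[1]

seen = {}

def accept_one():
    # accept() returns a NEW socket for this connection while the
    # original socket keeps listening at the well-known port.
    conn, peer = srv.accept()
    seen["client_port"] = peer[1]                 # the client's ephemeral port
    seen["server_port"] = conn.getsockname()[1]   # still the well-known port
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", WELL_KNOWN))
ephemeral = cli.getsockname()[1]   # port the OS picked for the client side
t.join()
cli.close()
srv.close()
print(WELL_KNOWN, ephemeral)
```

Either way, the effect the lecture describes is achieved: the well-known port stays free to accept the next client while established connections carry on independently.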
To serve several clients at the same time, the server creates child processes, which are copies of the original (parent) process. The server must also have one queue for each connection: the segments that come from a client are stored in the appropriate queue and are served concurrently by the server, as you can see in this diagram.

You can see here that the server creates a child server process for each client that attaches to it. Each child server uses an ephemeral port number, and the connection is built between two ephemeral port numbers, one at the client and one at the child server. First, client 1 sends a request, which goes into the queue of its child server. Then client 2 sends two requests, which go to the second child process, to which client 2's ephemeral port and that child server's ephemeral port are bound. After that, client 3 sends two segments, which go into the input queue of its child server. Each child server then processes the segments in its queue and sends the response back to its respective client. So this is the scenario of the connection-oriented concurrent server.

So here, pause the video, think, and answer. Thank you.
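The parent/child structure above can be sketched as follows. This is a hedged illustration, not lecture code: threads stand in for child processes to keep the example portable, the port is OS-chosen, and the messages are invented. Each child serves one connection so the parent can return to the well-known port for the next client.

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # OS-chosen port stands in for the well-known port
srv.listen()
PORT = srv.getsockname()[1]

def child(conn):
    # A child handler serves one client's stream; its queue is the
    # connection's own receive buffer.
    data = conn.recv(1024)
    conn.sendall(b"ok:" + data)
    conn.close()

def parent(n_clients):
    # The parent loops at the well-known port, handing each accepted
    # connection to its own child so it can immediately accept the
    # next client.
    for _ in range(n_clients):
        conn, _ = srv.accept()
        threading.Thread(target=child, args=(conn,)).start()

t = threading.Thread(target=parent, args=(3,))
t.start()

# Three clients connect one after another; each gets its own child.
replies = []
for msg in (b"a", b"b", b"c"):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", PORT))
    c.sendall(msg)
    replies.append(c.recv(1024))
    c.close()

t.join()
srv.close()
print(replies)
```

A classic UNIX implementation would call `fork()` where this sketch starts a thread; the structure (one parent at the well-known port, one child per connection) is the same.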