Welcome to this last lecture of the series on high performance computer architecture. In this lecture, I shall consider the topics of cluster, grid and cloud computing. These three topics are interrelated, but they are definitely not the same, and as we cover them one after the other, we shall highlight their differences. First, let us look at the motivation behind cluster computing. There are many applications that require high performance computing, and some examples are given here. First, numerous scientific and engineering applications, like the modeling, simulation and analysis of complex systems such as climate, galaxies, molecular structures, nuclear explosions and weather forecasting. Second, business and internet applications such as e-commerce, web servers, file servers and databases. For these applications, a dedicated, custom-made computer with custom software is pretty costly, and moreover such custom hardware and software is not extensible; it cannot be extended as the requirement grows. Supercomputers could satisfy this high performance computing requirement, but unfortunately they too are not easily extendable. The cost-effective approach is to use cluster, grid or cloud computing, and we shall see how this computing requirement is satisfied in these three situations. 
You may be asking in which situations you would use cluster computing. There are several: where applications have large run times, where real-time constraints must be satisfied, where memory usage is large, where I/O usage is high, and where you require fault tolerance and high availability. As I mentioned, high performance computing is the main motivator; clustering is an alternative to symmetric multiprocessing for providing high performance and availability, and as a consequence it has become a very hot topic in computer system design. How do we define cluster computing? It can be defined as the coordinated use of interconnected autonomous computers in a machine room. That means you perform the computing with the help of an interconnected set of computers inside a single room; they are not dispersed throughout the country or over a large geographical area. Notice the term autonomous: by autonomous we mean that each computer is a complete system that can work independently, without the need of the others. So a cluster is a collection of stand-alone workstations or PCs that are interconnected by a high speed network, typically a local area network, and work as an integrated collection of resources. The important feature is that although you have a large number of systems, they work in an integrated manner and provide a single system image spanning all the nodes. This is a very important concept, and a special software layer is required to provide it, as we shall see. Cluster computing has become popular for two reasons. 
First, personal computers are nowadays pretty powerful and available at a very affordable price. Second, networking components such as switches are also available at very low prices. Because of these advances in networking technology, it is now possible to connect a cluster of workstations with latencies and bandwidths comparable to those of tightly coupled machines. So we can build a system using commodity off-the-shelf (COTS) components: standard PCs or workstations and standard networking components. Clusters started to take off in the 1990s: clusters of IBM and DEC workstations connected by 100 Mb/s Ethernet LANs, HP clusters, and so on. You can see that a standard local area network is used to link the nodes. As I have already mentioned, although you have several interconnected computers, the cluster presents a single system image, gives high performance, and is a low-cost alternative to expensive custom machines. The single system image makes the cluster appear like a single machine to the user. If you look at the different components of a cluster, it consists of stand-alone machines with storage, a fast interconnection network (a high speed LAN), low-latency communication protocols, and software to provide the single system image, known as cluster middleware. So along with fast interconnection hardware, you must use suitable software that provides low-latency communication, and of course you will require a host of programming tools to utilize the cluster. 
Different configurations are possible: passive standby (to make the system more available), active secondary, separate servers, servers connected to disks, and servers with shared disks, as shown in this diagram. Here you have two systems: in each, a number of processors are connected to a shared bus with shared memory. These two shared memory multiprocessors are connected together by a high speed messaging link to work as a cluster. In another configuration, in addition to the high speed messaging link, the two systems share disks in the form of a RAID (redundant array of inexpensive, or independent, disks) accessed through I/O buses; this is the standby server with shared disk configuration. A cluster must also have an operating system, and there are various operating system design issues in the context of clusters. First of all, failure management: whenever you have a large number of computers, some of them may fail, so the system must tolerate failures and recover from them, and the operating system software takes care of this. Then load balancing: the load has to be uniformly distributed among all the nodes in the cluster. Then parallelizing computation: since you will be doing a kind of parallel processing with the help of these computers, you will require parallelizing compilers and parallelized applications, that is, applications which are amenable to parallelism, as well as parametric computation. 
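The load balancing issue mentioned above can be made concrete with a minimal sketch: a greedy scheduler that assigns each job to the currently least-loaded node. The node names and job costs below are invented for illustration; real cluster schedulers also handle failures, priorities and data locality.

```python
# Minimal sketch of cluster load balancing (illustrative, not a real scheduler).

def pick_node(loads):
    """Return the node with the smallest current load."""
    return min(loads, key=loads.get)

def schedule(jobs, nodes):
    """Greedily spread job costs across the cluster nodes."""
    loads = {n: 0 for n in nodes}
    placement = {}
    for job, cost in jobs:
        node = pick_node(loads)
        placement[job] = node
        loads[node] += cost
    return placement, loads

placement, loads = schedule(
    [("j1", 4), ("j2", 2), ("j3", 3), ("j4", 1)],
    ["node0", "node1"],
)
print(loads)  # both nodes end up with a load of 5
```

Even this toy version shows the goal: no node sits idle while another is overloaded.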
Those are the operating system design issues in the context of clusters. There are two basic categories of clusters. Non-dedicated clusters are general purpose in nature: for example, a network of workstations with individual owners, where the spare computation cycles of the nodes are used and background job distribution is done. Dedicated clusters, in contrast, have joint ownership and dedicated nodes, and also allow parallel computing. Another way of classifying clusters is as homogeneous or heterogeneous. A homogeneous cluster has similar processors, with identical or similar operating systems and software present on all the nodes. Alternatively, a heterogeneous cluster can have nodes with different architectures, data formats, computational speeds and system software. Now let us see the advantages and disadvantages of clusters. Obviously, the main advantage is high availability: the cluster is resilient to failures, because with a large number of computers, even if one or two fail, it will still continue to function. The second important feature is incremental extensibility: as your requirement keeps increasing with time, you can keep adding nodes to the cluster in an incremental manner. Also, desktops are cheap and ubiquitous, so you are able to provide high performance computing at a very low cost; there is no need to buy dedicated expensive hardware, since you can build a cluster from commodity off-the-shelf desktops. 
Of course, there are also several disadvantages. Number one is administration complexity: administering an n-node cluster is close to administering n big machines, because each machine is autonomous, with its own hardware, operating system, application software and so on, and each node has to be administered independently and separately. In a shared memory multiprocessor system, by contrast, administering the system is close to administering one big machine; that is not true in the context of clusters, and this also gives a higher cost of ownership. The second important disadvantage is that the computers are connected through the I/O bus. In shared memory multiprocessor or massively parallel processing systems, the processors are connected through the memory bus, which has higher bandwidth and smaller latency; when nodes are connected through the I/O bus, the bandwidth is lower and the latency is higher. The third disadvantage is that an n-machine cluster has n independent memories and n copies of the operating system, as I mentioned earlier: since each node is autonomous, it has its own memory and its own copy of the operating system, so that it can operate independently without the need of the others. 
On the other hand, here is another point in favour of clusters: large computers are produced in small volumes, so their cost has to be amortized over a few systems, which results in a higher price; in clustering you use a large number of low-cost, high-volume systems, so the cost is much smaller. The administrative complexity can be mitigated by constructing the nodes as shared memory multiprocessors and by keeping the storage outside the cluster, as is done with a storage area network. Various networking components can be used to interconnect the nodes: Fast Ethernet, Gigabit Ethernet, Myrinet or InfiniBand, which offer bandwidths ranging from roughly 100 Mb/s up to several Gb/s. So the speed is quite high, but the maximum distance is limited, typically to within 100 meters; that is why I said the cluster sits within a machine room, though 100 meters is quite a long distance, and you can connect about 100 nodes. A system area network tries to optimize network performance over short distances, as InfiniBand does, with much lower protocol overhead and much less security concern than communication through the internet. Now let us have a look at a taxonomy of clusters. One category is the network of workstations, for example Beowulf clusters, which use commodity off-the-shelf PCs with a system area network. Then you can have cluster farms, existing PCs on a LAN which can perform work when idle, and superclusters or constellations, which are clusters of clusters within a campus. 
These are some example cluster organizations. The top diagram shows a uniprocessor cluster: each node has a single processor with its own dedicated memory and I/O, and a large number of such nodes are connected through a 1 Gb Ethernet switch. The second is a two-way symmetric multiprocessor (SMP) cluster, where each node has two processors with shared memory and I/O, again connected through a 1 Gb Ethernet switch. The third diagram shows an 8-way SMP cluster, with 8 processors per node sharing memory and I/O, also connected by a 1 Gb Ethernet switch, which may in turn be connected to the internet. So these are three different types of cluster, based on uniprocessor nodes, 2-way SMP nodes or 8-way SMP nodes. Next, the interconnections can be made as a flat neighborhood network, or FNN. This diagram shows the layout of a 24-node FNN: you have 24 nodes, numbered 0 to 23, each having a processor, memory and I/O devices, connected with the help of three 16-port gigabit switches. One point you should notice is that each node is connected to two of the switches through multiple network interface cards (NIC stands for network interface card): each node has, in this particular case, two NICs, each connecting to one of the switches. In this way, communication between any two nodes can be done in a single hop. 
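The single-hop property of the FNN described above can be illustrated with a small sketch. The switch labels and node-to-switch assignment below are hypothetical; the point is simply that two 2-element subsets of a 3-element set must overlap, so any pair of nodes always shares a switch.

```python
# Illustrative sketch of the flat neighborhood network (FNN) single-hop property.
from itertools import combinations

switches = ["A", "B", "C"]
# each of the 24 nodes is connected to 2 of the 3 switches via its 2 NICs
node_links = {i: set(pair)
              for i, pair in enumerate(list(combinations(switches, 2)) * 8)}

def route(src, dst, links):
    """Return a switch common to both nodes: the single-hop path."""
    common = links[src] & links[dst]
    return sorted(common)[0] if common else None

# since 2 + 2 > 3, every pair of nodes shares at least one switch
assert all(route(a, b, node_links) is not None
           for a in node_links for b in node_links if a != b)
print(route(0, 5, node_links))  # nodes 0 {A,B} and 5 {B,C} meet at switch B
```

The routing table at each node is what makes this work in practice: each node must know which of its NICs reaches which neighbour.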
It does require special routing to be set up at each node. Three 48-port switches can be used to connect 64 PCs using no more than 2 NICs per PC. So this is a very interesting topology that can be used to realize clusters. Now an example: the Beowulf cluster. NASA built the first Linux PC cluster in 1994. It is a low-cost network of PCs connected to one another by a private Ethernet network; the connection to any external network is through a single gateway computer. The configuration uses commodity off-the-shelf components, such as inexpensive computers, and blade components: computers mounted on boards that plug into connectors on a rack backplane, so each blade can simply be pushed in to join the system, as I shall show you. A shared-nothing model is possible with this type of cluster. This picture shows a single Beowulf blade: you can see the connector that attaches to the backplane, and the two processing modules present on a single blade. The Beowulf project at NASA used Linux and public domain software; they did not go for any custom software, only standard commodity off-the-shelf software, with some changes made to the Linux kernel to support things like channel bonding, in which multiple Ethernet channels are combined into a single virtual channel to overcome bandwidth limitations. We know that Ethernet has a limited bandwidth, such as 100 Mb/s, or 1 Gb/s when you go for Gigabit Ethernet. 
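The channel bonding just mentioned can be sketched as follows: several physical links are combined into one virtual link by spreading packets across them. The NIC names and the simple round-robin policy are illustrative assumptions, not the actual Linux bonding driver.

```python
# Hedged sketch of channel bonding: packets are spread round-robin over
# several physical NICs to form one faster virtual channel.
from itertools import cycle

def bond(packets, nics):
    """Distribute packets over the bonded NICs in round-robin order."""
    assignment = {nic: [] for nic in nics}
    for pkt, nic in zip(packets, cycle(nics)):
        assignment[nic].append(pkt)
    return assignment

nics = ["eth0", "eth1", "eth2"]
out = bond(list(range(9)), nics)
# each link carries a third of the traffic, so three 1 Gb/s links
# behave roughly like a single 3 Gb/s virtual channel
print(out)  # {'eth0': [0, 3, 6], 'eth1': [1, 4, 7], 'eth2': [2, 5, 8]}
```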
But if you want higher bandwidth, you can use this special technique: with channel bonding you combine multiple Ethernet channels into a single virtual channel, so you can have more than 1 Gb of bandwidth and overcome the bandwidth limitation. Beowulf clusters have become very popular, and you can have up to 1000 nodes in a single system. Another example is the Google infrastructure. Google serves on average around 1000 queries per second; all of you are familiar with the services provided by Google, such as search and Gmail, and these are serviced with the help of this infrastructure. In addition to serving queries, a search engine must crawl the web periodically to have up-to-date information. For web search, more than 6000 PCs and 12000 disks give around 1 petabyte of disk storage. Rather than using RAID, the Google infrastructure relies on redundant storage and sites. Each PC runs Linux, and the biggest source of failure in this infrastructure is software, because the PCs, disks and other components are all redundant. This diagram shows the Google infrastructure: you can see 40 racks connected by four copper gigabit Ethernet links each to two 128 x 128 switches. One rack contains 80 PCs, and with 40 such racks you can imagine the total number of PCs present. The switches connect outward over OC-12 (622 Mb/s) and OC-48 (2.488 Gb/s) links, which is the rate at which you can communicate with each end switch. 
The end switches give fast access to the entire infrastructure, and there are two such switches, one at each end. The next pictures show a single rack, a close-up view of one rack of PCs, and another view of a cluster implemented along the lines of the Google infrastructure. Then you can have clusters of shared memory multiprocessors, often called constellations. Efficient shared memory programming is possible at the individual nodes, because each node is a shared memory multiprocessor, and this organization is better suited for implementing distributed shared memory. It is meaningful only if a P-way node is not much more expensive than P individual nodes; that is what has been observed, and that is why it is done. An example is the Orion system. So we have discussed one very important topic, cluster computing: how clusters are implemented, various topologies, and some examples at the end. Now we shall focus our attention on another way of computing: grid computing. Why the name grid computing? The term originated in the early 1990s as a metaphor for making computing power as easy to access as an electric power grid. All of you use electric power: an electric line is connected to each and every house, and we can access it without much bother. We know that each plug point will provide 220 volts or 120 volts, depending on where you are. The main feature is that the power grid is transparent to the appliance. 
Whether you are running a computer or some other appliance, it does not matter: you get the power and can run different types of systems without bothering about where the power comes from or how it is made available. It is also pervasive, available everywhere, and any power point provides the same power, say 220 volts at each and every point; availability is uniform from different places. Similarly, just like the power grid, a compute grid is transparent to the user: although it is distributed throughout the country and accessed through a wide area network, it is pervasive, and the compute power is independent of where a job is initiated. You can initiate a job from anywhere and get the same computing power for your application. So the question naturally arises: what do we really mean by grid computing? A grid is an alternative to traditional large parallel or distributed systems that can provide better computational power for high-end applications, and it is an innovative extension of distributed computing technology. Grid computing is a kind of distributed computing, but not in the way traditional distributed computing is done. It leverages a combination of hardware and software virtualization and the distributed sharing of those virtual resources. These are the two basic concepts: first, hardware and software virtualization; second, distributed sharing of the virtualized resources. The resources can include all elements of computing: hardware, software, applications, networking, services, pervasive devices and complex footprints of computing power. So you can see that many different elements are virtualized and made available. 
Grid computing is sometimes also referred to as CPU scavenging, cycle scavenging, cycle stealing or shared computing: it creates a grid from unused resources in a network of participants. What is happening here is that you have a large number of participants, all performing their own jobs; however, each of them may have some extra computing power available, which is utilized by others, through the internet, in grid computing. This can be done worldwide or internal to an organization. As IBM's description of grid computing puts it, the basic idea is to use open standards and protocols to gain access to computing resources over the internet. You gain access to computing resources over the internet, using a large number of small systems spread across a large geographical region, and the grid presents a unified picture to the users. You are not consciously accessing each different computer; you are accessing a grid, and where your computing power comes from is transparent to you. The availability of high speed networking surmounts the distance problem. You may be asking how you are able to get high performance through a wide area network: the reason is that, because of advances in wide area networking technology, you can now have high-bandwidth access through the internet, and that is how the distance problem is surmounted. Grids can be divided into several types: one is the data grid, another is the compute grid. 
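The cycle-scavenging idea described above can be illustrated with a small sketch: a participant machine processes work units from a coordinator only while its own load is low. The load threshold and the work-unit format are assumptions for illustration, not any real grid client.

```python
# Hypothetical sketch of CPU/cycle scavenging on one participant machine.

def scavenge(work_units, load_samples, threshold=0.25):
    """Consume work units only during samples where the machine is idle."""
    done = []
    units = iter(work_units)
    for load in load_samples:
        if load < threshold:          # machine is idle enough right now
            unit = next(units, None)
            if unit is None:
                break
            done.append(unit * unit)  # stand-in for the real analysis work
    return done

# busy samples (0.9) contribute nothing; each idle sample finishes one unit
print(scavenge([1, 2, 3, 4], [0.1, 0.9, 0.2, 0.9, 0.1]))  # [1, 4, 9]
```

Multiply this by hundreds of thousands of participants and the aggregate throughput becomes very large, even though each machine contributes only its spare cycles.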
Data grids focus on data location, data transfer, data access and the critical aspects of security, because they handle data: the collection, storage and retrieval of data from a large number of databases. Similarly, you can have compute grids, which provide users with compute power for solving jobs. Their features are: the ability to allow independent management of computing resources; the ability to provide mechanisms that can intelligently and transparently select computing resources capable of running a user's job; an understanding of the current and predicted loads on grid resources; resource availability, dynamic resource configuration and provisioning; failure detection and failover mechanisms; and appropriate security mechanisms for secure resource management, access and integrity. An example is SETI@home, which is an example of a compute grid. You can have three different types of grid environments: global grids, enterprise grids and cluster grids. Global grids are collections of enterprise and cluster grids available across the globe. Enterprise grids serve multiple projects of different departments, using resources within an enterprise or campus; since an enterprise grid is restricted within an enterprise, it does not need to address the security and global policy management issues that are required in the case of global grids. The third type, the cluster grid, is the simplest form of grid: it provides compute service at the project or department level. The key benefits of cluster grid architectures are, number one, maximized use of compute resources such as existing machines, and, number two, increased throughput for user jobs. 
A cluster grid is a superset of compute resources such as workstation clusters, and cluster grids can operate in a heterogeneous environment with mixed server types, mixed operating systems and mixed workloads; this is another very important feature. Next, a comparison between peer-to-peer (P2P) computing and grid computing. All of you are familiar with peer-to-peer computing, which involves a large number of loosely connected nodes and is primarily used for sharing files: you distribute files over a large number of nodes for efficient sharing and then access them from different nodes. Grids, on the other hand, provide a unified view and target solving specific types of problems, such as scientific research or business logic support; they are not restricted to file sharing. Now a comparison of cluster computing versus grid computing. As I have mentioned, cluster computing can be said to be a subset of grid computing: cluster nodes are in close proximity and interconnected by a LAN, whereas grid nodes are geographically separated and connected through a wide area network (WAN). Clusters provide a guarantee of service, and the nodes are expected to give their full resources. Grids, on the other hand, are usually a heterogeneous collection of clusters; since a grid is distributed throughout the network, the availability and performance of grid resources are unpredictable, and requests from within an administrative domain may gain higher priority than requests from outside. This is one of the many reasons why the availability and performance of grid resources are unpredictable. 
Here is that example, SETI@home, which was developed to detect alien signals received by the Arecibo radio telescope, one of the largest telescopes. It used the idle cycles of computers to analyze the data generated from the telescope. There were over 500,000 active participants, most of whom ran a screensaver on a home PC; the computing power of these participants was used to analyze the telescope data, and the performance was on average over 20 teraflops. So you can see that you can achieve massive performance because of the large number of participants, although each of them is probably contributing only a fraction of their computing power. Now, at some level all grid applications have common needs: how to find resources, how to acquire resources, how to locate and move data, how to start and monitor computation, and how to manage it all securely and conveniently. The software used for this purpose is known as grid middleware: a single software infrastructure that supports all the above features. These are the different middleware components. Number one is the Grid Information Service (GIS), which supports registering and querying grid resources. Then there is the grid resource broker: users submit their application requirements to the resource broker, which discovers resources by querying the GIS. Then you have the grid fabric, which manages resources like computers, storage devices, scientific instruments and so on. The core grid middleware offers services like process management, allocation of resources, security and quality of service. Finally, the user-level grid middleware offers services like programming tools, the resource broker, and scheduling of application tasks for execution on global resources. 
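The middleware flow just described, where resources register with the GIS and the broker queries it to match a user's job, can be sketched as follows. All resource names, attributes and the matching policy are invented for illustration; real middleware such as the Globus Toolkit is far richer.

```python
# Hedged sketch of the GIS + resource broker interaction described above.

class GIS:
    """Toy Grid Information Service: resources register, brokers query."""
    def __init__(self):
        self.resources = []

    def register(self, name, cpus, mem_gb):
        self.resources.append({"name": name, "cpus": cpus, "mem_gb": mem_gb})

    def query(self, min_cpus, min_mem_gb):
        return [r for r in self.resources
                if r["cpus"] >= min_cpus and r["mem_gb"] >= min_mem_gb]

def broker(gis, job):
    """Pick the smallest resource that still satisfies the job."""
    candidates = gis.query(job["cpus"], job["mem_gb"])
    return min(candidates, key=lambda r: (r["cpus"], r["mem_gb"]),
               default=None)

gis = GIS()
gis.register("siteA", cpus=64, mem_gb=256)
gis.register("siteB", cpus=8, mem_gb=32)
match = broker(gis, {"cpus": 4, "mem_gb": 16})
print(match["name"])  # siteB satisfies the job with the least capacity
```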
So you can see that grid computing is feasible over a large geographical area with the help of these grid middleware components. Now we have come to the last topic: cloud computing. Cloud computing is also a kind of distributed computing, and in some ways it is similar to grid computing. With grid computing, one can provision computing resources as a utility that can be turned on and off: you provision a resource, and after the provisioning is done you may utilize the entire resource or only part of it, and when you do not need it you turn it off. But whenever a resource is used this way, there is no guarantee that you are utilizing it fully. Cloud computing goes one step further with on-demand resource provisioning: instead of turning a whole resource on and then possibly using it only partially, you use exactly as much as you need, whenever you need it. This is on-demand resource provisioning, in contrast to the on-off type of resource provisioning done in grid computing. Cloud computing is internet-based computing whereby shared resources, software and information are provided to computers and other devices on demand. One big advantage of on-demand provisioning is that you pay only for the bandwidth and server resources that you need, and when the requirement is over, you turn the whole thing off; you pay only for the part you use. Here are the benefits of cloud computing: the customer avoids the capital expenditure of the company. 
That means here what you are doing is not deploying costly infrastructure yourself; you are using third-party infrastructure to perform your job. This is one very important concept. It reduces the cost of purchasing physical infrastructure by renting the usage from a third-party provider. So, you are only renting a part of the resources provided by a third party. In contrast to grid computing, this eliminates over-provisioning when used with utility pricing. That means, as I mentioned, grid computing may lead to over-provisioning, but in the case of cloud computing over-provisioning is not done. As a consequence, you pay only a small amount when used with utility pricing; that means you are using a utility, and as much as you use, you pay for that. It also removes the need to over-provision in order to meet the demands of millions of users. You see, not only are you paying less, but it also helps others, and this you should understand: you are consuming only as much as you need, so the remaining part is available for others. In this way cloud computing can afford to give service to a large number of users. And with cloud computing, companies can scale up to massive capacities in an instant without having to invest in new infrastructure, train new personnel or license new software. So, you can see that whenever a company needs massive computing it can scale up by hiring resources from a third party, and as soon as the high performance computing is over it simply turns them off. Similarly, you can use massive storage whenever you need it, and when you do not need it you simply turn it off. Also, you can use special software without the need to license new software; the license is not even in your name, the license is in the name of the service provider.
You are simply using it and paying for your usage. So, without holding a license for the software, without purchasing the software, and without purchasing the hardware, you are able to use them to your benefit. This is the benefit of cloud computing. And there are three important segments. The first one is applications, that is, Software as a Service, known as SaaS in short. SaaS provides a complete turnkey application, such as enterprise resource management, through the internet. So, what you are doing here is using a complete application through the internet, provided by a third party. The second segment of cloud computing is platform, that is, Platform as a Service, known as PaaS. PaaS offers a full or partial application development environment that users can access. So, a platform is provided to you for full or partial application development, and that is what you get with this second, platform segment. The third is infrastructure, that is, Infrastructure as a Service, or IaaS. Here a consumer can get the service of a full computer infrastructure through the internet. So, without procuring it, you are able to get the service of a full computer infrastructure through the internet, provided by a third party. These are the three main segments available through cloud computing. Now, let us look at the different benefits and advantages of cloud computing. The benefits are: you pay per use, which means you pay only for as much as you use; and you have instant scalability, so as soon as your requirement increases you can scale up very quickly, and you can scale down very quickly as well, in an instant, because you are not doing the deployment, it is deployed by others.
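The three segments can be summarized as a small lookup of who manages which layer of the stack. The four-layer split used here (hardware, OS, runtime, application) is a common textbook simplification chosen for this sketch, not a formal standard.

```python
# Sketch of the SaaS / PaaS / IaaS segments named above, as a lookup of
# which layers the provider manages versus the customer. The layer split
# is an illustrative simplification.

SEGMENTS = {
    "SaaS": {"provider": ["hardware", "os", "runtime", "application"],
             "customer": []},
    "PaaS": {"provider": ["hardware", "os", "runtime"],
             "customer": ["application"]},
    "IaaS": {"provider": ["hardware"],
             "customer": ["os", "runtime", "application"]},
}

def who_manages(segment, layer):
    # Return "provider" or "customer" for a given segment and layer.
    info = SEGMENTS[segment]
    return "provider" if layer in info["provider"] else "customer"

print(who_manages("SaaS", "application"))  # provider runs the whole app
print(who_manages("IaaS", "os"))           # customer installs its own OS
```

Reading the table top to bottom matches the lecture's ordering: each segment hands one more layer of responsibility back to the customer.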
So, based on your requirement you can scale it up and scale it down, and you pay only for your use. Then, you can have high security; high security is provided through cloud computing. And you can have high reliability as well, because in cloud computing, as you have seen, you have a large number of computers and systems, and that gives you high reliability; it also gives you high availability. These are the benefits of cloud computing, and the advantages of cloud computing are listed here. It provides you a lower cost of ownership: since you are not deploying anything and not buying any infrastructure, hardware or software, your cost of ownership is very small. It also reduces infrastructure management responsibility. The infrastructure management responsibility is not with you; you are not really performing the infrastructure management, it is done by a third party and you are simply using it. This is one advantage. It also allows for unexpected resource loads: whenever you face an unexpectedly large load, you can very easily scale up to meet it. Then there is faster application roll-out: because of these advantages, when you are working on an application using the infrastructure and resources of cloud computing, you can complete it and roll out your application very quickly. These are the main advantages. And cloud economics is based on multi-tenancy, which means multiple users share the resources as tenants, and on virtualization, which lowers cost by increasing utilization. As I have mentioned, hardware and software are virtualized, and this lowers cost by increasing the utilization of the resources, together with the economies of scale afforded by the technology.
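The multi-tenancy and virtualization point ("lowers cost by increasing utilization") reduces to simple arithmetic, sketched here with invented figures: a lightly used dedicated server versus several tenants consolidated onto one virtualized machine.

```python
# Toy multi-tenancy arithmetic; all figures are made up for illustration.

def per_tenant_cost(server_cost, tenants):
    # One physical server's cost split across the tenants sharing it.
    return server_cost / tenants

def utilization(used_cpu, total_cpu):
    # Fraction of the machine's capacity that is actually doing work.
    return used_cpu / total_cpu

# Dedicated server: one tenant uses 10% of a machine it pays for fully.
print(utilization(0.1, 1.0))      # 0.1 -> 10% utilization
print(per_tenant_cost(100.0, 1))  # 100.0 per tenant

# Virtualized server: eight such tenants share one machine at 80% load.
print(utilization(0.8, 1.0))      # 0.8 -> 80% utilization
print(per_tenant_cost(100.0, 8))  # 12.5 per tenant
```

Pushing utilization from 10% to 80% is exactly what lets the provider divide one server's cost across eight tenants, which is the cost reduction the lecture attributes to virtualization.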
So, since a large number of people are using it, the cost is divided among a large number of users; these are the economies of scale afforded by the technology. The technology itself provides this sharing of cost across a large number of users. And there is an automated update policy: as the requirements keep increasing, the infrastructure is also upgraded to satisfy the requirements of the users, whether it is a hardware resource or a software resource. So, these are the different aspects of cloud computing. To summarize, in this lecture we have discussed three important types of computing. The first one is cluster computing, which is primarily used for high performance computing, and we have seen that the computers are within a small geographic area, that is, a LAN. On the other hand, in grid computing and cloud computing the resources are distributed over the internet; of course, in grid computing it is done in one way and in cloud computing in another, and because of its many benefits and advantages cloud computing is becoming increasingly popular. Thank you.