Now that we are comfortable with the architecture, essentially the service broker, for providing cloud services, let's look at models for optimizing the delivery of services. For that I refer to ITU-T Recommendation Y.3508, which again falls in the Y.3500 to Y.3599 series of cloud computing recommendations. We are going to look at the deployment perspective, how the cloud could be distributed among different stakeholders, and then we are going to look at the specific models. So what essentially necessitates, or triggers, the requirement for different models? First of all, the environment of cloud service delivery is challenging because it is a pay-as-you-need model: the number of customers is arbitrary and can vary significantly over time and over space. The challenge becomes more complex once we have real-time services, where timely delivery between the provider and the consumer is itself a challenge. Add to this the burden of load balancing: the consumer is connected to the cloud infrastructure through a certain interface in one geographical proximity, so the load could concentrate on a certain data center. Is it possible to migrate or offload the burden on a particular data center by distributing it? It is very intuitive to think about different distribution models, and the most logical approach is divide and conquer: the nearest consumers should be handled by providers in their geographical proximity. So distributed cloud services can be seen as an international, or global, management activity which manipulates cloud resources in a distributed manner by aggregating the infrastructure end to end; that is, we would have entities like the core cloud, the regional cloud, and the edge cloud.
So the core cloud, as the name suggests, is what mediates, or provides transit, between multiple regional clouds, and the regional clouds, again in a hierarchical manner, provide connectivity and mediation between multiple edge clouds. So we have the core clouds, which have the largest resources and the global management interfaces. The regional clouds could be optional: since the core cloud could in principle be present in every nook and corner of the world, it could actually provide services directly to the consumer. Well, that could be one way, but there are multiple stakeholders and different administrative jurisdictions, so we need some kind of regional presence of the cloud as well, with a boundary easily distinguishable from the core cloud. It is therefore an optional entity, deployed in a certain geographical region for load sharing and service-quality enhancement. Of course, as we have seen and as we agree, the more hardware components and infrastructure we have, the better the cloud services are going to be, so service quality is enhanced through the participatory role of the regional clouds. Then we have the edge clouds. The edge, as the name suggests, is the cloud presence closest to the consumer, and it has the smallest resources. Why would that be? Because the users are essentially subscribing to the global cloud, and the global cloud may or may not delegate the responsibility of executing a certain service to the edge cloud. The advantage of having an edge cloud is that it allows local customization which has no bearing on the global disposition and configuration of the core cloud. So this actually helps the customers have their own flavor of service delivery while not disturbing the entire outlook of the global cloud. Let's look at the hierarchical relationship between the edge, regional, and core clouds.
So all of this essentially turns into some configuration arrangements, or configuration models. Let's start with the leftmost side. Horizontally, we have the cloud computing model, in which the end device interacts directly with provider A, which is a core cloud. Moving along the x-axis from the cloud computing model, we come to distributed cloud model 1, model 2, and model 3. In the second situation, that is model 1, the end device talks to a distributed cloud which is split between the core cloud and the regional cloud, each with its own array of shared responsibilities. In model 2, we have the edge cloud talking directly to the core cloud. And in the last situation, the cloud service provider D scenario, we have model 3, the fourth configuration, where the end device talks to the edge cloud; the edge cloud in turn talks to the regional cloud, and the regional cloud to the core cloud. This is more akin to, if you recall, the Cisco hierarchical network architecture: similarly, we have the edge or access layer, then the distribution layer, and then the core layer. So essentially the reachability and the coverage are enhanced by deploying more clouds, and the work is also offloaded, so the load balancing becomes fair; all these advantages arise very naturally. Let's look at an interesting example of cloud computing using one of these models, namely model 3, the fourth configuration. Here we have a machine learning service, where on the rightmost side we have the end devices, the cloud service consumers, which have some sensory data; just as on the Amazon cloud we could upload or migrate data, the end devices send their data to the cloud. Now the edge clouds pass this on to the regional cloud, and the regional cloud passes it on to the core cloud. The core cloud is now going to train the machine learning algorithm on this particular data set, this corpus.
Now, after having trained it, the core cloud shares the trained model, the learned weights, with the regional cloud, which caches it and subsequently shares it with the edge cloud. So the edge cloud handles the testing phase: the collecting, training, and caching phases precede the testing phase, and the test phase, the actual utilization phase, is executed on the edge network. So the edge network first acts as an intermediary for sharing, or uploading, the data right up to the core. The core, being the most compute-intensive, learns the patterns and shares them with the regional cloud, which in turn acts as an intermediary and passes them back to the edge. The edge then acts as the computing environment where this trained machine learning model actually provides its services to the end device. Again, this is from ITU-T Recommendation Y.3508, Overview and high-level requirements of distributed cloud.
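The data and model flows just described, data moving edge to regional to core for training, and the trained model moving back core to regional (cached) to edge for inference, can be sketched as a toy pipeline. Everything here is a stand-in: the function names are hypothetical, and the "model" is just a fitted threshold rather than a real learning algorithm, used only to make the direction of each flow concrete.

```python
# Toy sketch of the distributed ML example: data flows up the hierarchy
# for training, and the trained model flows back down for inference.
from statistics import mean

def collect_at_edge(sensors):
    """Edge cloud gathers raw sensor readings from end devices."""
    return list(sensors)

def relay_via_regional(edge_batches):
    """Regional cloud relays batches from several edge clouds to the core."""
    return [x for batch in edge_batches for x in batch]

def train_at_core(corpus):
    """Core cloud does the compute-intensive training.
    'Training' here just fits a threshold -- a stand-in for a real model."""
    threshold = mean(corpus)
    return lambda x: x > threshold        # the 'trained model'

def infer_at_edge(model, reading):
    """Edge cloud serves the cached model close to the consumer."""
    return model(reading)

edge_a = collect_at_edge([1.0, 2.0, 3.0])
edge_b = collect_at_edge([4.0, 5.0, 6.0])
corpus = relay_via_regional([edge_a, edge_b])   # edge -> regional -> core
model = train_at_core(corpus)                   # trained once, centrally
cached_model = model                            # core -> regional cache -> edge
print(infer_at_edge(cached_model, 5.5))         # inference runs at the edge
```

The design point mirrors the lecture: the expensive step (training) happens once at the resource-rich core, while the latency-sensitive step (serving the model) runs at the edge, with the regional tier acting purely as an intermediary and cache.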