Hello everyone, today we will be discussing performance metrics in cloud computing, focusing specifically on the metrics for cloud platforms that host IT services for the Internet of Things. By the end of this session you will be able to assess the performance metrics of IoT-based cloud platforms. It is always a good idea to begin with an example, so I will first give a short revision of cloud computing, which we have covered in our earlier videos, and then we will focus on some of the most important performance metrics. As you can see here, we have the cloud computing model used in our earlier videos, which is broadly classified into three layers. The moment we switch from a general cloud computing perspective to an Internet of Things perspective, what differentiates the two is that we have some form of analytics in the back end. As you may recall from an earlier video, we have seen a couple of IoT frameworks offered by Amazon Web Services, Microsoft Azure and even IBM's Blue Cloud. Depending on which framework you choose for your IoT application, you will decide the protocols, and those protocols in turn add weight to the overall security of the system and to its ease of use. Metrics such as scalability, bandwidth, overall throughput, the time between failures and the response time together make up the performance metrics.
As you can see in this example, consider any monitoring application that sits in the application layer, such as the AWS analytics services we have seen. Either you can go with AWS analytics, or if you want to begin with a small example that is free of cost, you can register for a free account at thingspeak.com, where you can attach your sensors using the predefined APIs, write firmware for your controller and push the sensed data. Whether it is a monitoring application, collaboration through online documents such as Google Docs, or communication through an online chat program, if these things sit in the application layer, then you need to focus on the protocols that reside under the application layer of your cloud computing platform. The availability of the system and of the service as a whole definitely matters, and this is one of the topmost performance metrics you need to assess before choosing any cloud service. Even if you have already chosen one, you still need to analyze these metrics, because the decision of whether to move to a premium version of your chosen platform or to switch to a service from another vendor has to be made by analyzing the performance metrics. Second is reliability, which is assessed using two times: the mean time between failures (MTBF) and the mean time to repair (MTTR). For example, suppose we are analyzing the reliability of a server over one complete day, that is, 24 hours, and the service was down from 11:00 pm to 11:15 pm.
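As a small, hedged sketch of the ThingSpeak route mentioned above: ThingSpeak channels are updated through a REST `update` endpoint that takes a write API key and `field1`…`field8` parameters. The snippet below only builds that request URL (it does not perform the network call); `MY_WRITE_KEY` and the temperature value are placeholders, not real credentials or data.

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(api_key, **fields):
    # Build a ThingSpeak channel-update URL; names like field1, field2
    # correspond to the channel fields configured on thingspeak.com.
    params = {"api_key": api_key, **fields}
    return THINGSPEAK_UPDATE + "?" + urlencode(params)

# "MY_WRITE_KEY" is a placeholder -- substitute your channel's write API key.
url = build_update_url("MY_WRITE_KEY", field1=25.4)
print(url)
```

In device firmware you would issue an HTTP GET or POST to this URL each time a new sensor reading is taken; separating URL construction from the transport keeps the firmware logic easy to test offline.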
In that case, out of the 24 hours we need to subtract those 15 minutes, because that was the time we were maintaining or repairing the server, which could be due to any number of issues. The point to note is that the server actually gave service for a full 23 hours and 45 minutes, whereas for 15 minutes it was completely down due to maintenance or other issues. For good reliability, the MTBF needs to be very high in comparison with the MTTR, which stands for mean time to repair. The third performance metric, response time, is equally important: for instance, if I am trying to play a video hosted on my server and it takes too much time to respond, irrespective of how many clients are trying to access it in parallel, that is a problem. Response times need to be really quick compared with slow-moving sites. Next is security. You would never want your Gmail account, your personal email, your personal documents or your online services to be hacked, and nobody would like to have their bank account hacked. Everybody wants very good security, so when you are hosting important data on a web service, you do not want that data to be stolen and manipulated. Because of this, you must choose a service where security is a top priority and where very strong encryption schemes, such as RSA or similar models, are available.
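The 24-hour example above can be turned into a number. A common way to combine the two times is the steady-state availability formula, availability = MTBF / (MTBF + MTTR); the sketch below applies it to the lecture's figures of 23 hours 45 minutes up and 15 minutes down.

```python
def availability(mtbf_minutes, mttr_minutes):
    # Steady-state availability: the fraction of time the service is up.
    return mtbf_minutes / (mtbf_minutes + mttr_minutes)

# Lecture example: up for 23 h 45 min (1425 min), down for 15 min of repair.
a = availability(23 * 60 + 45, 15)
print(f"{a:.4%}")  # -> 98.9583%
```

This also shows why MTBF must dominate MTTR: the same 15-minute repair against an MTBF of only a few hours would drag availability well below the "two nines" level.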
Throughput is also sometimes loosely referred to as bandwidth. For example, suppose you are hosting a hundred videos on your website. If your hosting service is unable to upload and download the data within the available bandwidth, that is a mismanagement of bandwidth: even if the consumer has a very high-speed internet connection, your slow service will make them feel that their own connection is slow. A person fetching data from your service at a speed of, say, 2 Mbps should not find that your hosting service cannot deliver that much throughput; if the client is trying to access the data at 2 Mbps and you are serving it at only 1 Mbps, there is definitely going to be a drop in the effective throughput and the overall bandwidth of the service. Points six, seven and eight, that is capacity, scalability and latency, play an important role in determining the overall quality of your system along with the service: how much storage capacity it gives you, for example 1 GB, or 1 GB per month; whether it offers a scalability feature or not; and how much latency the service introduces. For example, suppose you have a large data set of around 1 GB on your hosting server, and later an analytical model running on the service in the back end; if it takes too long to process, that delay is going to count against the reliability and response time as well. So if a particular service is offered with too high a latency, then your response time, your reliability and, all together, your service are going to be degraded.
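The 2 Mbps versus 1 Mbps mismatch above is simple to work through: the slower end of the path caps the achievable throughput, and transfer time is just data size divided by that rate. A minimal sketch (the function names are mine, not from any library):

```python
def effective_throughput_mbps(client_mbps, server_mbps):
    # The bottleneck link determines the end-to-end throughput.
    return min(client_mbps, server_mbps)

def transfer_seconds(size_megabytes, throughput_mbps):
    # 1 megabyte = 8 megabits; time = size / rate.
    return size_megabytes * 8 / throughput_mbps

# Lecture example: a 2 Mbps client fetching from a service serving at 1 Mbps.
rate = effective_throughput_mbps(2, 1)
print(rate)                         # -> 1
print(transfer_seconds(100, rate))  # -> 800.0 seconds for a 100 MB file
```

So even though the consumer paid for 2 Mbps, a 100 MB download takes over 13 minutes instead of under 7, which is exactly why the user blames their own connection.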
So that's all for this video, and these are the references for the resources used. Thank you.