Hello and welcome to the next lecture in the course on Introduction to Computer and Network Performance Analysis using queuing systems. I am Professor Varshapte, a faculty member in the Department of Computer Science and Engineering, IIT Bombay. Today we will talk about some results for M/G/1 queues and for memoryless arrivals. As usual, I am going to show the slides that remind you of open queuing systems and the parameters of these queuing systems. By now you should be very familiar with this; it is just for your reference. These are the metrics that we are interested in, and this slide shows a summary of pretty much everything we have done till now, in a completely general G/G/c/K sense. If you let K go to infinity, some of these metrics give you the G/G/c values for an infinite buffer. For example, we know that there is no loss in infinite-buffer systems, so the loss probability P_L becomes 0 for infinite K; the loss probability is simply 0 whenever the buffer is infinite. The throughput here also becomes lambda if K is infinite, and the utilization becomes lambda tau divided by c if K is infinite. So all of these have nice values. But beyond that, all we really have are some relationships between these metrics. In the last one or two lectures we have been studying Little's law, and we have done some examples and a case study with it. We know that we can relate, for example, the number of customers and the response time, and likewise the number in queue and the waiting time. And I also want to remind you that even the utilization law is nothing but an application of Little's law to the server: if you relate the throughput lambda with the service time, you get the utilization. So we know how these metrics are related to each other.
For a G/G/c queue you can actually get the exact utilization, lambda tau over c. But the remaining metrics are only related to each other; we do not yet know the response time, nor the waiting time, nor the number in system, nor the number in queue. So do we have any result that can help us get at least one of these? It turns out that a formula for the average response time does exist for the M/G/1 queue. What is an M/G/1 queue? The inter-arrival time is exponential, meaning memoryless; that is the M. The service time is general; that is the G. And we have a single server and an infinite buffer. For this queue a formula for the response time exists, called the Pollaczek-Khinchine formula. We are not going to prove it in this course; I am just going to state the result and do some examples so that you can understand it. So this is the formula; let me explain what goes into it. First, C_s squared is the squared coefficient of variation of the service time. What is the squared coefficient of variation? I am going to show you another slide for this. The coefficient of variation is defined for any random variable; in our case, of course, we are looking at service times. Suppose the service time has mean tau and variance, which everybody knows, sigma_s squared. Then the squared coefficient of variation is defined as C_s squared = sigma_s squared divided by tau squared. What is this? It is the variance divided by the mean squared. Why is this particular metric useful? It is basically a normalized measure of variance. What do I mean? Let me give an example. Assume there is a random variable X whose variance sigma_X squared is 10, and whose mean is 100. Now assume there is a random variable Y, with variance sigma_Y squared.
Y's variance sigma_Y squared is again 10, the same variance, but suppose its mean tau_Y is around 10,000 (X's mean was tau_X = 100). Looking at these numbers, I would like to ask you which of these random variables feels inherently more variable, X or Y. X shows a variance of 10 on a mean of 100, whereas Y shows a variance of 10 on a mean of 10,000. It is kind of intuitive that X is the more variable of the two, because 10 on 100 is more significant than 10 on 10,000. This is exactly what the coefficient of variation captures. In this case X will have a squared coefficient of variation of 10 divided by 100 squared, that is, 10 divided by 10^4, while Y will have 10 divided by 10,000 squared, that is, 10 divided by 10^8. Obviously 10/10^4 is greater than 10/10^8, so this metric shows that X has relatively more inherent variability than Y. That is what it captures, and it is this coefficient of variation that is used in the Pollaczek-Khinchine response time formula. The response time is, of course, the service time tau plus the waiting time, and the waiting time term is lambda tau squared times (C_s squared + 1), divided by 2 times (1 minus rho): R = tau + lambda tau^2 (C_s^2 + 1) / (2 (1 - rho)). So what does this formula say? When you see a formula you should always try to understand the story it tells. The story of this formula, which, by the way, is sometimes just called the PK formula instead of Pollaczek-Khinchine, is that the response time increases non-linearly with utilization: as rho increases, 1 minus rho decreases.
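The PK formula as just stated can be written down as a small Python sketch. The function and argument names are my own, not from the lecture; the formula itself is exactly the one above.

```python
def pk_response_time(lam, tau, cs2):
    """Mean response time of an M/G/1 queue (Pollaczek-Khinchine).

    lam : arrival rate (lambda)
    tau : mean service time
    cs2 : squared coefficient of variation of service time
    """
    rho = lam * tau  # utilization; must be < 1 for a stable queue
    if rho >= 1:
        raise ValueError("unstable queue: rho >= 1")
    wait = lam * tau**2 * (cs2 + 1) / (2 * (1 - rho))  # mean waiting time
    return tau + wait

# Example: lambda = 1.5/s, tau = 0.5 s, exponential service (Cs^2 = 1).
# Then rho = 0.75 and R works out to 2.0 seconds.
print(pk_response_time(1.5, 0.5, 1.0))  # -> 2.0
```

Note how the `1/(1 - rho)` factor in `wait` is what makes the response time blow up as the utilization approaches 1.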
If 1 minus rho decreases, then 1 over (1 minus rho) increases; that is the factor here, and it increases non-linearly because 1 minus rho sits in the denominator. So there is a non-linear increase of response time with respect to increasing utilization. The second point: suppose the mean service time and the arrival rate are the same, which means the utilization is the same. Even then, the response time will be higher if the squared coefficient of variation of the service time is higher, that is, if this normalized variance is higher. Clearly, if the mean service time is the same, then whichever service-time distribution has the higher variance will also have the higher coefficient of variation, and if that is higher, the response time is higher. This is a very interesting point, and you should think about why it is the case that even with the same mean service time and the same arrival rate, a higher variance of the service time alone gives a higher response time. The reason lies in the waiting time: the service-time component of the response time is fixed on average, but think about a queue where, say, tau is 10 milliseconds. If the service time has high variance, then by some randomness the customers ahead may have service times of, say, 20 milliseconds, 15 milliseconds, and 10 milliseconds, while this last customer's own service time is just 1 millisecond.
So it is entirely possible, if the service time is highly variable, that there are some high-service-time customers in front of a customer whose own service time is only 1 millisecond; that customer gets stuck behind all of them, and its response time is actually going to be greater than 45 milliseconds. So the response time increases with variance because there is a compounding effect: high service times feed into the waiting times of everyone behind, and you get more samples of high waiting times when the variance is higher. You can think about this. Now let us see some examples. We will first look at an M/D/1 queue. This is an example of a service time with no variance: sigma_s squared is 0, so C_s squared is also 0. The service time is fixed, deterministic; D is for deterministic. If it is fixed, there is no variance, so that factor drops out of the PK formula and you just get R = tau + lambda tau^2 / (2 (1 - rho)). If you plot it, this is what it looks like; remember, the non-linearity is still there. As lambda increases, the utilization increases, and because of the 1/(1 - rho) factor, initially the growth is slow and then the growth is fast. The second example is M/M/1, that is, exponentially distributed service time. When the service time is exponentially distributed with mean tau, the variance is actually tau squared; this is just a property of the exponential distribution, that the variance is the mean squared. You can look this up in any statistics book. So if the variance equals the mean squared, the squared coefficient of variation becomes tau squared divided by tau squared, which is 1. So the squared coefficient of variation of an exponential service-time distribution is 1. Now you stick that into the PK formula; this is the derivation.
This is where C_s squared was, and that is where we have put 1. So we have tau plus lambda tau^2 (1 + 1) divided by 2 (1 - rho). The (1 + 1) and the 2 cancel, and you get tau + lambda tau^2 / (1 - rho). Writing tau as tau (1 - rho) / (1 - rho) and combining, the numerator becomes tau - tau rho + lambda tau^2. Now lambda tau is rho for the M/M/1 queue, so lambda tau^2 is rho tau; the - tau rho and the + rho tau cancel, and you get R = tau / (1 - rho). This is actually a very important formula to remember; even if you do not remember much else in this course, remembering this one is very useful. Again, it is the one that captures the non-linearity. Let us look at the graph. This graph is for tau = 0.5, which gives mu = 2, and for that mu we take lambda varying from a low value, 0.1, up to 1.9; lambda still has to be less than mu, so we have just taken it up to some high value below mu. The plot compares M/M/1 versus M/D/1, and you can clearly see the curve corresponding to C_s squared = 1 and the curve corresponding to C_s squared = 0, with the latter lower. Everything else is the same: the mean service time is the same, and we vary lambda in the same way, the same lambda for both. But later you see a bigger divergence. Initially both response times are low, but later the M/M/1 response time becomes clearly greater than the M/D/1 response time, because C_s squared for M/M/1 is greater than C_s squared for M/D/1. We will do some more examples and exercises in your practice problems, but this is the basic story of the response time.
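The comparison in the plot can be reproduced numerically. Below is a small sketch (names are mine) using the same numbers as the slide: tau = 0.5, so mu = 2, with lambda swept up towards mu; the M/M/1 curve (Cs^2 = 1) sits above the M/D/1 curve (Cs^2 = 0), and the gap widens with utilization.

```python
def pk(lam, tau, cs2):
    """Pollaczek-Khinchine mean response time for M/G/1."""
    rho = lam * tau
    return tau + lam * tau**2 * (cs2 + 1) / (2 * (1 - rho))

tau = 0.5  # mean service time, so mu = 1/tau = 2
for lam in [0.1, 1.0, 1.5, 1.9]:
    r_md1 = pk(lam, tau, 0.0)  # deterministic service
    r_mm1 = pk(lam, tau, 1.0)  # exponential service; equals tau/(1 - rho)
    print(f"lam={lam:4.1f}  M/D/1 R={r_md1:6.3f}  M/M/1 R={r_mm1:6.3f}")
```

At lambda = 1.9, for instance, rho = 0.95 and the M/M/1 value is tau/(1 - rho) = 0.5/0.05 = 10 seconds, roughly double the M/D/1 value, even though both queues have the same mean service time and arrival rate.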
Once you have the response time, those Little's law formulae, N = lambda R and N_q = lambda W, give you the rest; in particular for the M/G/1 queue, since we know the throughput equals the arrival rate, from these two formulae you can get N and N_q. The waiting time is nothing but W = R - tau; R we get from the PK formula, and tau is given, so we can get the waiting time. So we get R and W, and from those we get N and N_q. At least for the M/G/1 queue, we actually have all the answers we need. Now, a response time formula is actually also available for the M/M/c queue, and also for M/M/c/K. We are not going to cover those in this course, but you can refer to any queuing systems textbook and find those two formulae. However, M/G/c and M/G/c/K are simply not solved exactly; you can get approximate answers, but not exact ones.

So that was the response time. We are also going to talk about some other results today; these are about arrivals, so let us get into them. There is a very interesting relationship between exponentially distributed inter-arrival times and the Poisson distribution. What is that relationship? Suppose the inter-arrival time is memoryless, which means it is exponentially distributed with rate lambda; let X be the random variable denoting the inter-arrival time and F(t) its cumulative distribution function. The slide simply shows what that CDF looks like, just for completeness. Again, we are not going to do the proof; I am just stating the result without proof, but I am showing you what the exponential distribution means: F(t) is the probability that the inter-arrival time is less than t.
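The memoryless claim can be checked directly from the exponential tail probability P(X > t) = e^(-lambda t): the probability of waiting a further t, given that you have already waited s, is the same as the unconditional probability of waiting t. A tiny sketch (the numbers are arbitrary):

```python
import math

def tail(lam, t):
    """P(X > t) for X ~ Exponential(rate=lam)."""
    return math.exp(-lam * t)

lam, s, t = 2.0, 0.7, 1.3
# P(X > s + t | X > s) = P(X > s + t) / P(X > s)
cond = tail(lam, s + t) / tail(lam, s)
print(cond, tail(lam, t))  # the two values agree: memorylessness
```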
Similarly, let A_t denote the number of arrivals to a queue in the interval 0 to t. The relationship is this: if the inter-arrival time is memoryless, then the number of arrivals has a Poisson distribution; and vice versa, if the number of arrivals A_t in an interval 0 to t has a Poisson distribution with parameter lambda t, then the inter-arrival time is exponentially distributed. What is the Poisson distribution? You might remember that if a random variable M has the Poisson distribution with some parameter alpha, then the probability that M equals k is e^(-alpha) alpha^k / k!. So here, with parameter lambda t, the probability that A_t takes the value k is e^(-lambda t) (lambda t)^k / k!. Again, to repeat what this property says: if the inter-arrival time is exponentially distributed, then the number of arrivals in an interval 0 to t has a Poisson distribution. Suppose this is a timeline showing arrivals to a queuing system, and let us say this is the interval 0 to t, with an arrival here, here, here, and here. These inter-arrival times are going to be samples of an exponential distribution, and this count, which right now is 4, is going to be a sample from a Poisson distribution with parameter lambda t. So remember: any time you see that a queue has M as its inter-arrival distribution, that means the number of arrivals in an interval 0 to t has a Poisson distribution, and sometimes, as a shortcut, we describe these arrivals as Poisson arrivals with rate lambda.
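This equivalence can be sanity-checked by simulation: draw exponential inter-arrival times with rate lambda, count arrivals in [0, t], and compare the empirical mean count with the Poisson mean lambda t. This is only an illustration, not a proof, and all names below are mine.

```python
import random

def count_arrivals(lam, t, rng):
    """Number of arrivals in [0, t] when inter-arrivals are Exp(lam)."""
    clock, n = 0.0, 0
    while True:
        clock += rng.expovariate(lam)  # next exponential inter-arrival
        if clock > t:
            return n
        n += 1

rng = random.Random(42)
lam, t = 2.0, 5.0
counts = [count_arrivals(lam, t, rng) for _ in range(20000)]
mean = sum(counts) / len(counts)
# Empirical mean count should be close to the Poisson mean lambda*t = 10.
print(f"empirical mean = {mean:.2f}, lambda*t = {lam * t:.2f}")
```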
That is how we refer to it; we do not say "the number of arrivals A_t in an interval 0 to t has a Poisson distribution", which is too long. We just say the arrivals are Poisson, or the queue has Poisson arrivals with rate lambda. This is very useful, and you will see later why this relationship matters. Now let us move on to the other properties. For the arrivals themselves there are four properties, and we will go through them one by one. The first is "Poisson arrivals see time averages", which has a very nice short form, PASTA, and it is a very interesting property. This is really why Poisson arrivals are important: if we can assume Poisson arrivals, then there are a lot of nice properties. First of all, there is a phrase here, "time average"; let me explain what a time average is. Let N(t) be the number of customers in the system at time t. The long-term time average of N(t) is the limit, as T tends to infinity, of the integral from 0 to T of N(t) dt, divided by T. This is an integral formula and maybe difficult to grasp directly, so I will explain what it means. Suppose this is a graph of N(t) versus t; let me use another colour here. Suppose there was an arrival here, an arrival here, and an arrival here; then it stayed like this for some time, then there was a departure, and then another departure here. Let us say our time frame runs from 0 to capital T. Now the question we are asking is: what is the time average of N(t) in the period 0 to T? Remember, these levels are 1 and 2, and let these time points be t1, t2, and some t3.
The time average here is this. If we were just to take an arithmetic average, we have one sample of 0 here, then N(t) was 1 for some time, then 2 for some time, then 1; at the end of the period it ended at 1. A simple average would be (0 + 1 + 2 + 1) / 4 = 1. But you can see that the value 2 was actually held by N(t) for a longer time; all this time the value of N(t) was 2. So it does not make sense to say the average is 1. The time average is something that takes into account the time for which each value is held. What will it be? Let us put numbers on this so it becomes a little clearer: say T is 10, t1 is 1, t2 is 2, and t3 is 9. Now we take the durations into account: the value 0 was held for about 1 unit of time, N(t) was 1 for another 1 unit, then N(t) was 2 for 9 minus 2 = 7 units, and N(t) was 1 again for 1 more unit, from 9 to 10. All of this we divide by the overall time horizon: (0 + 1 + 14 + 1) / 10 = 16 / 10 = 1.6. Now it makes more sense: since there were 2 customers in the system for almost 7 units of time, that has given the numerator some weight and brought the average up to 1.6. This is a better way of calculating averages of values that change over time. And when this time averaging is done over a long period, where T is large, that is the long-term time average. Now, what is a "seen" average? We are also talking here about "Poisson arrivals see time averages", so what is this seen average? Again, let the number of customers in the system at time t be given by N(t).
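The 1.6 computed above can be reproduced mechanically: for a piecewise-constant N(t), the time average is the sum of value times duration, divided by the horizon. A small sketch (function name is mine), with the segments matching the example: N = 0 on [0,1), 1 on [1,2), 2 on [2,9), 1 on [9,10).

```python
def time_average(segments, horizon):
    """Time average of a piecewise-constant function.

    segments : list of (value, duration) pairs covering [0, horizon]
    """
    total_duration = sum(d for _, d in segments)
    assert abs(total_duration - horizon) < 1e-9, "segments must cover the horizon"
    return sum(v * d for v, d in segments) / horizon

segments = [(0, 1), (1, 1), (2, 7), (1, 1)]
print(time_average(segments, 10))  # -> 1.6
```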
A seen average is an instantaneous average: it is what an arrival sees, the instantaneous average as seen by an arriving customer. It is the average of N(t) conditioned on what is called an arrival epoch. That means we only take those samples of N(t) that are actually seen by arrivals. For example, let me draw that same graph again, with N(t) going roughly like this, an arrival here and an arrival here, and levels 1 and 2. This arrival would have seen 0 in the system and this arrival would have seen 1 in the system, and in this particular time horizon we do not have any more arrivals. So for these 2 arrivals the average would be (0 + 1) / 2 = 0.5; this would be the average of N(t) as seen by arrivals. Now you can see the difference. What the PASTA property says is that these two, the long-term time average and the instantaneous average as seen by an arriving customer, are the same if the arrivals are Poisson; only for Poisson arrivals are they the same. Again, it may seem a little non-intuitive: if I observe the graph of N(t), should an arrival not see, at least on average, whatever the system's average value is? But this is not the case, and I will give an example. Consider a D/D/1 queue whose service time is 1 second and whose inter-arrival time is 2 seconds. I am going to draw a timeline; this is the server timeline, and let us start with an arrival at time 0 itself. If there is an arrival here, then for the next one second the server is going to be busy, up to time 1. The next arrival, we know, is not going to happen until time 2; that is where it happens, and the server will again be busy.
The next arrival after that happens at time 4, and again the server will be busy; the next one is at time 6. Now ask the question: what is the long-term time average here? You can see that this frame just repeats. It is a frame of two time units in which, for half the time, the server is busy and the number of customers in the system is 1, and for the rest of the time it is 0, and this repeats. So the average is (1 x 0 + 1 x 1) / 2 = 0.5, and since the frame just repeats, computing it over one frame is enough; it is intuitive, kind of obvious to everybody, that the long-term average is also 0.5. But what is seen by the arrivals? What does this request see? It sees 0 customers. And this one? Also 0; here also 0, and here also 0. So every arriving request always sees 0 in the system. So no, the time average and the instantaneous average as seen by an arrival are not the same, and they are only the same if the arrivals are Poisson. This result is called PASTA.

Now let us go to the remaining properties. We have seen this property, and we will see the other three. The first of these says that a split of a Poisson arrival stream remains Poisson. What does that mean? Consider an example: say this stream of Poisson arrivals represents packets that are coming to a router, and there are two outgoing links, this link and this other link. The router sends the packets out on the two different links, and suppose this can be captured probabilistically: with probability alpha a packet goes here, and with probability 1 minus alpha it goes here.
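The PASTA result just stated can also be checked by simulation. For an M/M/1 queue, where arrivals really are Poisson, the long-term time average of N(t) and the average seen by arrivals do coincide, unlike in the D/D/1 counterexample. A minimal event-driven sketch, with all names my own:

```python
import random

def mm1_pasta(lam, mu, n_customers, seed=1):
    """Simulate M/M/1; return (time average of N, average N seen by arrivals)."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0           # clock, number in system, integral of N dt
    next_arr = rng.expovariate(lam)
    next_dep = float("inf")            # no departure pending while empty
    seen, arrivals = [], 0
    while arrivals < n_customers:
        t_next = min(next_arr, next_dep)
        area += n * (t_next - t)       # accumulate N(t) over the elapsed interval
        t = t_next
        if next_arr <= next_dep:       # arrival event
            seen.append(n)             # state found by this arriving customer
            n += 1
            arrivals += 1
            next_arr = t + rng.expovariate(lam)
            if n == 1:                 # server was idle: start a service
                next_dep = t + rng.expovariate(mu)
        else:                          # departure event
            n -= 1
            next_dep = t + rng.expovariate(mu) if n > 0 else float("inf")
    return area / t, sum(seen) / len(seen)

# rho = 0.5, so the true mean number in system is rho/(1-rho) = 1.0;
# both estimates should come out close to 1.0 and close to each other.
time_avg, seen_avg = mm1_pasta(1.0, 2.0, 200000)
print(f"time average = {time_avg:.3f}, seen by arrivals = {seen_avg:.3f}")
```

Running the same style of simulation with deterministic inter-arrivals instead would reproduce the D/D/1 mismatch: time average 0.5, but every arrival seeing 0.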
What this property says is that if the arrival stream of packets is Poisson, then each split stream also remains Poisson, with the intuitive rates: this stream has rate alpha lambda and this one has rate (1 minus alpha) lambda. It is again a very useful property, as we will see later; right now you might wonder what the use of it is, but I can give a straightforward example. Suppose we wanted to model one of these links as a queuing system. Then we know that the arrivals to it are also Poisson. So if we could assume, for example, an infinite buffer and a single server, we could model it as an M/G/1 queue; even though there has been a split, we can be confident that this is also an M/G/1 queue, and the same holds for the other link. So if a single Poisson stream gets split, we can assume that each split stream is also Poisson, and if there are queuing systems after the split that we need to model, we can assume Poisson arrivals to those queuing systems as well. That is the use of this property. Similarly, there is a property about superposition of Poisson arrivals; this is basically the opposite. If there are two streams of Poisson arrivals coming into one place where they merge, and again you can think of packets coming from different links and merging onto one link, then the sum of the two streams, the merging of the two streams, in other words the superposition of the two arrival streams, also remains Poisson, with rate equal to the sum of the two rates. Why is this useful? Because if we want to model that merged link as a queuing system, we will again be able to model it as M/G/1 and have the advantage of Poisson arrivals. The last property is this one: the output of an M/M/1 queue is Poisson. You can see that these are all properties about Poisson arrivals.
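The splitting property can be illustrated numerically: thin a Poisson stream of rate lambda by keeping each point independently with probability alpha, and the kept stream is again Poisson with rate alpha lambda. The sketch below (names mine) only checks one consequence of that, namely that the mean inter-arrival gap of the kept stream is about 1/(alpha lambda):

```python
import random

rng = random.Random(7)
lam, alpha, n = 4.0, 0.25, 200000
t, kept = 0.0, []
for _ in range(n):
    t += rng.expovariate(lam)   # original Poisson stream: Exp(lam) gaps
    if rng.random() < alpha:    # route this packet to link 1 with prob alpha
        kept.append(t)

gaps = [b - a for a, b in zip(kept, kept[1:])]
mean_gap = sum(gaps) / len(gaps)
# Expected mean gap of the thinned stream: 1/(alpha*lam) = 1.0
print(f"mean gap = {mean_gap:.3f}, 1/(alpha*lam) = {1 / (alpha * lam):.3f}")
```

A full check would also verify that the kept gaps are exponentially distributed, not just that their mean matches; matching the mean is the quick sanity test.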
The first one was that Poisson arrivals see time averages, which is very, very useful, because we do not have to do separate mathematics for what an arrival sees at arrival epochs versus at any other time. This is very convenient: if we can do the maths and find, for example, the average number of customers in the system, which is just the time average, then, if the arrivals are Poisson, we can confidently say that that is exactly the average an arriving customer is also going to see. In fact, this is used in the derivation of the PK formula; again, we have not done the derivation, but if you look in some textbooks you will see that this property is actually used there. The other properties we will use in subsequent classes, where we are going to learn something called queuing networks. So what is this last property? Basically it is about the inter-departure time distribution. If you have a queue with some arrivals, imagine we are looking at a departure timeline: there are going to be some departures between, say, times 0 and t. We can ask two questions: what is the inter-departure time distribution, or, if D_t is the number of departures in time 0 to t, what is the probability distribution of that number? For an M/G/1 queue we know that the inter-arrival time distribution is exponential, which means the arrivals are Poisson. So the question is: what can we say about the departures? If we know the arrivals are Poisson, can we say something about the departure process? It turns out that in general, for an M/G/1 queue, we cannot say anything; but if the service time is also memoryless, that is, for the M/M/1 queue, then the departures are also Poisson. So if the arrivals are Poisson and the service time is exponential, the departures are also Poisson.
Again, this is a property that helps us in putting together queuing networks that can be analyzed, and we will see this in some subsequent classes. This result actually has a name: it is called Burke's theorem, and it says that the output of an M/M/1 queue is also Poisson. So this concludes this lecture, and next we will be doing some examples. We have learnt a lot of results in this lecture today, so we will do some examples, and we will also continue our case study of the web server. Thank you.