In the last class, we discussed the central limit theorem. We said that if you have a sequence of independent, identically distributed random variables and you take their sum, centralize it, normalize it, and let n go to infinity, the limit is no longer a constant: it is a random variable with a Gaussian distribution of mean 0 and variance 1. Now we need to see how this limit theorem is useful. You will see more of its uses later, but let me first highlight a few. One is this: suppose you take some real number a and want to compute the probability that (S_n − nμ)/√(nσ²) is less than or equal to a, where μ is the common mean of the random variables and σ² is their common variance. How do we compute this? One approximation we can make: when n is very large, this quantity behaves like a Gaussian with mean 0 and variance 1. By the way, since the Gaussian distribution with mean 0 and variance 1 comes up so many times, it has been given a special name: the standard normal distribution. We know that as n tends to infinity this quantity converges to a random variable with the standard normal distribution; when n is finite but large, we can treat it as approximately normal, and that gives us an approximation: the probability above can be computed as the integral of the standard normal pdf from minus infinity to a.
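As a quick sketch of how this approximation is used in practice, here is a small Python snippet. Everything in it is illustrative and my own choice, not from the lecture: the helper name `phi`, the use of uniform samples (for which μ = 1/2 and σ² = 1/12), and the parameters n and a. It relies on the standard identity Φ(a) = (1 + erf(a/√2))/2 and compares the CLT approximation against an empirical frequency from simulation.

```python
import math
import random

def phi(a):
    # Standard normal cdf, via the identity Phi(a) = (1 + erf(a / sqrt(2))) / 2.
    return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

# Illustrative setup (my choice, not from the lecture):
# x_i uniform on [0, 1], so mu = 1/2 and sigma^2 = 1/12.
n, a = 100, 1.0
mu, sigma = 0.5, math.sqrt(1.0 / 12.0)

random.seed(0)
trials = 10000
hits = 0
for _ in range(trials):
    s_n = sum(random.random() for _ in range(n))
    # Centralize and normalize the sum, as in the CLT statement.
    if (s_n - n * mu) / (sigma * math.sqrt(n)) <= a:
        hits += 1

print(phi(a))         # CLT approximation of the probability, about 0.841
print(hits / trials)  # empirical frequency from simulation, close to phi(a)
```

Even for a moderate n like 100, the empirical frequency lands close to Φ(1), which is exactly the point of using the approximation for finite n.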
The integral of this form, where we integrate the pdf of the standard normal distribution from minus infinity to a, also appears many times, so it too has been given a special notation: Φ(a). For us, Φ(a) is the integral of the standard normal pdf over the interval from minus infinity to a, or equivalently, Φ(a) is the area under the standard normal density up to the point a. If you draw the bell-shaped pdf, centered at its mean 0, with the width of the curve determined by the variance, then for any point a the area under the curve to the left of a is what we call Φ(a). In other words, Φ(a) is nothing but the cdf of the standard normal distribution evaluated at the point a. This function has one nice property: if you take a and −a, which are symmetric around 0, then Φ(a) + Φ(−a) = 1. Why is that? Because of the symmetry of the density: the area to the left of −a is the same as the area to the right of a. So Φ(−a) is exactly the area that remains after taking Φ(a), and since the total area under a pdf must equal 1, adding the two gives 1.
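The symmetry property Φ(a) + Φ(−a) = 1 is easy to check numerically. A minimal sketch (the helper name `phi` is my own):

```python
import math

def phi(a):
    # Standard normal cdf via erf (helper name is my own).
    return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

# Phi(a) + Phi(-a) should equal 1 for every a, by symmetry of the density.
for a in [0.0, 0.5, 1.0, 2.0, 3.3]:
    print(a, phi(a) + phi(-a))  # second column is 1.0 up to floating-point error
```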
By the way, Φ does not have a closed-form expression; we can only represent it as an integral, but we can compute it numerically. Because of that, this function usually comes in the form of tables, just as you have log tables: for different values of a, Φ(a) is tabulated, and you can use these tables to compute such probabilities. Now let us put this function to work. I am interested in the probability that (S_n − nμ)/(σ√n) is less than or equal to a. Assume we have n random variables x_1, x_2, …, x_n, each with expectation μ and a known variance σ²; the variance need not be 1, only known, and the one parameter we are interested in is the mean. Take S_n to be the sum of the x_i's. A simple manipulation, dividing through by n, turns the event S_n − nμ ≤ aσ√n into S_n/n − μ ≤ aσ/√n; you can check that this rearrangement is correct. And what is S_n/n? It is the average of the samples, which I will call μ̂_n: the sum of the x_i's divided by n, taken as our estimate of the parameter μ. So the event becomes μ̂_n − μ ≤ aσ/√n. We know what μ̂_n goes to as n goes to infinity: it goes to μ. Why is that? Because of the law of large numbers.
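Since Φ has no closed form, tables are the traditional tool, but nowadays you can generate such a table yourself numerically. A minimal sketch, again with my own helper name `phi`:

```python
import math

def phi(a):
    # Standard normal cdf via erf; the cdf itself has no closed-form expression.
    return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

# A miniature version of a standard-normal table, e.g. Phi(1.0) = 0.8413.
for a in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    print(f"Phi({a:.1f}) = {phi(a):.4f}")
```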
But here we are dealing with a finite n, not n going to infinity, so I cannot call the average μ; I call it μ̂_n. What we are now looking at is the difference between the finite-sample average and its limiting value μ. How big is this difference? We are saying that the probability of this difference being smaller than aσ/√n is approximately Φ(a), where Φ(a) comes from our earlier relation. So by using the central limit theorem, we have a way to characterize how far μ̂_n, for a particular n, is from its limiting value. Now let us see how this relation will be useful. Go back to the example we studied in our last class, the factory output. The factory produces a certain number of items, and earlier I assumed its weekly average is about 500 items. But suppose you do not know the mean value, and you want to find the mean number of items produced by the factory. How can you do it? Each week you observe how many items came out of the factory; say you observe the weekly output for 100 weeks, giving you 100 samples. Assume it is the same factory throughout, so the outputs are generated according to the same underlying distribution.
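To make the relation concrete, here is a hedged Python sketch of the resulting rule of thumb: for a tolerance ε, set a = ε√n/σ, so that P(μ̂_n − μ ≤ ε) ≈ Φ(ε√n/σ). The function names and example parameters (σ = 1, ε = 0.1) are my own choices for illustration, not from the lecture.

```python
import math

def phi(a):
    # Standard normal cdf via erf.
    return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

def prob_dev_at_most(eps, n, sigma):
    # CLT approximation of P(mu_hat_n - mu <= eps):
    # take a = eps * sqrt(n) / sigma in P(mu_hat_n - mu <= a*sigma/sqrt(n)) ~ Phi(a).
    return phi(eps * math.sqrt(n) / sigma)

# Illustrative numbers (sigma = 1 is an assumption): more samples make
# the same tolerance more likely to hold.
print(prob_dev_at_most(0.1, 100, 1.0))  # Phi(1), about 0.841
print(prob_dev_at_most(0.1, 400, 1.0))  # Phi(2), about 0.977
```

Note the design of the rule: quadrupling the sample size doubles a, which is why the same tolerance ε becomes much more probable as n grows.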
So the samples are all identically distributed, and let us assume they are also independent: every week the factory starts afresh, so the output of one week does not influence what is produced the next week. From these 100 samples, you want to identify the mean weekly output of the factory. So what we are basically trying to do is find the mean output by observing 100 weeks of data, which I denote x_1, x_2, …, x_100. I want to find the expectation of x_i, some value μ that I do not know. One thing I do know: the average (x_1 + ⋯ + x_n)/n goes to μ as n goes to infinity. But unfortunately I cannot take n to infinity; I can only take (x_1 + ⋯ + x_100)/100. Call this value μ̂_100. Now I want to know how far this is from the true value μ. So I ask: will the difference μ̂_100 − μ be below 0.1 and above −0.1? That is, I want to calculate the probability that |μ̂_100 − μ| is less than or equal to 0.1. Put another way: will the mean value I obtained by averaging these 100 values be within 0.1 of the true value? What is the probability that the error remains within 0.1?
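Using the symmetry property of Φ, the two-sided event can be handled as P(|μ̂_100 − μ| ≤ 0.1) ≈ Φ(a) − Φ(−a) = 2Φ(a) − 1 with a = 0.1·√100/σ. A sketch in Python; since the lecture has not fixed σ for the factory, the value σ = 1 below is purely an illustrative assumption, as are the function names:

```python
import math

def phi(a):
    # Standard normal cdf via erf.
    return 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))

def prob_within(eps, n, sigma):
    # CLT approximation of the two-sided event:
    # P(|mu_hat_n - mu| <= eps) ~ 2 * Phi(eps * sqrt(n) / sigma) - 1.
    return 2.0 * phi(eps * math.sqrt(n) / sigma) - 1.0

# Factory example: 100 weekly samples, tolerance 0.1, sigma assumed to be 1.
print(prob_within(0.1, 100, 1.0))  # 2 * Phi(1) - 1, about 0.683
```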