And we also said that the whole purpose of doing statistics, the reason for developing so much of the science of statistics, is to understand the population through a random sample. We said that if the population distribution function is known to us up to the level of its form, then we say that it is a case of parametric estimation, where we have to estimate the parameters of the distribution. But if the form of the distribution itself is not known, then we call it a non-parametric case. In the present course, we are going to consider only the parametric case. Then we introduced what is known as the sample mean. Without assuming any particular form of the distribution, we assumed that the population distribution f has a mean mu and standard deviation sigma, and then we introduced a statistic called the sample mean. We looked at its expected value and its variance, and at the central limit theorem. We basically said that its expected value is the same as the population mean, and its variance is the population variance divided by the size of the sample. And by the central limit theorem we said that if n is very large, the sample mean will tend to a normal distribution with mean as the population mean.
We also recalled two properties, the sum of independent normal variables and the sum of independent chi-square random variables. We will use these properties to derive the sampling distributions from a normal population, and we will see that, if the sample is drawn from a normal population, what the distribution of the sample mean is and what the distribution of the sample variance is. We had already seen the sample variance in the first few lectures on descriptive statistics; this is exactly the same definition. Let X1, X2, ..., Xn be an independent random sample from a common distribution f with mean value mu and variance sigma squared; then the sample variance is defined as shown here. S square is equal to 1 over n minus 1, multiplied by the summation of xi minus x bar whole square, where x bar is the sample mean. This can be simplified, and it can be written as n minus 1 times capital S square is equal to summation xi square minus n times x bar whole square. Let us try to find its expected value. With this we get that n minus 1 times the expected value of S square is equal to the expected value of summation of xi square minus n times x bar whole square. So you take the expectation of each of these components, and we end up with n times the expectation of x1 square minus n times the expectation of x bar square. Please understand why the expectation of summation xi square turns into n times the expectation of x1 square: because x1, x2, ..., xn are all distributed identically as f with mean value mu and variance sigma square. So they are identical.
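The algebraic identity just stated, n minus 1 times S square equals summation xi square minus n times x bar square, can be checked numerically. The following is a minimal sketch of my own (the sample and all variable names are illustrative, not from the lecture), using only Python's standard library:

```python
import random

random.seed(0)
n = 10
# Draw an arbitrary normal sample with mu = 5, sigma = 2 (values chosen freely).
sample = [random.gauss(5.0, 2.0) for _ in range(n)]

x_bar = sum(sample) / n
# Sample variance S^2 with the n - 1 divisor, as defined in the lecture.
s_sq = sum((x - x_bar) ** 2 for x in sample) / (n - 1)

# Check the identity: (n - 1) * S^2 == sum(x_i^2) - n * x_bar^2.
lhs = (n - 1) * s_sq
rhs = sum(x ** 2 for x in sample) - n * x_bar ** 2
```

The two sides agree up to floating-point rounding, whatever sample is drawn.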
So, instead of taking the summation of n of them, I can as well take n times the expected value of x1, or any one of them, even x2. So x1 itself is not important; what matters is that the common expected value from the distribution f is being taken. Now we apply the general rule for random variables that the expected value of any random variable w squared is the variance of w plus the square of the expected value of w. Applying this to each component of the expected value of S square, we find this, and it finally simplifies, which you can verify very easily through simple algebra: the expected value of S square, that is, the expected value of the sample variance, is the population variance. We will learn in future that S square is then called an unbiased estimator of the population variance: when the expected value of a statistic such as S square is exactly the population variance sigma square, S square is called an unbiased estimator of sigma square. We will learn more about it in future. Now let us recall a few things which we have mentioned in the past; and in case we have not, let us start now. Let x1, x2, ..., xn be independent normal random variables with means mu 1, mu 2, ..., mu n and variances sigma 1 square, sigma 2 square, ..., sigma n square. Then the sum of these random variables is also distributed as normal, with mean the sum of the means and variance the sum of the variances. Please remember, when random variables are independent, the variance of their sum is the sum of their variances. That is what we have used here.
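Both claims in this passage can be checked by simulation. Here is a hedged sketch (all parameter values and names are my own choices, not the lecture's) that estimates the expected value of S square over many repeated samples, and checks that the variance of a sum of independent normals is the sum of the variances:

```python
import random
import statistics

random.seed(1)

# 1) Unbiasedness: the average of S^2 over many samples should approach sigma^2.
mu, sigma, n, reps = 0.0, 3.0, 5, 20000
s_sq_values = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    s_sq_values.append(statistics.variance(xs))  # variance() uses the n - 1 divisor
mean_s_sq = sum(s_sq_values) / reps              # expect about sigma^2 = 9

# 2) Sum of independent normals: Var(X1 + X2) = Var(X1) + Var(X2).
sums = [random.gauss(1.0, 1.0) + random.gauss(2.0, 2.0) for _ in range(reps)]
var_sum = statistics.variance(sums)              # expect about 1 + 4 = 5
```

Note that `statistics.variance` already divides by n minus 1, matching the lecture's definition of S square.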
Next we consider the case where x1, x2, ..., xn are independent chi-square random variables with degrees of freedom k1, k2, ..., kn. Then the sum x1 plus x2 plus ... plus xn is also chi-square, with degrees of freedom the sum of the degrees of freedom. This should be very obvious, because chi-square with k degrees of freedom has itself been defined as a sum of squares of standard normal random variables, so the additive nature of independent chi-square random variables comes very naturally. Now let us consider what we have been doing so far. We said that f is some distribution with mean mu and variance sigma square. Now I am fixing the form of f and saying that it is a normal distribution; that is, I am now sampling from a normal population. So let x1, x2, ..., xn be independent identically distributed normal random variables with mean mu and variance sigma square. Then what is the distribution of the sample mean? Well, the sum of independent normal variables is distributed as normal; therefore the sample mean is also distributed as normal, with this mean and variance, because, remember, and here I think I am telling you something very obvious, the expected value of a times x is a times the expected value of x. This gives you that the expected value of x bar is mu and the variance of x bar is sigma square over n, which we have already proved. And therefore, since the sum of independent normal variables is distributed as normal, the sample mean is distributed like this, and further, normalizing the random variable x bar with respect to its mean and variance, we get that x bar minus mu, divided by sigma over square root n, is distributed as normal 0 1. Please remember, forming x bar minus mu divided by sigma over square root n is called normalizing the random variable x bar; it is also known as standardizing the random variable x bar.
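The claim about the sample mean can be illustrated by simulation. This is a sketch under my own assumed parameters: drawing many samples from a normal population and standardizing each sample mean should give values with mean near 0 and variance near 1.

```python
import math
import random
import statistics

random.seed(2)
mu, sigma, n, reps = 10.0, 4.0, 16, 20000

# Standardize the sample mean: Z = (x_bar - mu) / (sigma / sqrt(n)).
z_values = []
for _ in range(reps):
    x_bar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    z_values.append((x_bar - mu) / (sigma / math.sqrt(n)))

z_mean = sum(z_values) / reps           # expect about 0
z_var = statistics.variance(z_values)   # expect about 1
```

If x bar were divided by sigma alone instead of sigma over square root n, the variance would come out near 1/n rather than 1, which is one way to see why the square root n belongs in the standardization.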
So, for any random variable w, taking w minus the expected value of w, divided by the standard deviation of w, is what normalization refers to. And here, of course, it is not just a normalization; the result actually has a normal distribution. So this defines the distribution of the sample mean: when the population itself is a normal population, the sample mean is distributed normally with mean the population mean and variance the population variance divided by the size of the sample, and if you take x bar minus mu divided by sigma over square root n, it is distributed as normal 0 1. Now, for the distribution of the sample variance, we have to do some calculation in order to understand it. So let us start. We start with the summation of xi minus mu whole square. I can add and subtract the sample mean inside it, and then this simplifies to the expression shown. Please remember, what is used here is that the summation of xi minus x bar is equal to 0; this identity is used to get this term. Therefore, if you divide throughout by sigma square, what you get is: the summation of xi minus mu divided by sigma, whole square, is equal to the summation of xi minus x bar divided by sigma, whole square, plus square root n times x bar minus mu divided by sigma, whole square; the square root n appears because the factor n is taken inside the square. So I have basically divided this by sigma square and this by sigma square, and I get this identity. You see, this is very beautiful. Can you see that, because xi is distributed normally with mean mu and variance sigma square, xi minus mu over sigma whole square is a chi-square variate, because xi minus mu over sigma itself is a standard normal variate? It is distributed as normal 0 1. Agreed?
We have already shown that this is distributed as normal 0 1, and each individual term is distributed as normal 0 1, and then we square it. So each squared term becomes a chi-square, and since it involves only one standard normal variate, it is chi-square with one degree of freedom. Here, on the other side, you are summing up such chi-squares: each individual term is distributed as chi-square with 1 degree of freedom, and they are all independent, because the xi's are independent, and therefore the sum of n such chi-squares gives you a chi-square with n degrees of freedom. Shall I explain it again? Let us start from the summation of xi minus mu whole square, where i runs from 1 to n; I have not written it here, but I think it is obvious that i goes from 1 to n. First we take the difference xi minus mu whole square, summed over i equal to 1 to n. I add and subtract the sample mean x bar, and then it simplifies to the summation of xi minus x bar whole square plus n times x bar minus mu whole square; the n comes from the summation, because this term does not depend on i. So it is n times the square of the sample mean minus the population mean. Then I divide both sides by sigma square, and what I get is: the summation over i equal to 1 to n of xi minus mu divided by sigma, whole square, is equal to the summation of xi minus x bar divided by sigma, whole square, plus, taking the n inside the square as a square root, square root n times x bar minus mu divided by sigma, whole square. Now, what I am saying is that the inner part, xi minus mu over sigma, because we have said that xi comes from a normal population with mean mu and variance sigma square, is a standard normal random variable. So xi minus mu over sigma is distributed as normal 0 1.
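The add-and-subtract decomposition used above is an exact algebraic identity, so it can be verified on any concrete sample. A small sketch (sample and parameters are my own illustration):

```python
import random

random.seed(3)
mu, sigma, n = 2.0, 1.5, 12
xs = [random.gauss(mu, sigma) for _ in range(n)]
x_bar = sum(xs) / n

# Identity: sum (x_i - mu)^2  ==  sum (x_i - x_bar)^2 + n * (x_bar - mu)^2.
# The cross term vanishes because sum (x_i - x_bar) = 0.
lhs = sum((x - mu) ** 2 for x in xs)
rhs = sum((x - x_bar) ** 2 for x in xs) + n * (x_bar - mu) ** 2
```

Dividing both sides by sigma square then gives exactly the chi-square decomposition discussed in the lecture.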
Similarly, here we know that x bar is distributed as a normal random variable with mean mu and variance sigma square divided by n. Therefore x bar minus mu, divided by sigma over square root n, is also distributed as normal 0 1; in other words, it is a standard normal random variate. Now we take its square. If you take the square of one single standard normal variate, it is distributed as chi-square with 1 degree of freedom. Here, on the left, we take n of these standard normal variates, square them, and sum them up. Remember, each xi is independent, so each xi minus mu divided by sigma is independent, and therefore the terms xi minus mu divided by sigma whole square are independent for i equal to 1, 2, ..., n. You are summing up n independent chi-square random variates, and therefore the sum becomes chi-square with n degrees of freedom. The question is: what is the distribution of the remaining term? Now we use the fact that the sum of two independent chi-square random variables with degrees of freedom n and m is a chi-square random variable with degrees of freedom n plus m. Using that, we can very reasonably say that this term should be distributed as chi-square with n minus 1 degrees of freedom. I repeat: if two independent chi-square random variables have degrees of freedom n and m respectively, then their sum is a chi-square random variable with degrees of freedom n plus m. So if you consider that a chi-square random variable with 1 degree of freedom is added to something, and the result is a chi-square random variable with n degrees of freedom, then we can reasonably conclude that this something has to be a chi-square random variable with n minus 1 degrees of freedom. And this is our argument. I have written it down again.
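The whole degrees-of-freedom argument can be written compactly. This is my own notation for the decomposition the lecture describes verbally:

```latex
% Decomposition from the lecture (notation mine):
\sum_{i=1}^{n}\left(\frac{X_i-\mu}{\sigma}\right)^2
  \;=\;
  \underbrace{\sum_{i=1}^{n}\left(\frac{X_i-\bar{X}}{\sigma}\right)^2}_{=\,(n-1)S^2/\sigma^2}
  \;+\;
  \underbrace{\left(\frac{\bar{X}-\mu}{\sigma/\sqrt{n}}\right)^2}_{\sim\,\chi^2_{1}}
% The left-hand side is chi-square with n degrees of freedom and the last term
% is chi-square with 1 degree of freedom, so by the additivity of independent
% chi-squares the middle sum must be chi-square with n - 1 degrees of freedom.
```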
This is the same as the previous one. As we argued before, this term is chi-square with 1 degree of freedom, and this one is chi-square with n degrees of freedom. Since the sum of two independent chi-square random variables is also chi-square, with degrees of freedom the sum of their degrees of freedom, it is reasonable to conclude that the middle one is also chi-square, with n minus 1 degrees of freedom. So, if we look at this carefully, remember that this quantity is n minus 1 times S square divided by sigma square, and we are saying that it is distributed as chi-square with n minus 1 degrees of freedom. That is, n minus 1 times S square over sigma square is distributed as chi-square with n minus 1 degrees of freedom. So, quickly: we saw that if you assume that the population distribution is a normal distribution with mean mu and variance sigma square, then the sample mean is also distributed as a normal distribution, with mean mu and variance sigma square divided by n, the size of the sample. And the sample variance in that case, when multiplied by n minus 1 and divided by sigma square, is distributed as a chi-square distribution with n minus 1 degrees of freedom. In other words, this also shows that the expected value of S square is sigma square, which is what we had shown earlier as well; note that there should be n minus 1 here, because this quantity divided by n minus 1 is S square, so this is n minus 1 times S square divided by sigma square. This brings us to another distribution. You remember we introduced the t distribution. There we said that if z is a standard normal variate and y is a chi-square random variable with n degrees of freedom, then z divided by the square root of y over n is distributed as a t distribution with n degrees of freedom.
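The conclusion that n minus 1 times S square over sigma square is chi-square with n minus 1 degrees of freedom can be probed by simulation, since a chi-square with k degrees of freedom has mean k and variance 2k. A sketch under my own assumed parameters:

```python
import random
import statistics

random.seed(4)
mu, sigma, n, reps = 0.0, 2.0, 8, 20000

# Simulate (n - 1) * S^2 / sigma^2 from repeated normal samples.
# Chi-square with n - 1 = 7 degrees of freedom has mean 7 and variance 14.
vals = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    vals.append((n - 1) * statistics.variance(xs) / sigma ** 2)

mean_val = sum(vals) / reps          # expect about n - 1 = 7
var_val = statistics.variance(vals)  # expect about 2 * (n - 1) = 14
```

The mean being n minus 1 is exactly the unbiasedness result from earlier: the expected value of n minus 1 times S square over sigma square is n minus 1, so the expected value of S square is sigma square.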
Now, you see, what we take as z is the sample mean minus the population mean divided by the standard deviation of the sample mean, that is, x bar minus mu divided by sigma over square root n, which is distributed as normal 0 1. We take y as n minus 1 times the sample variance divided by the population variance; then we know that it is distributed as chi-square with n minus 1 degrees of freedom. Therefore the t random variable, which we can define as x bar minus mu divided by S over square root n, is distributed as a t distribution with n minus 1 degrees of freedom. Please notice the similarity between this definition of the random variable and the earlier one. You see that when sigma is unknown, and in future we are going to deal with that case, that is, when the population standard deviation is not known, if you replace it by its estimated value, the sample standard deviation, then instead of the standardized or normalized random variable having a standard normal distribution, it will have a t distribution with n minus 1 degrees of freedom. Again, this is what we are going to use in future, with respect to interval estimation as well as hypothesis testing. So please remember what I have said: if you have a sample of size n from a normal distribution with mean mu and standard deviation sigma, then the sample mean minus mu divided by sigma over square root n, that is, the standardized or normalized sample mean, is distributed as normal 0 1; but in case sigma is not known and you replace sigma by the sample standard deviation, then the same normalized sample mean, with the estimate of the population standard deviation in place of sigma, follows a t distribution with n minus 1 degrees of freedom.
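The effect of replacing sigma by the sample standard deviation can also be seen by simulation: the resulting t statistic has heavier tails than a standard normal, so its variance exceeds 1. This is a sketch with my own choice of parameters; a t distribution with k degrees of freedom has variance k over k minus 2.

```python
import math
import random
import statistics

random.seed(5)
mu, sigma, n, reps = 0.0, 1.0, 7, 40000

# T = (x_bar - mu) / (S / sqrt(n)) follows a t distribution with n - 1 = 6
# degrees of freedom, whose variance is 6 / (6 - 2) = 1.5, compared with
# variance 1 for the standardized mean using the true sigma.
t_vals = []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    x_bar = sum(xs) / n
    s = math.sqrt(statistics.variance(xs))  # sample standard deviation S
    t_vals.append((x_bar - mu) / (s / math.sqrt(n)))

t_mean = sum(t_vals) / reps           # expect about 0
t_var = statistics.variance(t_vals)   # expect about 1.5, clearly above 1
```

The extra spread is the price of estimating sigma from the same small sample, and it is exactly what the wider t critical values account for in interval estimation and hypothesis testing.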
So now let us summarize. We first introduced the sample variance and its expected value without assuming any form of the distribution; we only said that the population distribution is f with a common mean mu and standard deviation sigma. Then we made the assumption that the population is a normal population with mean mu and variance sigma square, and we said that the sample mean is then also distributed as a normal distribution with mean mu and variance sigma square by n. I am sorry, there is a mistake on the slide: it should be sigma square by n, not sigma square by square root n; please correct it. Then we found that the sample variance, with a certain multiplication, is distributed as a chi-square distribution with n minus 1 degrees of freedom. And I think I should make a correction here also, because as written it gives a wrong impression: what we really mean to say is that n minus 1 times S square over sigma square is distributed as chi-square with n minus 1 degrees of freedom. Please make this correction; sorry for the mistake. Then we revisited the t distribution by stating that the ratio of the difference between the sample mean and the population mean to the sample standard deviation divided by square root n is distributed as a t distribution with n minus 1 degrees of freedom. Next we will consider further results on sampling distributions. Thank you.