Assalamualaikum, students. Today we are discussing the random sample and expected values in multivariate analysis. Let X₁, X₂, …, Xₙ be a random sample from a joint distribution with mean vector μ and variance–covariance matrix Σ. We have already discussed the mean vector μ and the variance–covariance matrix Σ in earlier lectures. The theorem says that the sample mean vector X̄ is an unbiased estimator of μ. You have seen in the univariate case how E(x̄) = μ; similarly, in the multivariate case E(X̄) = μ and Cov(X̄) = Σ/n. That is, the expected value of the sample mean equals the population mean, and the variance of the sample mean equals the variance–covariance matrix divided by n. Also, if Sₙ is the sample variance–covariance matrix,

Sₙ = (1/n) ∑ᵢ₌₁ⁿ Xᵢ Xᵢ′ − X̄ X̄′   (the prime denotes transpose),

then E(Sₙ) = ((n − 1)/n) Σ. This is what we have in the theorem, and we have to prove it: E(X̄) = μ and Cov(X̄) = Σ/n. Proof. We know that X̄ = (1/n)(X₁ + X₂ + ⋯ + Xₙ); this is the vector form. To find its mean, we apply expectation on both sides: E(X̄) = (1/n) E(X₁ + X₂ + ⋯ + Xₙ) = (1/n) ∑ᵢ₌₁ⁿ E(Xᵢ), since the expectation is applied to each random vector in the sum.
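As a quick sanity check (not from the lecture slides), a short simulation sketch with assumed values of μ and Σ shows the averaged sample mean vector settling near μ, which is what E(X̄) = μ says:

```python
import numpy as np

# Assumed 2-dimensional population parameters, chosen only for illustration.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

rng = np.random.default_rng(0)
n = 10        # sample size
reps = 20000  # number of repeated samples

# For each repetition, draw X_1, ..., X_n and form x_bar = (1/n) * sum of X_i.
# Shape of the draw is (reps, n, 2); averaging over axis 1 gives each x_bar.
x_bars = rng.multivariate_normal(mu, Sigma, size=(reps, n)).mean(axis=1)

# Averaging x_bar over many repetitions approximates E(x_bar), which the
# theorem says equals mu.
print(np.round(x_bars.mean(axis=0), 2))
```

The printed vector sits very close to the assumed μ = (1, −2).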
Now we know that E(Xᵢ) = μ for every i, and μ is a constant vector, so summing it over i multiplies it n times. This gives E(X̄) = nμ/n = μ, and the first part of the proof is over. Next come the cross-product terms. Expanding,

(X̄ − μ)(X̄ − μ)′ = [(1/n) ∑ᵢ₌₁ⁿ (Xᵢ − μ)] [(1/n) ∑ⱼ₌₁ⁿ (Xⱼ − μ)]′ = (1/n²) ∑ᵢ₌₁ⁿ ∑ⱼ₌₁ⁿ (Xᵢ − μ)(Xⱼ − μ)′,

so the double sum contains the cross-product terms. By definition, Cov(X̄) = E[(X̄ − μ)(X̄ − μ)′]. Opening the sum as we did above, Cov(X̄) = (1/n²) ∑ᵢ ∑ⱼ E[(Xᵢ − μ)(Xⱼ − μ)′]. Since Xᵢ and Xⱼ are independent for i ≠ j, the cross terms vanish: E[(Xᵢ − μ)(Xⱼ − μ)′] = 0 for all i ≠ j. Only the terms with i = j survive, so

Cov(X̄) = (1/n²) ∑ᵢ₌₁ⁿ E[(Xᵢ − μ)(Xᵢ − μ)′] = (1/n²) ∑ᵢ₌₁ⁿ Σ.
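The theorem's claim Cov(X̄) = Σ/n can also be checked by simulation; here is a sketch with assumed parameters (again not from the slides): the empirical covariance of the repeated sample means, scaled back up by n, should reproduce Σ.

```python
import numpy as np

# Assumed population parameters, for illustration only.
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

rng = np.random.default_rng(1)
n = 25
reps = 40000

# Draw `reps` independent samples of size n; keep each sample mean vector.
x_bars = rng.multivariate_normal(mu, Sigma, size=(reps, n)).mean(axis=1)

# The empirical covariance of x_bar across repetitions should sit near
# Sigma / n, so multiplying by n should land near Sigma itself.
cov_xbar = np.cov(x_bars, rowvar=False)
print(np.round(cov_xbar * n, 2))
```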
Now where did this Σ come from? We derived E[(Xᵢ − μ)(Xᵢ − μ)′] = Σ in the summary-statistics lectures, so I have entered that value here directly. Since Σ is constant with respect to the summation, summing it n times gives nΣ, and one n cancels out:

Cov(X̄) = (1/n²) · nΣ = Σ/n.

Further, we have to obtain the expected value of the sample variance Sₙ. First note that (xᵢⱼ − x̄ⱼ)(xᵢₖ − x̄ₖ) is the (j, k)-th element of the matrix (Xᵢ − X̄)(Xᵢ − X̄)′. The matrix of sums of squares and cross products can therefore be written, as we saw in summary statistics and in the previous lecture, as

∑ᵢ₌₁ⁿ (Xᵢ − X̄)(Xᵢ − X̄)′ = ∑ᵢ₌₁ⁿ Xᵢ Xᵢ′ − n X̄ X̄′.

We also solved in summary statistics that E(Xᵢ Xᵢ′) = Σ + μμ′, and we found previously that E[(X̄ − μ)(X̄ − μ)′] = Σ/n. Expanding the latter gives E(X̄ X̄′) − μμ′ = Σ/n, that is, E(X̄ X̄′) = Σ/n + μμ′. With these values in hand, we now take the expected value of ∑ᵢ₌₁ⁿ (Xᵢ − X̄)(Xᵢ − X̄)′.
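The sums-of-squares decomposition used here is a purely algebraic identity, so it can be verified on any small data matrix; the numbers below are arbitrary, chosen only to illustrate:

```python
import numpy as np

# Arbitrary data: n = 4 observations on p = 2 variables.
X = np.array([[1.0, 2.0],
              [3.0, 0.0],
              [4.0, 5.0],
              [0.0, 1.0]])
n = X.shape[0]
x_bar = X.mean(axis=0)

# Left side: sum over i of (X_i - x_bar)(X_i - x_bar)'.
lhs = sum(np.outer(xi - x_bar, xi - x_bar) for xi in X)

# Right side: sum over i of X_i X_i' minus n * x_bar x_bar'.
rhs = sum(np.outer(xi, xi) for xi in X) - n * np.outer(x_bar, x_bar)

print(np.allclose(lhs, rhs))  # True
```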
We opened this sum on the previous slide: ∑ᵢ₌₁ⁿ (Xᵢ − X̄)(Xᵢ − X̄)′ = ∑ᵢ₌₁ⁿ Xᵢ Xᵢ′ − n X̄ X̄′. Now apply expectation inside the bracket and put in the values we have just determined:

E[∑ᵢ₌₁ⁿ (Xᵢ − X̄)(Xᵢ − X̄)′] = ∑ᵢ₌₁ⁿ E(Xᵢ Xᵢ′) − n E(X̄ X̄′) = n(Σ + μμ′) − n(Σ/n + μμ′) = nΣ + nμμ′ − Σ − nμμ′ = (n − 1)Σ.

Look: the constant terms nμμ′ cancel out, and n multiplied by Σ/n leaves a single Σ, so after simplification we get nΣ − Σ = (n − 1)Σ. Since Sₙ = (1/n) ∑ᵢ₌₁ⁿ (Xᵢ − X̄)(Xᵢ − X̄)′, it follows that E(Sₙ) = ((n − 1)/n) Σ. So Sₙ is a biased estimator of Σ; multiplying by the constant n/(n − 1) gives the unbiased sample variance–covariance matrix S = (n/(n − 1)) Sₙ, with E(S) = Σ. If you remember, in the univariate case we had E(S²) = σ², where S² is the sample variance with divisor n − 1. Similarly, in the multivariate case the expectation of the sample variance equals the population variance–covariance matrix once this constant multiplies it.
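To see the (n − 1)/n factor numerically, here is a simulation sketch with assumed parameters: averaging the divisor-n estimator Sₙ over many samples lands near ((n − 1)/n)Σ, and rescaling by n/(n − 1) recovers Σ.

```python
import numpy as np

# Assumed population parameters (illustration only).
mu = np.array([1.0, 0.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])

rng = np.random.default_rng(2)
n = 5
reps = 20000

# Draw all samples at once: shape (reps, n, 2).
X = rng.multivariate_normal(mu, Sigma, size=(reps, n))
D = X - X.mean(axis=1, keepdims=True)      # deviations X_i - x_bar per sample

# S_n with divisor n for every repetition: (1/n) * sum_i D_i D_i'.
S_n = np.einsum('rij,rik->rjk', D, D) / n  # shape (reps, 2, 2)
S_n_avg = S_n.mean(axis=0)                 # Monte Carlo estimate of E(S_n)

# E(S_n) = ((n - 1)/n) * Sigma, so the rescaled average approximates Sigma.
print(np.round(S_n_avg * n / (n - 1), 2))
```

With n = 5 the bias is large (a factor of 4/5), which is why a small sample size makes the correction easy to see.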