So far we have discussed the problem of estimating parameters from the point of view of providing a point estimator. By a point estimator we mean that we assign a single value; for example, when we have a normal distribution with mean μ we use X̄ as an estimator. Based on a sample x₁, x₂, …, xₙ, X̄ takes a single value, but this single value may or may not be accurate, because the true value is not known. Therefore one may consider providing a range of values in place of a single value; that is, we consider an interval based on the sample. If we assign an interval, then there is a probability associated with that interval, and this leads to a general concept called a confidence interval. So in interval estimation we consider confidence intervals, because we may propose any interval, say (a, b), for estimating a parametric function g(θ), but then we have to qualify this interval somehow: for average longevity I may propose the interval 55 to 65, somebody else may propose 58 to 62, and so on. To compare various intervals we need to introduce the concept of probability here. So let X₁, X₂, …, Xₙ be a random sample from a population with distribution P_θ, where θ ∈ Θ, and write X = (X₁, X₂, …, Xₙ). Let T₁(X) and T₂(X) be two statistics such that P(T₁(X) ≤ θ ≤ T₂(X)) = 1 − α. Then (T₁(x), T₂(x)) is called a 100(1 − α)% confidence interval for θ when X = x is observed.
Basically, by 100(1 − α)% confidence we mean that if we repeat the sampling 100 times, then 100(1 − α)% of the time — say 95% of the time — the true value is likely to lie in the interval (T₁(x), T₂(x)). Now naturally the question is how to find such an interval. There are two optimality criteria for confidence intervals. One is the shortest-length confidence interval for a fixed confidence coefficient (1 − α is called the confidence coefficient): if I fix the confidence coefficient, what is the shortest interval which will have probability 1 − α? The other is, for a fixed length, which interval attains the maximum probability of coverage. Neyman related the problem of the shortest-length confidence interval to optimal tests, or best tests, of hypotheses — he connected this problem to optimal testing problems. In this particular course we will consider only the main problems of confidence interval estimation, that is, the problems related to the normal distribution and so on. The procedures developed here are in fact the best procedures — you can say the shortest-length procedures for a fixed confidence coefficient. However, I will not describe the full method for deriving that; rather, we will use a method called the method of pivoting for deriving the confidence intervals, and you will see that this method is extremely simple — it is based on the sampling distributions that have been developed for normal populations. So let X₁, X₂, …, Xₙ be a random sample from a N(μ, σ²) population; we will find a confidence interval for μ. There can be two cases. Case 1: σ² is known, so this is a one-parameter problem. Consider X̄; then X̄ follows N(μ, σ²/n).
So √n(X̄ − μ)/σ follows N(0, 1). Consider the standard normal curve and the points z_{α/2} and −z_{α/2}: the probability beyond each of these points is α/2, so the middle probability is 1 − α. Here z_β denotes the upper 100β% point of the standard normal distribution, that is, P(Z > z_β) = β when Z ~ N(0, 1). So we can write the statement — call it (1) — P(−z_{α/2} ≤ √n(X̄ − μ)/σ ≤ z_{α/2}) = 1 − α. This is the same as P(−(σ/√n) z_{α/2} ≤ X̄ − μ ≤ (σ/√n) z_{α/2}) = 1 − α, which can be further written as P(X̄ − (σ/√n) z_{α/2} ≤ μ ≤ X̄ + (σ/√n) z_{α/2}) = 1 − α. Comparing this statement with P(T₁(X) ≤ θ ≤ T₂(X)) = 1 − α, you can see that X̄ − (σ/√n) z_{α/2} acts as T₁(X) and X̄ + (σ/√n) z_{α/2} acts as T₂(X). That means we have the confidence limits for the mean of a normal distribution: X̄ − (σ/√n) z_{α/2} to X̄ + (σ/√n) z_{α/2}. If in place of X̄ you put the observed value x̄, that becomes the observed 100(1 − α)% confidence interval for μ. Now, it may happen that σ is unknown. If σ is unknown then I cannot make use of these confidence limits, so in this case we consider S² also.
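To make this concrete, here is a small sketch in Python of the known-σ interval X̄ ± (σ/√n) z_{α/2} (assuming scipy is available; the sample summary fed in — x̄ = 100, σ = 15, n = 25 — is made up for illustration):

```python
from math import sqrt
from scipy.stats import norm

def z_interval(xbar, sigma, n, alpha=0.05):
    """100(1 - alpha)% confidence interval for mu when sigma is known:
    xbar +/- (sigma / sqrt(n)) * z_{alpha/2}."""
    z = norm.ppf(1 - alpha / 2)        # upper alpha/2 point of N(0, 1)
    half = sigma / sqrt(n) * z
    return xbar - half, xbar + half

# hypothetical sample summary for illustration
lo, hi = z_interval(xbar=100.0, sigma=15.0, n=25)
```

Note that `norm.ppf(1 - alpha/2)` is the upper α/2 point z_{α/2}, since `ppf` returns the lower quantile.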
If you remember, in the discussion of sampling distributions I introduced the distribution of S². If S² = (1/(n − 1)) Σ(Xᵢ − X̄)², then (n − 1)S²/σ² follows a chi-square distribution on n − 1 degrees of freedom; also, X̄ and S² are independently distributed. So if we consider √n(X̄ − μ)/σ divided by √((n − 1)S²/σ² · 1/(n − 1)), that is, √n(X̄ − μ)/S, it has a t distribution on n − 1 degrees of freedom. Now, if you look at the t density, it is also symmetric, so if we take t_{n−1,α/2} on one side and −t_{n−1,α/2} on the other, the middle probability is 1 − α. So we can construct the confidence interval using this: P(−t_{n−1,α/2} ≤ √n(X̄ − μ)/S ≤ t_{n−1,α/2}) = 1 − α. As in the previous case you can manipulate this to get P(−(S/√n) t_{n−1,α/2} ≤ X̄ − μ ≤ (S/√n) t_{n−1,α/2}) = 1 − α, or P(X̄ − (S/√n) t_{n−1,α/2} ≤ μ ≤ X̄ + (S/√n) t_{n−1,α/2}) = 1 − α. So X̄ − (S/√n) t_{n−1,α/2} to X̄ + (S/√n) t_{n−1,α/2} is a 100(1 − α)% confidence interval for μ. In a similar way we can obtain confidence intervals for σ² also. Here again I consider two cases; case 1 is when μ is known.
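The unknown-σ interval X̄ ± (S/√n) t_{n−1,α/2} can be sketched the same way (assuming scipy; the data list is made up for illustration):

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t

def t_interval(xs, alpha=0.05):
    """100(1 - alpha)% confidence interval for mu when sigma is unknown:
    xbar +/- (S / sqrt(n)) * t_{n-1, alpha/2}."""
    n = len(xs)
    xbar, s = mean(xs), stdev(xs)      # stdev uses the n - 1 divisor, like S
    half = s / sqrt(n) * t.ppf(1 - alpha / 2, df=n - 1)
    return xbar - half, xbar + half

# hypothetical observations for illustration
lo, hi = t_interval([10, 12, 9, 11, 13])
```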
Now, if X₁, X₂, …, Xₙ follow N(μ, σ²), then by the linearity property, Yᵢ = (Xᵢ − μ)/σ follows N(0, 1), so Y₁, Y₂, …, Yₙ are independent and identically distributed N(0, 1) random variables. Hence ΣYᵢ² = Σ(Xᵢ − μ)²/σ² follows a chi-square distribution on n degrees of freedom. Notice that since μ is known, the numerator is a known quantity, and the expression involves the parameter σ² for which the confidence interval is required. Now look at the nature of the pdf of the chi-square distribution. Here there is a difference from the t and normal distributions: those were symmetric, while the chi-square is not. So we consider two points: χ²_{n,α₁}, with probability α₁ to its right, and χ²_{n,α₂}, with probability α₂ to its right (so 1 − α₂ to its left). The probability in between is then 1 − α, which means 1 − α = (1 − α₁) − (1 − α₂) = α₂ − α₁. One practical solution is to choose α₁ = α/2 and α₂ = 1 − α/2; in that case you can see the middle probability is indeed 1 − α. There can be many solutions here: whereas in the case of the confidence interval for μ we had the shortest length, here the shortest length is not ensured, and you can make many different choices, but a practical solution is this one. This choice also takes care of the usage of the probability tables for the chi-square distribution, because the percentage points of chi-square are tabulated.
So if you have to make use of the tables then this is a much better solution. We can write P(χ²_{n,1−α/2} ≤ Σ(Xᵢ − μ)²/σ² ≤ χ²_{n,α/2}) = 1 − α. This is equivalent to P(Σ(Xᵢ − μ)²/χ²_{n,α/2} ≤ σ² ≤ Σ(Xᵢ − μ)²/χ²_{n,1−α/2}) = 1 − α. So Σ(Xᵢ − μ)²/χ²_{n,α/2} to Σ(Xᵢ − μ)²/χ²_{n,1−α/2} is a 100(1 − α)% confidence interval for σ². This is the case when μ is known; if μ is unknown then we cannot use this, and we make use of S² instead. So consider the case when μ is unknown. Here (n − 1)S²/σ² follows a chi-square distribution on n − 1 degrees of freedom, so in place of the earlier points I now consider χ²_{n−1,α/2} and χ²_{n−1,1−α/2}. We have P(χ²_{n−1,1−α/2} ≤ (n − 1)S²/σ² ≤ χ²_{n−1,α/2}) = 1 − α. Arguing as before, this is equivalent to P((n − 1)S²/χ²_{n−1,α/2} ≤ σ² ≤ (n − 1)S²/χ²_{n−1,1−α/2}) = 1 − α. So the confidence limits for σ² in this case turn out to be (n − 1)S²/χ²_{n−1,α/2} to (n − 1)S²/χ²_{n−1,1−α/2}; this is a 100(1 − α)% confidence interval for σ². Let us take one example here.
Suppose we are recording battery capacities, and the data are 140, 136, 150, 144, 148, 152, 138, 141, 143, 151, so n = 10 here, from a N(μ, σ²) population. Let us calculate a confidence interval for σ² in this particular problem. We can check that s² turns out to be 32.23. Suppose I take α = 0.01; then I need the chi-square points on 9 degrees of freedom. From the tables of the chi-square distribution, χ²_{9,0.005} = 23.59 and χ²_{9,0.995} = 1.73. So we calculate (n − 1)s²/χ²_{9,0.005} to (n − 1)s²/χ²_{9,0.995}, and you can check that this equals 12.30 to 167.21. These are the confidence limits for σ²; this is a 99% confidence interval for σ². Now, this was about one-sample problems where the underlying population is taken to be normal. Actually, the method I have shown here is applicable to other distributions also. Basically, we construct a function which involves the observations as well as the parameter of interest, but whose distribution is free of the parameter; using both of these facts we obtain the confidence interval easily. This is called the method of pivoting. Let me give an application where we are dealing with a non-normal population. Suppose X₁, X₂, …, Xₙ is a random sample from the uniform distribution on the interval (0, θ). We can actually construct the confidence interval in various ways, but I will use the sufficient statistic: X₍ₙ₎, the largest observation, is sufficient here, and we know its distribution — in fact, in the previous lecture I gave the form of this distribution. Let us consider Y = X₍ₙ₎/θ.
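The battery-capacity example can be checked directly (assuming scipy; the data below are the ten values as read out in the lecture):

```python
from statistics import variance
from scipy.stats import chi2

# battery-capacity data from the example (n = 10)
xs = [140, 136, 150, 144, 148, 152, 138, 141, 143, 151]
n = len(xs)
s2 = variance(xs)                         # divisor n - 1, as in the lecture

alpha = 0.01
upper_pt = chi2.ppf(1 - alpha / 2, n - 1) # chi^2_{9, 0.005}, about 23.59
lower_pt = chi2.ppf(alpha / 2, n - 1)     # chi^2_{9, 0.995}, about 1.73
ci = ((n - 1) * s2 / upper_pt, (n - 1) * s2 / lower_pt)
```

Running this reproduces s² ≈ 32.23 and the 99% interval (12.30, 167.21) quoted above.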
Then the probability density function of Y is f(y) = n y^(n−1) for y between 0 and 1. Now let us choose two points, say g₁(α) and g₂(α), such that P(g₁(α) < Y < g₂(α)) = 1 − α. Since the integral of the density gives yⁿ, this is equivalent to g₂ⁿ − g₁ⁿ = 1 − α. So if we choose g₂ = 1 and g₁ = α^(1/n), then we get P(α^(1/n) < X₍ₙ₎/θ < 1) = 1 − α, which is equivalent to saying P(X₍ₙ₎ < θ < X₍ₙ₎/α^(1/n)) = 1 − α. So a 100(1 − α)% confidence interval for θ is (X₍ₙ₎, α^(−1/n) X₍ₙ₎). Of course, the choice I have made for g₁ and g₂ is quite arbitrary; we may choose them in some other way so that the probability is still 1 − α, and that would lead to different confidence intervals, but all of them will have confidence coefficient 1 − α. Let me take one more example of a non-normal population. Let X₁, X₂, …, Xₙ be a random sample from the exponential distribution Exp(λ); that means I am considering the probability density function λe^(−λx). By sufficiency we consider ΣXᵢ = Y, say, which has a gamma distribution with parameters n and λ; the density of Y is (λⁿ/Γ(n)) e^(−λy) y^(n−1). Now I define W = 2λY. What is the density of W?
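A quick simulation sketch can illustrate that the interval (X₍ₙ₎, α^(−1/n) X₍ₙ₎) really covers θ about 100(1 − α)% of the time (pure standard library; θ, n, and the trial count are arbitrary choices for illustration):

```python
import random

random.seed(0)
alpha, n, theta = 0.05, 8, 5.0
trials, cover = 20000, 0
for _ in range(trials):
    mx = max(random.uniform(0, theta) for _ in range(n))   # X_(n)
    lo, hi = mx, mx * alpha ** (-1 / n)    # (X_(n), alpha^{-1/n} X_(n))
    if lo < theta < hi:
        cover += 1
rate = cover / trials                      # should be close to 1 - alpha
```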
It is (λⁿ/Γ(n)) e^(−w/2) (w/(2λ))^(n−1) (1/(2λ)); here λⁿ cancels out and you get (1/(2ⁿ Γ(n))) e^(−w/2) w^(n−1) for w > 0. We can represent this in the form (1/(2^(2n/2) Γ(2n/2))) e^(−w/2) w^(2n/2 − 1) for w > 0. So what we have proved is that W has a chi-square distribution on 2n degrees of freedom. Now, again we can make use of W as a pivot, because W involves the observations in the form of Y = ΣXᵢ, and it also involves the parameter of interest. So we can construct confidence intervals for λ, for 1/λ, and so on by making use of this chi-square distribution on 2n degrees of freedom, and once again for convenience we may take the points χ²_{2n,α/2} and χ²_{2n,1−α/2}, so that the middle probability is 1 − α. We get P(χ²_{2n,1−α/2} ≤ W ≤ χ²_{2n,α/2}) = 1 − α, that is, P(χ²_{2n,1−α/2} ≤ 2λY ≤ χ²_{2n,α/2}) = 1 − α. So for λ we get the limits χ²_{2n,1−α/2}/(2Y) to χ²_{2n,α/2}/(2Y); if we want limits for 1/λ we take reciprocals, so 1/λ lies between 2Y/χ²_{2n,α/2} and 2Y/χ²_{2n,1−α/2}, again with probability 1 − α. So confidence intervals for λ as well as 1/λ can be obtained in terms of ΣXᵢ and the percentage points of the chi-square distribution on 2n degrees of freedom. This pivoting method is an extremely practical method for obtaining confidence intervals for various distributions. So far I have considered one-sample problems.
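The exponential pivot translates directly into code (assuming scipy; the sample used in the call is a made-up list whose sum is easy to check by hand):

```python
from scipy.stats import chi2

def lambda_interval(xs, alpha=0.05):
    """CI for the rate lambda of an Exp(lambda) sample, via the pivot
    W = 2 * lambda * sum(x_i) ~ chi^2 on 2n degrees of freedom."""
    n, y = len(xs), sum(xs)
    # chi2.ppf(alpha/2) is the lower point chi^2_{2n, 1-alpha/2}
    return (chi2.ppf(alpha / 2, 2 * n) / (2 * y),
            chi2.ppf(1 - alpha / 2, 2 * n) / (2 * y))

# hypothetical data for illustration: n = 10, sum = 5
lo, hi = lambda_interval([0.5] * 10)
```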
Now, there are many situations where we deal with two populations and our interest is to compare, for example, the means of the two populations. You can think of the average income levels of two different countries, which I call μ₁ and μ₂. If I want to compare μ₁ and μ₂, a simple measure is the difference μ₁ − μ₂, and therefore we would like to estimate μ₁ − μ₂ and we may require confidence intervals for μ₁ − μ₂. Similarly we may consider variability: for example, there are two different instruments for measuring something. The mean measurement may be the same, but the variability may differ, because the precision of the two machines depends on their make. If the makes are quite different then σ₁² and σ₂² may be different, and we would like to consider the relative precision, that is, σ₁²/σ₂²; therefore we would like to set up a confidence interval for the ratio of the variances. So let us consider these two-sample problems related to normal populations. Suppose we have a random sample X₁, X₂, …, X_m from N(μ₁, σ₁²) and a random sample Y₁, Y₂, …, Yₙ from N(μ₂, σ₂²), and we assume that the samples are independent. We want confidence intervals for, say, ξ = μ₁ − μ₂. Let us consider the different possibilities. The first case is that σ₁² and σ₂² are known. In this case X̄ follows N(μ₁, σ₁²/m) and Ȳ follows N(μ₂, σ₂²/n), so the difference X̄ − Ȳ follows N(μ₁ − μ₂, σ₁²/m + σ₂²/n).
So X̄ − Ȳ follows N(ξ, τ²), where ξ denotes μ₁ − μ₂ and τ² is nothing but σ₁²/m + σ₂²/n, and hence (X̄ − Ȳ − ξ)/τ follows N(0, 1). Once again we can use this as the pivotal quantity and look at the standard normal curve with the points z_{α/2} and −z_{α/2}, so that the middle probability is 1 − α. To construct the confidence interval for ξ we consider P(−z_{α/2} ≤ (X̄ − Ȳ − ξ)/τ ≤ z_{α/2}) = 1 − α, which is equivalent to P(X̄ − Ȳ − τ z_{α/2} ≤ ξ ≤ X̄ − Ȳ + τ z_{α/2}) = 1 − α. So the confidence limits for μ₁ − μ₂ are X̄ − Ȳ ± τ z_{α/2}. Naturally, when σ₁² and σ₂² are not known I cannot make use of τ, because τ involves σ₁² and σ₂². So let us consider the case when σ₁² and σ₂² are unknown; here again there are two possibilities — they may be unknown but equal, or they may be unequal — and we consider these two cases separately. First take σ₁² = σ₂² = σ², say. In this case X̄ − Ȳ follows N(ξ, σ²(1/m + 1/n)), that is, N(ξ, σ²(m + n)/(mn)). So √(mn/(m + n)) (X̄ − Ȳ − ξ)/σ follows the N(0, 1) distribution. Now we also consider the sample variances.
Let us define S₁² = (1/(m − 1)) Σ(Xᵢ − X̄)² and S₂² = (1/(n − 1)) Σ(Yⱼ − Ȳ)²; these are the sample variances from the two populations. Now we look at the distributions of S₁² and S₂². Since in sampling from a normal population the sample variance has a chi-square distribution, (m − 1)S₁²/σ² follows chi-square on m − 1 degrees of freedom and (n − 1)S₂²/σ² follows chi-square on n − 1 degrees of freedom. Also, since the two samples are taken to be independent, S₁² and S₂² are independent. As a consequence I can use the additive property of the chi-square distribution, and ((m − 1)S₁² + (n − 1)S₂²)/σ² follows chi-square on m + n − 2 degrees of freedom. So we define S_p² = ((m − 1)S₁² + (n − 1)S₂²)/(m + n − 2), the pooled sample variance; basically (m + n − 2)S_p²/σ² follows a chi-square distribution on m + n − 2 degrees of freedom. Now, in sampling from normal populations the sample means and the sample variances are independently distributed; therefore the distribution of √(mn/(m + n)) (X̄ − Ȳ − ξ)/σ is independent of the distribution of (m + n − 2)S_p²/σ². Let us give them names: call the first Z and the second W, so Z and W are independent. Then we can construct Z/√(W/(m + n − 2)), which becomes √(mn/(m + n)) (X̄ − Ȳ − ξ)/S_p, and this follows a t distribution on m + n − 2 degrees of freedom.
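The pooled-variance pivot can be sketched as follows (assuming scipy; the two short data lists are made up for illustration):

```python
from math import sqrt
from statistics import mean, variance
from scipy.stats import t

def pooled_t_interval(xs, ys, alpha=0.05):
    """CI for mu1 - mu2 with unknown but equal variances, using the
    pooled variance Sp^2 and the t distribution on m + n - 2 df."""
    m, n = len(xs), len(ys)
    sp2 = ((m - 1) * variance(xs) + (n - 1) * variance(ys)) / (m + n - 2)
    half = sqrt(sp2 * (1 / m + 1 / n)) * t.ppf(1 - alpha / 2, m + n - 2)
    diff = mean(xs) - mean(ys)
    return diff - half, diff + half

# hypothetical samples for illustration
lo, hi = pooled_t_interval([1, 2, 3], [2, 3, 4])
```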
Once again we can use the form of the pdf of the t distribution and easily construct the confidence interval for ξ: the middle probability is 1 − α, and we write P(−t_{m+n−2,α/2} ≤ √(mn/(m + n)) (X̄ − Ȳ − ξ)/S_p ≤ t_{m+n−2,α/2}) = 1 − α. So the confidence limits turn out to be X̄ − Ȳ − √((m + n)/(mn)) S_p t_{m+n−2,α/2} and X̄ − Ȳ plus the same quantity, and we are able to set up the confidence limits for μ₁ − μ₂. Now, this was under the assumption that the variances of the two normal populations are unknown but equal. That assumption actually facilitated the use of the additive property of the chi-square distribution, because I was able to add the two terms. If the variances are not equal, then σ₁² and σ₂² appear in the two terms separately — I get (m − 1)S₁²/σ₁² + (n − 1)S₂²/σ₂² — and the cancellation that happened by taking the ratio with σ will not take place, so the terms do not pool. Therefore the problem becomes a little complicated; in fact we do not have an exact confidence interval here, in the sense of a best solution, and we use an approximate one. Call it case 3: σ₁² and σ₂² are completely unknown and unequal. In that case consider another statistic, say T* = (X̄ − Ȳ − ξ)/√(S₁²/m + S₂²/n). This has approximately a t distribution on p degrees of freedom, where p ≈ (S₁²/m + S₂²/n)² / (S₁⁴/(m²(m − 1)) + S₂⁴/(n²(n − 1))). Naturally this is not an integer, so we take the integral part of the term on the right-hand side.
So this is an approximate procedure, and it was developed by Welch and also by Satterthwaite. Based on T* we again get confidence limits for ξ: X̄ − Ȳ ± √(S₁²/m + S₂²/n) t_{p,α/2}. There is another case. In all the cases so far we have considered the samples to be independent, but there are also situations where you cannot assume that the two samples are independent. Consider, for example, the effect of a medicine on patients who have diabetes. The blood sugar levels are measured before they start the treatment; after taking the medicine for a month, their blood sugar levels are measured again. Suppose there are 10 patients. The blood sugar level of patient 1 before the treatment will be related to his blood sugar level after the treatment, and similarly for patient 2, patient 3, and so on. That means we should consider the observations as, in some sense, paired observations; I call it case 4, paired observations. So I am considering pairs (X₁, Y₁), (X₂, Y₂), …, (Xₙ, Yₙ), and we assume a bivariate normal model with means μ₁, μ₂ and variance-covariance matrix having σ₁², σ₂² on the diagonal and ρσ₁σ₂ off the diagonal. In the example I gave, x is the blood sugar level before the treatment and y is the level after the treatment; the data are on patients 1, 2, …, n, with x values x₁, x₂, …, xₙ and y values y₁, y₂, …, yₙ. Naturally this data cannot be considered independent: the value y₁ will certainly be related to x₁, because depending on the constitution of the patient, the effect on him will be different from the effect on patient 2 or patient 3, and so on. So this is the case of paired observations.
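The Welch-Satterthwaite interval, with the degrees of freedom truncated to an integer as in the lecture, can be sketched like this (assuming scipy; the two samples are made up to have very different spreads):

```python
from math import sqrt, floor
from statistics import mean, variance
from scipy.stats import t

def welch_interval(xs, ys, alpha=0.05):
    """Approximate CI for mu1 - mu2 with unknown, unequal variances
    (Welch-Satterthwaite degrees of freedom)."""
    m, n = len(xs), len(ys)
    v1, v2 = variance(xs) / m, variance(ys) / n   # S1^2/m and S2^2/n
    p = (v1 + v2) ** 2 / (v1 ** 2 / (m - 1) + v2 ** 2 / (n - 1))
    p = floor(p)                       # integral part, per the lecture
    half = sqrt(v1 + v2) * t.ppf(1 - alpha / 2, p)
    diff = mean(xs) - mean(ys)
    return diff - half, diff + half

# hypothetical samples with unequal spread, for illustration
lo, hi = welch_interval([1, 2, 3, 4], [10, 20, 30, 40])
```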
Now, in this case the methodology described till now will not be applicable, because in all those developments I assumed independence. Our aim is still the same — to set up a confidence interval for μ₁ − μ₂ — but I can use the linearity property of the bivariate normal distribution. If I consider Vᵢ = Xᵢ − Yᵢ, then Vᵢ follows a normal distribution with mean ξ = μ₁ − μ₂ and variance σ₁² + σ₂² − 2ρσ₁σ₂, which we may write as τ², say. So V₁, V₂, …, Vₙ follow N(ξ, τ²), and the problem reduces to the one-sample case: we can base the interval estimation on V₁, V₂, …, Vₙ. Define S_v² = (1/(n − 1)) Σ(Vᵢ − V̄)². If we use the formula developed for the one-sample problem — from the previous sheet it was X̄ − (S/√n) t_{n−1,α/2} to X̄ + (S/√n) t_{n−1,α/2} — then the confidence limits here are V̄ − (S_v/√n) t_{n−1,α/2} to V̄ + (S_v/√n) t_{n−1,α/2}. So basically, when we have paired observations, to obtain confidence limits for the mean difference we take the differences of the observations, calculate V̄ = (1/n) ΣVᵢ and S_v², the sample variance based on the differences, and construct the confidence interval treating this as a one-sample problem; in this way we get the confidence limits for this problem also. We also have the problem of the ratio of the variances.
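The paired recipe — difference the pairs, then apply the one-sample t formula — can be sketched as follows (assuming scipy; the before/after sugar levels are invented for illustration, not real patient data):

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t

def paired_interval(xs, ys, alpha=0.05):
    """CI for mu1 - mu2 from paired data: work with the differences
    v_i = x_i - y_i and apply the one-sample t interval to them."""
    vs = [x - y for x, y in zip(xs, ys)]
    n = len(vs)
    vbar, sv = mean(vs), stdev(vs)     # V-bar and S_v (n - 1 divisor)
    half = sv / sqrt(n) * t.ppf(1 - alpha / 2, n - 1)
    return vbar - half, vbar + half

# hypothetical before/after measurements for illustration
before = [180, 165, 200, 190, 170]
after = [170, 160, 185, 180, 165]
lo, hi = paired_interval(before, after)
```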
So what about the confidence interval for the variance ratio? Again we can make use of S₁² and S₂². Let me demonstrate: we want confidence intervals for, say, η = σ₂²/σ₁², or equivalently 1/η = σ₁²/σ₂² — it is the same thing. We have (m − 1)S₁²/σ₁² following chi-square on m − 1 degrees of freedom and (n − 1)S₂²/σ₂² following chi-square on n − 1 degrees of freedom, and these two are independent. If they are independent I can construct the ratio of the two, each divided by its degrees of freedom: [(m − 1)S₁²/σ₁²]/(m − 1) divided by [(n − 1)S₂²/σ₂²]/(n − 1). The factors cancel, and we get (σ₂²/σ₁²)(S₁²/S₂²), which follows an F distribution on (m − 1, n − 1) degrees of freedom. The F distribution is also a positively skewed distribution, so we consider the points F_{m−1,n−1,α/2} and F_{m−1,n−1,1−α/2}: P(F_{m−1,n−1,1−α/2} ≤ η S₁²/S₂² ≤ F_{m−1,n−1,α/2}) = 1 − α. So we get the confidence limits (S₂²/S₁²) F_{m−1,n−1,1−α/2} to (S₂²/S₁²) F_{m−1,n−1,α/2}; these are the confidence limits for η. These are the popular applications, because we assume the normal model, and as I already mentioned, because of the central limit theorem the normal distribution plays a central role in the theory of statistics; therefore these methods became very popular and are commonly used. However, another popular setting is when we have qualitative data — we have responses and we may use the binomial model.
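The F-based interval for η = σ₂²/σ₁² can be sketched like this (assuming scipy; the two samples are contrived so that S₂²/S₁² = 4 exactly):

```python
from statistics import variance
from scipy.stats import f

def variance_ratio_interval(xs, ys, alpha=0.05):
    """CI for eta = sigma2^2 / sigma1^2 via the pivot
    (sigma2^2 / sigma1^2) * (S1^2 / S2^2) ~ F(m - 1, n - 1)."""
    m, n = len(xs), len(ys)
    ratio = variance(ys) / variance(xs)   # S2^2 / S1^2
    # f.ppf(alpha/2) is the lower point F_{m-1, n-1, 1-alpha/2}
    return (ratio * f.ppf(alpha / 2, m - 1, n - 1),
            ratio * f.ppf(1 - alpha / 2, m - 1, n - 1))

# hypothetical samples for illustration
lo, hi = variance_ratio_interval([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```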
In the next lecture I will introduce confidence intervals for proportions, the difference of proportions, and so on, and then we will move on to the problem of testing of hypotheses, which I will cover in the next lectures.