In the last lecture, I introduced the concept of testing of hypotheses. We saw the Neyman–Pearson approach, in which the probabilities of type 1 error and type 2 error are considered, and test procedures are devised by putting a restriction on one type of error, usually the type 1 error; we call this the size of the test. Subject to the test functions satisfying the size condition, we find those tests which have the maximum power, and they are called most powerful tests. The solution was first obtained for simple versus simple hypothesis cases, and later these procedures were extended to certain types of composite hypotheses, for which uniformly most powerful and uniformly most powerful unbiased tests were also devised. In place of giving the full details of the derivations, I will basically explain the test procedures that have been obtained in this way and how to use them. So, let us consider testing for the parameters of normal populations. Let x 1, x 2, ..., x n follow a normal (mu, sigma square) distribution, and consider testing for mu. Case 1: sigma square is known. We consider various kinds of hypotheses; the first problem, which I will call H 1 versus K 1, is H 1: mu less than or equal to mu naught against K 1: mu greater than mu naught. In this case we use x bar, which follows normal (mu, sigma square by n), so root n (x bar minus mu) by sigma follows normal (0, 1). We therefore consider the test statistic z equal to root n (x bar minus mu naught) by sigma. If z alpha denotes the upper alpha point of the standard normal distribution, then the probability that z is greater than z alpha is equal to alpha when mu is equal to mu naught.
So, the uniformly most powerful test of size alpha for testing H 1 against K 1 is: reject H 1 when z is greater than z alpha, where z is root n (x bar minus mu naught) by sigma. Actually, it can be shown that the supremum over mu less than or equal to mu naught of the probability that z exceeds z alpha is attained at mu equal to mu naught, so the size condition holds; and since this is a composite hypothesis situation, we say it is the uniformly most powerful test of size alpha. There are variations: in place of H 1, mu less than or equal to mu naught, if we take H 1 star: mu equal to mu naught versus K 1: mu greater than mu naught, then the same test procedure is applicable. The main reason is that the maximization of the type 1 error probability occurs at mu equal to mu naught, and the power is decided by the alternative; therefore the test function and the test procedure do not change. We accept H 1 if z is less than or equal to z alpha; the equality here has no significance, because the probability that z equals z alpha is 0, z being a continuous random variable. Naturally, one may ask what happens if we interchange the null and alternative hypotheses. Here alpha is the maximum probability of type 1 error, that is, of rejecting when the null hypothesis is true. If the other kind of error is considered more serious, we may interchange the hypotheses and consider, say, H 2: mu greater than or equal to mu naught against K 2: mu less than mu naught.
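As a numerical illustration — my own sketch, not part of the lecture — the one-sided z-test can be written in a few lines of Python. The sample mean, size, and sigma below are hypothetical, and SciPy is assumed only for the normal quantile:

```python
import math
from scipy.stats import norm

def one_sided_z_test(xbar, mu0, sigma, n, alpha=0.05):
    """Reject H1: mu <= mu0 in favour of K1: mu > mu0 when z > z_alpha."""
    z = math.sqrt(n) * (xbar - mu0) / sigma
    z_alpha = norm.ppf(1 - alpha)  # upper-alpha point of N(0, 1)
    return z, bool(z > z_alpha)

# Hypothetical numbers: n = 25, sample mean 10.8, mu0 = 10, known sigma = 2
z, reject = one_sided_z_test(xbar=10.8, mu0=10.0, sigma=2.0, n=25)
```

Here z works out to 2.0, which exceeds z alpha ≈ 1.645 at alpha = 0.05, so H 1 is rejected.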
I have interchanged the roles of the null and alternative hypotheses, but the equality is included in the null hypothesis. In this case the rejection will be on the left side: reject H 2 if z is less than or equal to minus z alpha, and accept H 2 otherwise. Equivalently, we may test H 2 star: mu equal to mu naught against K 2: mu less than mu naught. Once again, the probability that z is less than or equal to minus z alpha when mu equals mu naught is alpha, and for a general mu in the null region the maximum of this probability is attained at mu equal to mu naught; therefore the size is alpha, and this is the uniformly most powerful test of size alpha. Now, there may be situations where we do not want to test greater than or less than, but rather whether a value is equal or not. In that case we formulate the hypothesis testing problem as H 3: mu equal to mu naught against K 3: mu not equal to mu naught. Naturally, in this case the rejection region will be on both sides, determined by z alpha by 2 and minus z alpha by 2. The uniformly most powerful unbiased test of size alpha is: reject H 3 if the modulus of z is greater than z alpha by 2, where z is the same quantity, root n (x bar minus mu naught) by sigma; in the intermediate region we accept H 3. Now, in case sigma square is unknown, this z cannot be used.
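The two-sided rule can be sketched the same way; again this is my illustration with hypothetical numbers, not the lecture's own computation:

```python
import math
from scipy.stats import norm

def two_sided_z_test(xbar, mu0, sigma, n, alpha=0.05):
    """Reject H3: mu = mu0 when |z| > z_{alpha/2} (the UMP unbiased test)."""
    z = math.sqrt(n) * (xbar - mu0) / sigma
    z_half = norm.ppf(1 - alpha / 2)  # z_{alpha/2}, about 1.96 for alpha = 0.05
    return z, bool(abs(z) > z_half)

# Hypothetical numbers: sample mean 9.2 against mu0 = 10, sigma = 2, n = 25
z, reject = two_sided_z_test(xbar=9.2, mu0=10.0, sigma=2.0, n=25)
```

Here z is −2.0, and since |−2.0| exceeds 1.96 the equality hypothesis is rejected.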
If you remember the development of the confidence interval, there in place of sigma we had used s, where s square is 1 by (n minus 1) times sigma (x i minus x bar) square, the sample variance. So, when sigma square is unknown, we consider s square equal to 1 by (n minus 1) sigma (x i minus x bar) square, and the statistic t equal to root n (x bar minus mu naught) divided by s. As we saw in the confidence interval problem, root n (x bar minus mu) by s follows the t distribution on n minus 1 degrees of freedom; with mu replaced by mu naught, the test statistic follows this t distribution under the null hypothesis, and we can consider the same problems. For H 1: mu less than or equal to mu naught against K 1: mu greater than mu naught, the test is: reject H 1 if t is greater than t (n minus 1, alpha). For H 2: mu greater than or equal to mu naught against K 2: mu less than mu naught, the test is: reject H 2 if t is less than t (n minus 1, 1 minus alpha), which equals minus t (n minus 1, alpha) by symmetry. The third situation is the two-sided test of mu equal to mu naught against mu not equal to mu naught: reject H 3 if the modulus of t is greater than t (n minus 1, alpha by 2); the rejection region is two-sided, with probability alpha by 2 in each tail. These are the uniformly most powerful unbiased tests of size alpha for these problems. Now, one may also like to test for the variance. If we consider testing for sigma square, again there are two cases; the first is when mu is known. If mu is known, we consider w equal to sigma (x i minus mu) square by sigma naught square, which follows the chi square distribution on n degrees of freedom when sigma square is equal to sigma naught square.
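The one-sample t procedure above can be checked numerically; the data below are hypothetical, and SciPy's ttest_1samp is used only to confirm the hand computation of the statistic:

```python
import math
from scipy import stats

data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.3, 10.0]  # hypothetical sample
mu0, alpha = 10.0, 0.05
n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))  # sample standard deviation
t_stat = math.sqrt(n) * (xbar - mu0) / s

# Reject H1: mu <= mu0 when t > t_{n-1, alpha}
reject = bool(t_stat > stats.t.ppf(1 - alpha, n - 1))

# Cross-check the statistic against SciPy's one-sample t-test
t_scipy, _ = stats.ttest_1samp(data, mu0)
```

For this sample t ≈ 1.73, which is below t (7, 0.05) ≈ 1.89, so H 1 is not rejected.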
So, let us consider the hypothesis testing problems based on this. For example, for H 1: sigma square less than or equal to sigma naught square against K 1: sigma square greater than sigma naught square, the test is: reject H 1 if w is greater than chi square (n, alpha), the upper alpha point; the chi square is a skewed distribution, and the probability beyond chi square (n, alpha) is simply alpha. As I mentioned earlier, we can also take the null as sigma square equal to sigma naught square with the same alternative, and the test function and rejection region remain the same. For the reverse situation, H 2: sigma square greater than or equal to sigma naught square against K 2: sigma square less than sigma naught square, the procedure is: reject H 2 if w is less than chi square (n, 1 minus alpha); this lower tail probability is alpha. Since this is not a symmetric distribution, we cannot simply write a minus sign here as we did for the normal and t. For the two-sided problem, sigma square equal to sigma naught square against sigma square not equal to sigma naught square, the procedure is: reject H 3 if w is less than chi square (n, 1 minus alpha by 2) or w is greater than chi square (n, alpha by 2). These are the uniformly most powerful tests of size alpha in cases 1 and 2, and the uniformly most powerful unbiased test of size alpha in case 3. Note that since mu is known here, w is based on n squared deviations, so the degrees of freedom are n, not n minus 1. In the case when mu is unknown, we base our decisions on w star, which is (n minus 1) s square by sigma naught square.
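Here is a small sketch of the known-mean variance test; the data, the known mu, and sigma naught square are all hypothetical choices of mine, with SciPy supplying the chi square percentile:

```python
from scipy.stats import chi2

mu, sigma0_sq, alpha = 5.0, 4.0, 0.05            # known mean, null variance
data = [3.1, 6.2, 4.8, 7.0, 2.5, 5.9, 4.4, 6.6]  # hypothetical sample
n = len(data)

# W = sum (x_i - mu)^2 / sigma0^2 ~ chi-square with n df when sigma^2 = sigma0^2
w = sum((x - mu) ** 2 for x in data) / sigma0_sq
crit = chi2.ppf(1 - alpha, n)   # upper-alpha point chi2_{n, alpha}
reject_h1 = bool(w > crit)      # test of sigma^2 <= sigma0^2 vs sigma^2 > sigma0^2
```

For this sample w ≈ 4.77, well below chi square (8, 0.05) ≈ 15.51, so H 1 is not rejected.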
When mu is unknown, w star follows the chi square distribution on n minus 1 degrees of freedom, and the test procedures are analogous: in the first case, reject H 1 if w star is greater than chi square (n minus 1, alpha); in the second case, reject H 2 if w star is less than chi square (n minus 1, 1 minus alpha); and in the third case, reject H 3 if w star is less than chi square (n minus 1, 1 minus alpha by 2) or w star is greater than chi square (n minus 1, alpha by 2). This is about testing for the parameters of one normal population. These methods can be applied to other distributions also which have certain nice properties — for example, distributions in the exponential family, or distributions having a monotone likelihood ratio even though they may not be in the exponential family; in all those situations this type of testing procedure is applicable. Now I will briefly touch upon the two-population model for normal populations. So, as in the case of confidence intervals, we have two samples available to us: one is a random sample from a normal distribution with mean mu 1 and variance sigma 1 square, and another, independent, random sample is from a normal distribution with mean mu 2 and variance sigma 2 square. The parameters mu 1, mu 2, sigma 1 square, sigma 2 square give rise to our testing problems. One commonly used problem is to test whether the mean of the first population is less than, equal to, or greater than the mean of the second population; that means we are interested in the difference. Naturally, this is a problem which can be handled easily using the Neyman–Pearson theory. So, for testing for means, we may consider hypothesis problems of the nature mu 1 less than or equal to mu 2, mu 1 equal to mu 2, mu 1 greater than or equal to mu 2, and so on.
So, again as before, we consider case 1, when sigma 1 square and sigma 2 square are known. In that case we consider the statistic z star equal to (x bar minus y bar) divided by the square root of (sigma 1 square by m plus sigma 2 square by n). When mu 1 equals mu 2, z star follows normal (0, 1), and in fact it can be shown that the maximum of the probability of type 1 error is achieved when mu 1 equals mu 2. Let us consider the various hypothesis testing problems. For H 1: mu 1 less than or equal to mu 2 against K 1: mu 1 greater than mu 2, the alternative says mu 1 is larger, so we take the rejection region on the larger side: reject H 1 if z star is greater than z alpha. In the second case, H 2: mu 1 greater than or equal to mu 2 against K 2: mu 1 less than mu 2, we reject for small values of z star; on the left-hand side the cut-off is z (1 minus alpha), which equals minus z alpha, so the rule is: reject H 2 if z star is less than minus z alpha. Once again, for the two-sided problem H 3: mu 1 equal to mu 2 against K 3: mu 1 not equal to mu 2, we reject H 3 if the modulus of z star is greater than z alpha by 2; that means we reject on both sides of the normal curve, in the zone beyond z alpha by 2 and the zone below minus z alpha by 2. The second case is when sigma 1 square equals sigma 2 square, equal to some common sigma square which is unknown; for that we will formulate a different test statistic. But first let me briefly mention the large sample cases. In the one-sample problem that I discussed in the beginning, we were using the normal (0, 1) approximation.
Now suppose the original distribution is not normal, but we are testing for the mean and we have a large sample; then, by applying the central limit theorem, the statistic is approximately normal (0, 1), and the test procedures I mentioned are still applicable in the large sample case. However, when sigma square is unknown, that procedure is not directly applicable. Similarly, in the comparison of mu 1 and mu 2 when sigma 1 square and sigma 2 square are known, even if the original populations are not normal, the result is applicable by the central limit theorem; but when sigma 1 square and sigma 2 square are unknown, the central limit theorem result is not applicable and we have to go for the exact procedures. So, recalling the notation we developed for the confidence intervals, we had S 1 square equal to 1 by (m minus 1) sigma (x i minus x bar) square, S 2 square equal to 1 by (n minus 1) sigma (y j minus y bar) square, and the pooled variance S p square equal to [(m minus 1) S 1 square plus (n minus 1) S 2 square] by (m plus n minus 2). Based on this, when mu 1 equals mu 2, the statistic (x bar minus y bar) multiplied by the square root of m n by (m plus n), divided by S p, has the t distribution on m plus n minus 2 degrees of freedom. Therefore, we can write down the tests for all three situations; let me just repeat the testing problems. For H 1 versus K 1, that is, mu 1 less than or equal to mu 2 versus mu 1 greater than mu 2, the rejection region is on the right-hand side: calling this quantity t 1, reject H 1 if t 1 is greater than t (m plus n minus 2, alpha). In the second case we are on the left-hand side.
So, we say: reject H 2 if t 1 is less than minus t (m plus n minus 2, alpha), and for the two-sided case, reject H 3 if the modulus of t 1 is greater than t (m plus n minus 2, alpha by 2). Now, the third case is when sigma 1 square and sigma 2 square are completely unknown. In this particular case we consider t 2 equal to (x bar minus y bar) divided by the square root of (S 1 square by m plus S 2 square by n). When mu 1 equals mu 2, t 2 has an approximate t distribution on p degrees of freedom, where p is given by (S 1 square by m plus S 2 square by n) whole square divided by [S 1 to the power 4 by (m square (m minus 1)) plus S 2 to the power 4 by (n square (n minus 1))]; we usually take p to be the integer part of this expression. The test procedures for H 1 versus K 1, H 2 versus K 2, and H 3 versus K 3 can then be based on t 2: in the first case, reject H 1 if t 2 is greater than t (p, alpha); in the second case, reject H 2 if t 2 is less than minus t (p, alpha); and in the third case, reject H 3 if the modulus of t 2 is greater than t (p, alpha by 2). We had also considered a case of paired observations. In the confidence interval discussion I described the situation where the two measurements come from the same set of individuals or items. For example, with a certain learning procedure we may look at the ability of the candidates before conducting the procedure and after it: x i is the score of the i-th of n students before the coaching, and y i is the score of the same student after the coaching.
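Both two-sample statistics just described — the pooled t and the unequal-variance (Welch) t with its approximate degrees of freedom — can be sketched in Python. The samples are hypothetical, and SciPy's ttest_ind is assumed only as a cross-check of the hand computation:

```python
import math
from scipy import stats

# Hypothetical samples of sizes m and n
x = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
y = [11.2, 11.6, 11.4, 11.0, 11.8]
m, n = len(x), len(y)
xbar, ybar = sum(x) / m, sum(y) / n
s1_sq = sum((v - xbar) ** 2 for v in x) / (m - 1)
s2_sq = sum((v - ybar) ** 2 for v in y) / (n - 1)

# Pooled (equal-variance) statistic t1, ~ t_{m+n-2} when mu1 = mu2
sp = math.sqrt(((m - 1) * s1_sq + (n - 1) * s2_sq) / (m + n - 2))
t1 = (xbar - ybar) * math.sqrt(m * n / (m + n)) / sp

# Welch statistic t2 and its approximate df p (often truncated to an integer)
se_sq = s1_sq / m + s2_sq / n
t2 = (xbar - ybar) / math.sqrt(se_sq)
p_df = se_sq ** 2 / (s1_sq ** 2 / (m ** 2 * (m - 1)) + s2_sq ** 2 / (n ** 2 * (n - 1)))

# SciPy computes the same two statistics
t1_scipy, _ = stats.ttest_ind(x, y, equal_var=True)
t2_scipy, _ = stats.ttest_ind(x, y, equal_var=False)
```

SciPy's equal_var flag switches between exactly these two procedures, which is a convenient way to remember the distinction.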
In this case we can consider (x i, y i) as following a bivariate normal distribution with means mu 1 and mu 2, variances sigma 1 square and sigma 2 square, and covariance rho sigma 1 sigma 2. To compare mu 1 and mu 2, we may consider d i equal to x i minus y i; then d i follows normal (mu 1 minus mu 2, sigma d square), where sigma d square is nothing but sigma 1 square plus sigma 2 square minus 2 rho sigma 1 sigma 2. We can treat the d i as our observations and calculate d bar equal to 1 by n sigma d i and s d square equal to 1 by (n minus 1) sigma (d i minus d bar) square, and formulate the test statistic t 3 equal to root n d bar divided by s d. The test procedure for H 1: mu 1 less than or equal to mu 2 versus K 1: mu 1 greater than mu 2 is: reject H 1 if t 3 is greater than t (n minus 1, alpha). Similarly, for H 2: mu 1 greater than or equal to mu 2 versus K 2: mu 1 less than mu 2, reject H 2 if t 3 is less than minus t (n minus 1, alpha). And for the two-sided testing problem, mu 1 equal to mu 2 against mu 1 not equal to mu 2, reject H 3 if the modulus of t 3 is greater than t (n minus 1, alpha by 2). So, we have considered various cases for the comparison of the means of two normal populations. Let us also consider the comparison of the variances of two normal populations. Writing tau equal to sigma 2 square by sigma 1 square, we may consider testing tau less than or equal to tau naught against tau greater than tau naught; tau greater than or equal to tau naught against tau less than tau naught; or tau equal to tau naught against tau not equal to tau naught.
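The paired-t procedure can be illustrated with the coaching example; the scores below are made up, and SciPy's ttest_rel is used only to confirm that t 3 matches the standard paired test (here the difference is taken as after minus before, so a positive t 3 favours improvement):

```python
import math
from scipy import stats

before = [62, 58, 71, 65, 60, 68]  # hypothetical scores before coaching
after  = [67, 61, 74, 66, 65, 70]  # scores of the same students after coaching
d = [a - b for a, b in zip(after, before)]
n = len(d)
dbar = sum(d) / n
s_d = math.sqrt(sum((v - dbar) ** 2 for v in d) / (n - 1))
t3 = math.sqrt(n) * dbar / s_d  # ~ t_{n-1} when the two means are equal

t_scipy, _ = stats.ttest_rel(after, before)  # same statistic
```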
In all these cases we may consider v equal to s 2 square by s 1 square. When sigma 1 square equals sigma 2 square, that is, tau equal to 1, v has the F distribution on (n minus 1, m minus 1) degrees of freedom; so for the common case tau naught equal to 1 we can use this for testing. In the first case, we reject for large values of tau, and large values of tau correspond to large values of v: reject H 1 if v is greater than F (n minus 1, m minus 1, alpha). In the second case, reject H 2 if v is less than F (n minus 1, m minus 1, 1 minus alpha), which is of course equal to 1 by F (m minus 1, n minus 1, alpha). In the third case we have a two-sided region: reject H 3 if v is less than F (n minus 1, m minus 1, 1 minus alpha by 2) or v is greater than F (n minus 1, m minus 1, alpha by 2). Of course, we may also consider the case when mu 1 and mu 2 are known; then in place of s 2 square by s 1 square we use [sigma (y j minus mu 2) square by n] divided by [sigma (x i minus mu 1) square by m], and the F statistic has (n, m) degrees of freedom rather than (n minus 1, m minus 1). Without spending too much time on that, I will skip that portion. So, this is the case for the comparison of the variances. Equivalently, we may have testing problems for proportions also. For example, if x follows binomial (n, p), we may like to test, say, p equal to p naught, or p less than or equal to p naught against p greater than p naught, as before. Writing p hat equal to x by n and q hat equal to 1 minus p hat, we may base our test on (p hat minus p naught) divided by root (p naught q naught by n), where q naught is 1 minus p naught; when p equals p naught, this is approximately normal (0, 1). This is an approximation for n large.
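The variance-ratio test just described can be sketched as follows; the two samples are hypothetical, and SciPy's F distribution supplies the percentile points (including the reciprocal identity for the lower tail):

```python
from scipy.stats import f

x = [2.1, 2.5, 1.9, 2.3, 2.2, 2.6, 2.0]  # hypothetical sample of size m
y = [3.0, 2.4, 3.6, 2.8, 3.2, 2.2]       # hypothetical sample of size n
m, n = len(x), len(y)
xbar, ybar = sum(x) / m, sum(y) / n
s1_sq = sum((v - xbar) ** 2 for v in x) / (m - 1)
s2_sq = sum((v - ybar) ** 2 for v in y) / (n - 1)
v_stat = s2_sq / s1_sq  # ~ F(n-1, m-1) when sigma1^2 = sigma2^2

alpha = 0.05
upper = f.ppf(1 - alpha / 2, n - 1, m - 1)  # F_{n-1, m-1, alpha/2}
lower = f.ppf(alpha / 2, n - 1, m - 1)      # = 1 / F_{m-1, n-1, alpha/2}
reject_h3 = bool(v_stat < lower or v_stat > upper)  # two-sided test of tau = 1
```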
So, let us call this statistic z 1; we can base our tests for H 1 versus K 1 on z 1, and similarly we can consider p greater than or equal to p naught against p less than p naught, and so on — all those kinds of cases can be considered. We can also consider the two-sample situation, x following binomial (m, p 1) and y following binomial (n, p 2), where we may like to compare p 1 and p 2, say p 1 less than or equal to p 2 against p 1 greater than p 2, etcetera. There we base the test on the difference of the sample proportions, standardized using the estimated variances p 1 q 1 by m and so on; I am not spending too much time on this problem here. Now, the tests that I have discussed so far are based on the Neyman–Pearson theory. However, there is another approach, considered by R. A. Fisher and others, that is based on the likelihood ratio. In fact, Neyman and Pearson came to the f 1 by f naught form based on likelihood ratios only; however, the approach in a more general form can be described like this. Likelihood ratio tests: let x 1, x 2, ..., x n be a random sample from a population with some distribution, say f (x, theta) in general, where theta belongs to a parameter space omega. We want to test H naught: theta belongs to omega naught, where omega naught is a subset of omega. As you have seen in all these problems, in the binomial problem p lies between 0 and 1, so the parameter space is (0, 1), but in the null hypothesis we restrict attention to (0, p naught]. In the previous problems of normal populations, for example, when we write sigma square equal to sigma naught square, the full range has mu from minus infinity to infinity and sigma square greater than 0.
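Returning for a moment to the one-sample proportion test above, here is a minimal numerical sketch — the counts are hypothetical, and SciPy is assumed only for the normal quantile:

```python
import math
from scipy.stats import norm

x, n, p0, alpha = 34, 50, 0.5, 0.05  # hypothetical: 34 successes in 50 trials
p_hat = x / n
q0 = 1 - p0

# z1 is approximately N(0, 1) for large n when p = p0
z1 = (p_hat - p0) / math.sqrt(p0 * q0 / n)
reject = bool(z1 > norm.ppf(1 - alpha))  # one-sided test of p <= p0 vs p > p0
```

With these numbers z 1 ≈ 2.55, exceeding z alpha ≈ 1.645, so p less than or equal to p naught is rejected.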
So, the full parameter space is that, but in the null hypothesis mu still ranges from minus infinity to infinity while sigma square equals sigma naught square; we are specifying a region. In the Neyman–Pearson theory it was essential to specify an alternative hypothesis, but in the case of the likelihood ratio test it is not required. The procedure is based on a simple argument: we maximize the likelihood function over the full parameter space and over the null hypothesis space, and then we compare the two. The logic is as follows. Consider the likelihood function L (theta, x) equal to the product of f (x i, theta). We maximize L over omega with respect to theta; call the maximum L hat omega, equal to the supremum of L (theta, x) for theta belonging to omega. We also maximize L over omega naught with respect to theta; call that maximum L hat omega naught, equal to the supremum of L for theta belonging to omega naught. Now, you see, if the hypothesis is true, that is, theta belongs to omega naught, then the maximum of the likelihood function over omega naught will be almost the same as the maximum over the whole space. You can of course notice, by a simple mathematical argument, that L hat omega naught is always less than or equal to L hat omega, because one is a maximization over a subset and the other is over the full space. So, naturally, if L hat omega naught is close to L hat omega, we have more confidence in the hypothesis, that is, H naught being true is more likely; however, if L hat omega naught is much less than L hat omega, then we have doubts over H naught being true.
So, therefore, if we formulate the ratio L hat omega naught by L hat omega, then for small values of it we tend to believe that H naught is not true. This is the basic idea of the likelihood ratio test. So, consider the likelihood ratio, say lambda, equal to L hat omega naught divided by L hat omega; in the likelihood ratio test we reject H naught if lambda is less than or equal to some k. Once again the question of the choice of k arises, and we can choose k to fix the size. We may also look at the probability of rejection directly; that is known as significance testing. For example, with alpha equal to 0.01 or alpha equal to 0.05 we check whether we would actually be rejecting; the minimum significance level at which we would reject is called the p value of the test. Let us consider an example. Let x 1, x 2, ..., x n follow the shifted exponential distribution with density f (x, mu) equal to e to the power minus (x minus mu) for x greater than mu, with mu greater than 0. You may consider this as a typical situation where the lifetimes of components follow an exponential distribution; here mu denotes the minimum guarantee time and the rate is 1. We want the likelihood ratio test for H naught: mu less than or equal to 1 against H 1: mu greater than 1. The likelihood function is e to the power (n mu minus sigma x i), valid when x i is greater than mu for i equal to 1 to n. This is maximized with respect to mu when mu equals the minimum of x 1, x 2, ..., x n, which we write as x (1). So, we get L hat omega equal to e to the power (n x (1) minus sigma x i), the maximum value of the likelihood function over the parameter space.
Now, let us find the maximum over omega naught, which here is mu less than or equal to 1. The likelihood is maximized when mu equals the minimum of x (1) and 1, where x (1) is the smallest observation, because we are putting two restrictions: mu less than or equal to x (1) and mu less than or equal to 1. So, L hat omega naught becomes e to the power (n min (x (1), 1) minus sigma x i). The likelihood ratio is then lambda equal to L hat omega naught divided by L hat omega, that is, e to the power (n min (x (1), 1) minus sigma x i) divided by e to the power (n x (1) minus sigma x i); the sigma x i term naturally cancels out. This equals 1 if x (1) is less than or equal to 1, and equals e to the power (n minus n x (1)) if x (1) is greater than 1. You can easily see that when the likelihood ratio is 1 we cannot reject H naught, because that is the best that can happen; so the likelihood ratio test always accepts H naught if x (1) is less than or equal to 1. Now look at the other region: when x (1) is greater than 1, the rejection region is e to the power (n minus n x (1)) less than k. Taking logarithms and adjusting the terms, this is equivalent to x (1) greater than some c, where c is to be chosen suitably. To fix the size, we require the supremum over mu less than or equal to 1 of the probability that x (1) is greater than c to equal alpha; this supremum is attained at mu equal to 1, where the probability is e to the power (n minus n c). Setting e to the power (n minus n c) equal to alpha gives n minus n c equal to log alpha, that is, c equal to 1 minus (1 by n) log alpha, which exceeds 1 since log alpha is negative. So, we are actually rejecting for a value of the smallest observation slightly higher than 1.
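The whole likelihood ratio rule for this example fits in a few lines; this is my own sketch with hypothetical lifetime data, not part of the lecture:

```python
import math

def lrt_exponential_location(sample, alpha=0.05):
    """LRT for H0: mu <= 1 vs H1: mu > 1 in the shifted exponential
    f(x; mu) = exp(-(x - mu)), x > mu (rate fixed at 1)."""
    n = len(sample)
    x1 = min(sample)        # the smallest observation x_(1)
    if x1 <= 1:
        return False        # lambda = 1: never reject H0
    c = 1 - math.log(alpha) / n  # cutoff from P(X_(1) > c) = alpha at mu = 1
    return x1 > c

# With n = 10 and alpha = 0.05, c = 1 - log(0.05)/10, about 1.2996
reject = lrt_exponential_location([1.4, 2.0, 1.5, 1.8, 2.3, 1.6, 1.9, 2.1, 1.7, 2.5])
```

Here the smallest observation is 1.4, which exceeds the cutoff, so H naught is rejected.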
So, this is a typical application of the likelihood ratio test, and I can also show you, through an example for the normal distribution, how it compares with the standard test obtained using the Neyman–Pearson theory. Let us consider another example: x 1, x 2, ..., x n following normal (mu, 1), with likelihood function (1 by root 2 pi) to the power n, times e to the power minus 1 by 2 sigma (x i minus mu) square. Now consider the hypothesis testing problem mu less than or equal to 0 against mu greater than 0. Maximizing L over omega, which here is minus infinity to infinity, gives mu hat equal to x bar, and therefore L hat omega equals (1 by root 2 pi) to the power n, e to the power minus 1 by 2 sigma (x i minus x bar) square. Maximizing over omega naught, which is minus infinity to 0, gives mu hat equal to x bar if x bar is less than 0 and equal to 0 if x bar is greater than 0, that is, mu hat equal to min (x bar, 0). Then L hat omega naught equals (1 by root 2 pi) to the power n, e to the power minus 1 by 2 sigma (x i minus x bar) square if x bar is less than or equal to 0, and equals (1 by root 2 pi) to the power n, e to the power minus 1 by 2 sigma x i square if x bar is greater than 0. So, the ratio lambda equal to L hat omega naught by L hat omega is 1 if x bar is less than or equal to 0, and equals e to the power 1 by 2 [sigma (x i minus x bar) square minus sigma x i square] if x bar is greater than 0; that means we always accept H naught if x bar is less than or equal to 0.
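The form of this likelihood ratio can be checked numerically; this is my own Python sketch with made-up data, not part of the lecture, and the constant (1 by root 2 pi) to the power n is dropped since it cancels in the ratio:

```python
import math

# Hypothetical N(mu, 1) sample with a positive sample mean
x = [0.9, 1.2, 0.4, 0.8, 1.1, 0.6, 1.0, 0.7, 0.5, 1.3]
n = len(x)
xbar = sum(x) / n

# Log-likelihoods up to the common constant, which cancels in the ratio
log_L_omega = -0.5 * sum((v - xbar) ** 2 for v in x)  # mu_hat = xbar over the full space
log_L_omega0 = -0.5 * sum(v ** 2 for v in x)          # mu_hat = 0 over mu <= 0 (xbar > 0 here)
lam = math.exp(log_L_omega0 - log_L_omega)

# The ratio simplifies: sum((xi - xbar)^2) = sum(xi^2) - n*xbar^2, so
# lambda = exp(-n * xbar^2 / 2) when xbar > 0
lam_closed = math.exp(-n * xbar ** 2 / 2)
```

The agreement of lam with lam_closed is exactly the algebraic simplification carried out next in the lecture.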
Now, in the other case, when x bar is greater than 0, we reject H naught when e to the power 1 by 2 [sigma (x i minus x bar) square minus sigma x i square] is less than k. Taking logarithms and adjusting the factor 1 by 2, this becomes sigma (x i minus x bar) square minus sigma x i square less than some constant; since sigma (x i minus x bar) square equals sigma x i square minus n x bar square, the sigma x i square terms cancel and we get x bar square greater than some c. So the rejection region is apparently two-sided, something like the modulus of x bar greater than some c 1; but since we are in the region x bar greater than 0, this reduces to x bar greater than some c 2. Now compare it with the Neyman–Pearson test: there the rule would have been root n x bar greater than z alpha. Here the rule is of the same form on the portion where x bar is greater than 0, but when x bar is less than 0 we always accept H naught; that is the difference from the Neyman–Pearson test. For all practical purposes, though, since alpha is sufficiently small, the value z alpha is well above 0, and the two tests are practically the same. In the parametric methods I have concentrated mostly on point estimation, confidence intervals, and testing of hypotheses. There are other situations also, where the distribution is not specified in parametric form, and there we consider distribution-free methods; however, that is slated for a different occasion. Now we will move over to another topic in these statistical methods.
I will start that from the next lecture.