As I mentioned in the last lecture, I gave the expressions for the expectations, variances, and so on of the order statistics when the sampling is done from a uniform distribution. I also mentioned that it is quite complicated to obtain the same expressions when the sample is from a general continuous population. The main reason is that the distribution of an order statistic involves powers of F and powers of 1 − F, and therefore for a typical distribution, say the normal, these expressions are intractable: we cannot evaluate them in closed form. Alternative procedures have therefore been developed by which we can at least obtain approximations or bounds. First I will discuss the method of lower and upper bounds for the expected values.

Let us make two assumptions: first, that F is strictly increasing; second, that the second moment exists, i.e. σ² = Var(X_i) is finite. Under these assumptions the following expressions will be obtained. Consider, for example, the distribution of the largest observation. We derived its density as n F(x)^{n−1} f(x), so

E[X_(n)] = ∫ x · n F(x)^{n−1} f(x) dx.

Now substitute u = F(x), so that du = f(x) dx; in place of x we will have F⁻¹(u), and the integral becomes ∫₀¹ F⁻¹(u) · n u^{n−1} du. This is the reason we assume F is strictly increasing: it lets us write down the inverse function uniquely. Now we make an adjustment to this term: we add and subtract μ, where μ is the mean of X_i.
So then consider

E[X_(n)] = ∫₀¹ (F⁻¹(u) − μ)(n u^{n−1} − 1) du + μ ∫₀¹ n u^{n−1} du + ∫₀¹ F⁻¹(u) du − μ,

that is, I have added and subtracted some terms. Let us check each of them separately. First, ∫₀¹ F⁻¹(u) du is nothing but ∫ x f(x) dx = μ: put F⁻¹(u) = x, so u = F(x) and du = f(x) dx. Similarly, ∫₀¹ n u^{n−1} du = 1, so the term μ ∫₀¹ n u^{n−1} du is simply μ. On the remaining cross-product term we apply the Cauchy–Schwarz inequality:

∫₀¹ (F⁻¹(u) − μ)(n u^{n−1} − 1) du ≤ [ ∫₀¹ (F⁻¹(u) − μ)² du · ∫₀¹ (n u^{n−1} − 1)² du ]^{1/2}.

Once again, what are these factors? For the first, putting F⁻¹(u) = x,

∫₀¹ (F⁻¹(u) − μ)² du = ∫_{−∞}^{∞} (x − μ)² f(x) dx = σ².

The second can be easily evaluated:

∫₀¹ (n u^{n−1} − 1)² du = ∫₀¹ (n² u^{2n−2} − 2n u^{n−1} + 1) du = n²/(2n − 1) − 2 + 1 = (n − 1)²/(2n − 1).

Using the above computations, let us substitute all the values into the term we started with, E[X_(n)].
So E[X_(n)] ≤ μ + σ(n − 1)/√(2n − 1): the Cauchy–Schwarz factor gives σ(n − 1)/√(2n − 1), and the remaining terms give μ + μ − μ, so only μ is left. That is the bound. To find a bound on E[X_(1)] we replace X_i by −X_i; then we get

E[X_(1)] ≥ μ − σ(n − 1)/√(2n − 1).

So basically you can see that the means of the extreme order statistics lie between μ − σ(n − 1)/√(2n − 1) and μ + σ(n − 1)/√(2n − 1).

Let us take a special case, say n = 5. Then (n − 1)/√(2n − 1) = 4/3, so μ − 4σ/3 ≤ E[X_(1)] and E[X_(5)] ≤ μ + 4σ/3. This is helpful in the following sense: we know E(X_i) = μ for every i, and now we can also say something about E[X_(n)]: for a sample of size 5, the mean of the maximum is at most μ plus 4/3 times the standard deviation, and similarly the mean of the minimum is at least μ minus 4/3 times the standard deviation. In some sense this gives a scale: when we were dealing with the normal distribution we considered the μ ± σ, μ ± 2σ, μ ± 3σ limits; this gives similar limits for the order statistics from any distribution with finite variance.

Then one question may arise. This derivation is through the Cauchy–Schwarz inequality, and we know that in Cauchy–Schwarz equality is attained when the two functions are linearly related.
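As a quick numerical sanity check of the bound just derived (this is my own illustration in plain Python, not part of the lecture; the Exponential(1) example and function names are mine), we can compare μ + σ(n − 1)/√(2n − 1) with a Monte Carlo estimate of E[X_(n)]:

```python
import math
import random

def order_stat_bound(n, mu, sigma):
    """Upper bound on E[X_(n)]: mu + sigma*(n-1)/sqrt(2n-1)."""
    return mu + sigma * (n - 1) / math.sqrt(2 * n - 1)

def simulated_max_mean(n, sampler, reps=200_000, seed=1):
    """Monte Carlo estimate of E[X_(n)] = E[max of an i.i.d. sample of size n]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        total += max(sampler(rng) for _ in range(n))
    return total / reps

# Exponential(1): mu = sigma = 1.  For n = 5 the exact value is the
# harmonic sum E[X_(5)] = 1 + 1/2 + 1/3 + 1/4 + 1/5 ≈ 2.283,
# while the bound is 1 + 4/3 ≈ 2.333 -- quite tight here.
n = 5
bound = order_stat_bound(n, mu=1.0, sigma=1.0)
est = simulated_max_mean(n, lambda rng: rng.expovariate(1.0))
```

For the exponential the bound is only about 0.05 above the true value, which illustrates why these Cauchy–Schwarz bounds are useful in practice.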
Now, if we apply that condition, equality is attained, so we can actually show that these bounds are sharp: there exists a distribution for which they hold with equality. Let me mention it. Equality in Cauchy–Schwarz forces F⁻¹(u) − μ to be linear in n u^{n−1} − 1, and for μ = 0, σ = 1 this corresponds, with b = (n − 1)/√(2n − 1), to the density

f(x) = [1/(n − 1)] (b/n) ((1 + bx)/n)^{(2−n)/(n−1)},  −√(2n − 1)/(n − 1) < x < √(2n − 1),

and 0 outside this range. For this density the upper bound is attained (and the lower bound by its mirror image), which can be easily checked; I am not giving the calculations here. In fact the CDF is also available in closed form, so all the calculations can be done explicitly.

Now, if the distribution is symmetric about 0, so that μ = 0, there is a further improvement over the bound; let me give it here. If F is symmetric about 0, then F(−x) = 1 − F(x) and f(x) = f(−x), so μ = 0 and the variance equals E(X²). In this case consider

E[X_(n)] = ∫_{−∞}^{∞} x · n F(x)^{n−1} f(x) dx,

and split the integral into the two regions (−∞, 0) and (0, ∞). In the first one, put y = −x; it becomes −∫₀^{∞} y · n F(−y)^{n−1} f(−y) dy, and using the symmetry properties this equals −∫₀^{∞} y · n (1 − F(y))^{n−1} f(y) dy. Both pieces are now integrals over (0, ∞) with the common factor n y f(y): one with F(y)^{n−1}, and one, with a minus sign, with (1 − F(y))^{n−1}.
So this term can then be written as

E[X_(n)] = ∫₀^{∞} n y [F(y)^{n−1} − (1 − F(y))^{n−1}] f(y) dy.

Now again put u = F(y). Why does the lower limit become 1/2? Because the distribution is symmetric about 0, F(0) = 1/2, so y = 0 corresponds to u = 1/2, and

E[X_(n)] = n ∫_{1/2}^{1} F⁻¹(u) [u^{n−1} − (1 − u)^{n−1}] du.

Again, this is interesting: it is a product of two terms, so we can apply the Cauchy–Schwarz inequality:

E[X_(n)] ≤ n [ ∫_{1/2}^{1} (F⁻¹(u))² du ]^{1/2} [ ∫_{1/2}^{1} (u^{n−1} − (1 − u)^{n−1})² du ]^{1/2}.

It is a matter of calculation now. The first integral is ∫₀^{∞} x² f(x) dx; since the mean is 0 and the density is symmetric, this is half of ∫_{−∞}^{∞} x² f(x) dx, i.e. σ²/2. The second can also be easily evaluated: it equals 1/(2n − 1) − ((n − 1)!)²/(2n − 1)!. So, writing down the values (the power 1/2 bringing in √2),

E[X_(n)] ≤ (nσ/√2) [ 1/(2n − 1) − ((n − 1)!)²/(2n − 1)! ]^{1/2},

which can be further simplified to

E[X_(n)] ≤ (n/√(2n − 1)) (σ/√2) [ 1 − 1/C(2n − 2, n − 1) ]^{1/2},

where C(2n − 2, n − 1) is the binomial coefficient. Compare with the bound obtained earlier, μ + σ(n − 1)/√(2n − 1): here μ = 0, the factor σ/√(2n − 1) is still present, but n − 1 has become n and there is an extra factor of essentially 1/√2.
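To see how much the symmetry assumption buys, here is a small comparison (my own illustration, plain Python) of the two bounds for the standard normal with n = 5, against a simulated E[X_(5)]:

```python
import math
import random

def general_bound(n, sigma, mu=0.0):
    """mu + sigma*(n-1)/sqrt(2n-1): valid for any F with finite variance."""
    return mu + sigma * (n - 1) / math.sqrt(2 * n - 1)

def symmetric_bound(n, sigma):
    """(n*sigma/sqrt(2)) * sqrt(1/(2n-1) - ((n-1)!)^2/(2n-1)!):
    valid when F is symmetric about 0 (so mu = 0)."""
    inner = 1 / (2 * n - 1) - math.factorial(n - 1) ** 2 / math.factorial(2 * n - 1)
    return n * sigma / math.sqrt(2) * math.sqrt(inner)

n = 5
gb = general_bound(n, 1.0)       # 4/3 ≈ 1.3333
sb = symmetric_bound(n, 1.0)     # ≈ 1.1701 -- noticeably smaller

# Monte Carlo estimate of E[X_(5)] for N(0,1); the true value is about 1.163,
# so the symmetric bound is remarkably tight here.
rng = random.Random(7)
est = sum(max(rng.gauss(0, 1) for _ in range(n)) for _ in range(200_000)) / 200_000
```

For the normal with n = 5 the symmetric bound (≈ 1.170) sits just above the true mean of the maximum (≈ 1.163), while the general bound (≈ 1.333) is considerably looser.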
So this bound is much smaller, that is, a sharper bound, if we have the additional information that the distribution is symmetric about 0. Here again we have made use of the Cauchy–Schwarz inequality for the evaluation.

One may then ask why I did not consider the r-th order statistic in the same way. For the r-th there is a difficulty: two powers appear, F^{r−1} and (1 − F)^{n−r}. For the extremes it is easy, because the substitution u = F(x) handles a single power of F; but when I convert one of the two powers, the other becomes a power of 1 − u, and I am not able to evaluate it there — it becomes much more complicated.

The second topic is asymptotics, so let me talk about that: asymptotic expressions for the mean and variance of an order statistic. First I develop a general formula. Let Z be a random variable with E(Z) = μ, and let g be a sufficiently smooth function. I am applying Taylor's theorem here, so I am assuming g is a nice function, differentiable as many times as needed — at least twice, for the terms we write down:

g(Z) = g(μ) + g′(μ)(Z − μ) + (g″(μ)/2!)(Z − μ)² + ⋯.

Taking expectations on both sides, the first-order term simply vanishes, since E(Z − μ) = 0, and E(Z − μ)² = σ² (which we have assumed to exist), so

E[g(Z)] = g(μ) + (g″(μ)/2!) σ² + ⋯.

Now consider g(Z) − E[g(Z)]: the constant terms g(μ) cancel out.
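The two-term expectation approximation E[g(Z)] ≈ g(μ) + g″(μ)σ²/2 is easy to check on a toy case. Below is a sketch (my own example, not from the lecture: Z ~ Uniform(0,1) and g = exp, for which E[g(Z)] = e − 1 exactly):

```python
import math

def delta_mean(g, g2, mu, var):
    """Two-term approximation E[g(Z)] ≈ g(mu) + g''(mu) * var / 2."""
    return g(mu) + g2(mu) * var / 2

# Z ~ Uniform(0,1): mu = 1/2, var = 1/12; g = exp (so g'' = exp as well),
# and E[exp(Z)] = e - 1 exactly.
approx = delta_mean(math.exp, math.exp, 0.5, 1 / 12)
exact = math.e - 1   # ≈ 1.71828; the two-term approximation gives ≈ 1.71742
```

The approximation error here is under 0.001, which is the kind of accuracy the lecture refers to when it says the higher-order terms are negligible.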
So let me take the square. The centered expansion is

g(Z) − E[g(Z)] = g′(μ)(Z − μ) + (g″(μ)/2!)[(Z − μ)² − σ²] + R,

where R collects the remaining terms, Σ_{i=3}^{∞} (g^{(i)}(μ)/i!)[(Z − μ)^i − μ_i], with μ_i the i-th central moment of Z. Expanding the square, we get: g′(μ)²(Z − μ)²; the cross product with the second term, 2 · g′(μ)(Z − μ) · (g″(μ)/2!)[(Z − μ)² − σ²], whose (Z − μ)³ part we set aside and whose remainder is −g′(μ)g″(μ)σ²(Z − μ); the square of the second term, (g″(μ)/2!)²[(Z − μ)⁴ − 2σ²(Z − μ)² + σ⁴], whose (Z − μ)⁴ part we also set aside, leaving (g″(μ)/2!)²[σ⁴ − 2σ²(Z − μ)²]; and finally all cross products with R and the square of R, where the powers start from the cube onward. All these set-aside terms I write together as h(Z). So basically I am arranging the terms in a particular way: I have written explicitly only the terms up to square order in Z − μ.
Now, that was the reason I assumed only up to the variance: we are not going beyond second order, and the approximation depends on this choice. Now take expectations to get Var[g(Z)]. The first term gives g′(μ)²σ²; the terms linear in Z − μ simply vanish; and (g″(μ)/2!)²[σ⁴ − 2σ² E(Z − μ)²] = (g″(μ)²/4)(σ⁴ − 2σ⁴) = −(g″(μ)²/4)σ⁴. So

Var[g(Z)] = g′(μ)²σ² − (g″(μ)²/4)σ⁴ + E[h(Z)],

and ignoring E[h(Z)],

Var[g(Z)] ≈ g′(μ)²σ² − (σ⁴/4) g″(μ)².

This gives an approximate expression for the variance of any function of a random variable in terms of μ and σ²; of course the first and second moments must be of known form.

Now, what do we do? We make use of the order statistics: if X_(r) is the r-th order statistic from F, then F(X_(r)) = U_(r), the r-th order statistic from Uniform(0, 1). Why am I taking this? Because I know its moments:

E(U_(r)) = r/(n + 1),  Var(U_(r)) = r(n − r + 1)/[(n + 1)²(n + 2)].

So I am putting Z = U_(r) — this is the μ and this is the σ² — and I choose g = F⁻¹.
So if we do that, with Z = U_(r) in the two formulas (call the expectation formula (1) and the variance formula (2)), then from (1),

E[X_(r)] ≈ F⁻¹(r/(n + 1)),

which you can consider a first approximation; keeping two terms,

E[X_(r)] ≈ F⁻¹(r/(n + 1)) − [ f′(F⁻¹(r/(n + 1))) / (2 f(F⁻¹(r/(n + 1)))³) ] · r(n − r + 1)/[(n + 1)²(n + 2)].

This term comes from the derivatives of F⁻¹; let me show the calculation. If y = F⁻¹(x), then

dy/dx = d F⁻¹(x)/dx = 1/f(F⁻¹(x)),

and for the second derivative,

d²y/dx² = −[f′(y)/f(y)²] (dy/dx) = −f′(y)/f(y)³ = −f′(F⁻¹(x)) / f(F⁻¹(x))³.

Similarly, for Var[X_(r)], the first approximation from (2) is the square of the first derivative multiplied by the variance of U_(r):

Var[X_(r)] ≈ [1/f(F⁻¹(r/(n + 1)))]² · r(n − r + 1)/[(n + 1)²(n + 2)].
Keeping two terms gives the second approximation:

Var[X_(r)] ≈ [1/f(F⁻¹(r/(n + 1)))]² · r(n − r + 1)/[(n + 1)²(n + 2)] − (1/4) · [r²(n − r + 1)²/((n + 1)⁴(n + 2)²)] · f′(F⁻¹(r/(n + 1)))² / f(F⁻¹(r/(n + 1)))⁶,

the second term being (σ⁴/4) g″(μ)². So if the form of F is known, we have approximate expressions for the moments of the order statistics. The error in these approximations is not much: calculations have been done for various values of n and r for specific distributions, and one can check how much error there is; the approximations are quite satisfactory, especially the second-order ones.

These approximations used the fact that a Taylor expansion is possible, and that assumption is not very stringent: we are taking F to be strictly increasing, basically a nice function, and we are assuming the higher-order terms to be negligible, in the sense that they are divided by higher powers of n. You can see it in the expressions themselves: the denominators already contain terms like (n + 1)⁴(n + 2)², and further terms would be divided by still higher powers, so they become negligible. So it is not a very stringent or, you can say, bad assumption as such.

The next thing is that one can talk about asymptotic distributions. Next we derive the asymptotic distribution of X_(r), the r-th order statistic. Here there can be two cases: one is that only n tends to infinity while r remains fixed, which means, for example, finding the asymptotic distribution of the minimum (or, symmetrically, the maximum).
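Before moving on to asymptotic distributions: the moment approximations above are easy to test numerically. The sketch below (plain Python, standard normal case; `Phi_inv` is a crude bisection quantile routine of my own, adequate for a check) compares the two-term approximations for E[X_(r)] and Var[X_(r)] with simulation:

```python
import math
import random

def phi(x):   # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Quantile function by bisection -- slow but adequate for a check."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def approx_moments(r, n):
    """Two-term approximations for N(0,1) order statistics:
    E[X_(r)] ≈ q - f'(q)/(2 f(q)^3) * s2,
    Var[X_(r)] ≈ s2/f(q)^2 - s2^2 f'(q)^2/(4 f(q)^6),
    with q = Phi_inv(r/(n+1)), s2 = r(n-r+1)/((n+1)^2 (n+2)),
    and f'(x) = -x f(x) for the normal density."""
    s2 = r * (n - r + 1) / ((n + 1) ** 2 * (n + 2))
    q = Phi_inv(r / (n + 1))
    fq = phi(q)
    fprime = -q * fq
    mean = q - fprime / (2 * fq ** 3) * s2
    var = s2 / fq ** 2 - s2 ** 2 * fprime ** 2 / (4 * fq ** 6)
    return mean, var

# compare with simulation for n = 20, r = 15
n, r = 20, 15
m_approx, v_approx = approx_moments(r, n)
rng = random.Random(3)
vals = [sorted(rng.gauss(0, 1) for _ in range(n))[r - 1] for _ in range(100_000)]
m_sim = sum(vals) / len(vals)
v_sim = sum((v - m_sim) ** 2 for v in vals) / len(vals)
```

Even at n = 20 the mean approximation is typically within a couple of hundredths of the simulated value, in line with the lecture's remark that the second-order approximations are quite satisfactory.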
So in that case the position is fixed; in the second case the position can also vary: r → ∞ and n → ∞ such that r/n tends to a fixed value. I will consider these two cases.

Case 1: r fixed, n → ∞. Let us first consider the order statistics from the uniform distribution: U_(1), …, U_(n) from Uniform(0, 1). Certainly the density of the r-th is

f_{U_(r)}(u) = [n!/((r − 1)!(n − r)!)] u^{r−1}(1 − u)^{n−r},  0 < u < 1.

Here take W = n U_(r). Then the density of W becomes

f_W(w) = [n!/((r − 1)!(n − r)!)] (w/n)^{r−1}(1 − w/n)^{n−r} (1/n),  0 < w < n,

with the extra 1/n coming from the change of variable. Let us look at the cumulative distribution function of W:

F_W(w) = [n!/((r − 1)!(n − r)! n^r)] ∫₀^w t^{r−1}(1 − t/n)^{n−r} dt.

If I take the limit as n → ∞, the limit exists: inside the integral, (1 − t/n)^{n−r} → e^{−t}, since t/n tends to 0. Now take the limit of the coefficient; you can check which terms are coming there.
In the numerator you have n(n − 1)⋯(n − r + 1), in total r factors, against n^r in the denominator. So the coefficient splits as (n/n)((n − 1)/n)((n − 2)/n)⋯((n − r + 1)/n), and each of these factors converges to 1 as n → ∞. So in the limit,

F_W(w) → ∫₀^w [1/(r − 1)!] t^{r−1} e^{−t} dt.

This is very instructive: it is nothing but the CDF of a gamma distribution. That means that as n → ∞, the distribution of n U_(r) converges to Gamma(r, 1), with shape r and scale parameter 1. That is one result we are able to obtain.

Now let us consider the CDF of X_(r). What is happening here?

F_{X_(r)}(x) = P(X_(r) ≤ x) = P(F(X_(r)) ≤ F(x)) = P(U_(r) ≤ F(x)) = P(n U_(r) ≤ n F(x)),

multiplying by n on both sides. By the limit just obtained, for large n this is approximately

(1/Γ(r)) ∫₀^{n F(x)} t^{r−1} e^{−t} dt.

This is very interesting: we obtain the general form of the limiting CDF of the r-th order statistic as the sample size tends to infinity. And if we consider the derivative, by Leibniz's rule (the derivative of the upper limit n F(x) is n f(x)) we get

(1/Γ(r)) [n F(x)]^{r−1} e^{−n F(x)} · n f(x),

the general form of the limiting probability density function of the r-th order statistic.
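The Gamma(r, 1) limit for n·U_(r) can be checked by simulation. Here is a sketch (my own, plain Python; for integer r the gamma CDF has the closed Poisson-sum form used below):

```python
import math
import random

def gamma_cdf_int(w, r):
    """P(Gamma(r, 1) <= w) for integer r, via the Poisson-sum identity:
    1 - e^{-w} * sum_{k=0}^{r-1} w^k / k!"""
    return 1 - math.exp(-w) * sum(w ** k / math.factorial(k) for k in range(r))

def sim_cdf_n_ur(w, r, n, reps=20_000, seed=5):
    """Empirical P(n * U_(r) <= w) from repeated samples of n uniforms."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        ur = sorted(rng.random() for _ in range(n))[r - 1]  # r-th smallest
        if n * ur <= w:
            hits += 1
    return hits / reps

# second order statistic, moderately large n, checked at w = 3
r, n, w = 2, 200, 3.0
emp = sim_cdf_n_ur(w, r, n)
lim = gamma_cdf_int(w, r)   # Gamma(2,1) cdf at 3 is 1 - 4e^{-3} ≈ 0.8009
```

Already at n = 200 the empirical probability agrees with the gamma limit to within Monte Carlo noise.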
In this we applied the condition that r is fixed while n tends to infinity. The second case is that both r and n tend to infinity such that r/n tends to a fixed number. Actually, what is the difference? If r is fixed, I am looking at, say, the second or third smallest; but if r also tends to infinity, I am fixing the relative position, which could be, for example, the middle, like the median or a quantile. So there is a difference in the treatment.

We may also see what the limiting density gives for the first order statistic: for r = 1 the factorial term and the power term drop out, and you get simply n f(x) e^{−n F(x)}. This is a very interesting thing: a general form of the asymptotic distribution of the minimum, which is of an exponential type — and that is why, for the negative exponential distribution, essentially the same density reappears with the factor n.

Case 2: r → ∞, n → ∞ such that r/n → p. Let us look at V = (U_(r) − μ)/σ, where

μ = E(U_(r)) = r/(n + 1),  σ² = Var(U_(r)) = r(n − r + 1)/[(n + 1)²(n + 2)].

The density of U_(r) is known — I already considered it above — so from it I can find the density of V, since U_(r) = σv + μ gives dU_(r) = σ dv:

f_V(v) = [n!/((r − 1)!(n − r)!)] (σv + μ)^{r−1} (1 − σv − μ)^{n−r} σ.
Another thing is the range: u between 0 and 1 gives −μ/σ ≤ v ≤ (1 − μ)/σ. Let us write the terms in a slightly adjusted fashion: take out μ^{r−1} from the first factor and (1 − μ)^{n−r} from the second, so

f_V(v) = C_{n,r} (1 + σv/μ)^{r−1} (1 − σv/(1 − μ))^{n−r} = C_{n,r} exp{ (r − 1) log(1 + σv/μ) + (n − r) log(1 − σv/(1 − μ)) },  −μ/σ < v < (1 − μ)/σ,

where the coefficient C_{n,r} collects all the constant terms:

C_{n,r} = n! σ μ^{r−1} (1 − μ)^{n−r} / [(r − 1)!(n − r)!].

If I take r → ∞ and n → ∞ such that r/n → p, this can be shown to be convergent; you can actually use Stirling's approximation. If you remember, Stirling's approximation is n! ≈ √(2π) e^{−n} n^{n+1/2}. Applying it to each factorial (since r is also tending to infinity, another √(2π) comes for (r − 1)!, and one more for (n − r)!):

C_{n,r} ≈ √(2π) e^{−n} n^{n+1/2} σ μ^{r−1} (1 − μ)^{n−r} / [ √(2π) e^{−(r−1)} (r − 1)^{r−1/2} · √(2π) e^{−(n−r)} (n − r)^{n−r+1/2} ].
Then you have σ = √( r(n − r + 1)/[(n + 1)²(n + 2)] ), μ = r/(n + 1) to the power r − 1, and 1 − μ = (n − r + 1)/(n + 1) to the power n − r. Now things cancel: one √(2π) cancels, and e^{−n} cancels against e^{−(r−1)} e^{−(n−r)} up to a bounded factor. After matching the powers of n, r, and n − r (for instance, n − r + 1 divided by n − r gives factors like (1 + 1/(n − r))^{n−r} and (1 + 1/(n − r))^{1/2}, and similarly one gets (1 + 1/n)^n and (1 + 2/n)^{1/2}), one is left with a single 1/√(2π). Taking limits, (1 + 1/(n − r))^{n−r} → e and (1 + 1/n)^n → e, which cancel against the leftover exponentials, while the half-power factors tend to 1. So

C_{n,r} → 1/√(2π)  as n → ∞, r → ∞ with r/n → p.

(Notice that p itself has not played a role in this particular limit, because the terms do not combine in that fashion.)

Now let us look at the exponent. Write

T = (r − 1) log(1 + σv/μ) + (n − r) log(1 − σv/(1 − μ)).

For the logarithms we can consider expansions of the following type:

log(1 + x/(x + 2)) = x/(x + 2) − (1/2)(x/(x + 2))² + (1/3)(x/(x + 2))³ − ⋯,

valid for −1 < x/(x + 2) < 1, that is, for x > −1.
Similarly,

log(1 − x/(x + 2)) = −x/(x + 2) − (1/2)(x/(x + 2))² − (1/3)(x/(x + 2))³ − ⋯.

Call these A and B. Since (1 + x/(x + 2))/(1 − x/(x + 2)) = 1 + x, the difference A − B gives

log(1 + x) = 2[ x/(x + 2) + (1/3)(x/(x + 2))³ + (1/5)(x/(x + 2))⁵ + ⋯ ].

For the direct expansion of log(1 + x) I would need |x| < 1, which I am unable to guarantee here; therefore I use this other form, which is valid provided x > −1 — and that is true here: σv/μ > −1 on the range of v, and similarly for the other term. So

log(1 + σv/μ) = 2[ (σv/μ)/(2 + σv/μ) + (1/3)((σv/μ)/(2 + σv/μ))³ + ⋯ ],

a slightly cumbersome expression, but still in closed form; and in a similar way

log(1 − σv/(1 − μ)) = −2[ (σv/(1 − μ))/(2 − σv/(1 − μ)) + (1/3)((σv/(1 − μ))/(2 − σv/(1 − μ)))³ + ⋯ ].

Now define

c₁ = σ/μ = √( (n − r + 1)/(r(n + 2)) ) ≈ √( (1 − p)/(np) ),

using r/n ≈ p, and in a similar way

c₂ = σ/(1 − μ) = √( r/((n − r + 1)(n + 2)) ) ≈ √( p/(n(1 − p)) ),

as r → ∞, n → ∞, r/n → p.
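The rearranged series for the logarithm used here (the artanh form, valid for every x > −1 rather than only |x| < 1) is easy to verify numerically; a quick sketch:

```python
import math

def log1p_series(x, terms=200):
    """log(1+x) = 2 * sum_{m>=0} (1/(2m+1)) * (x/(x+2))^(2m+1),
    valid for all x > -1 (the artanh form used in the lecture)."""
    y = x / (x + 2)
    return 2 * sum(y ** (2 * m + 1) / (2 * m + 1) for m in range(terms))

# works inside (-1, 1) ...
a = log1p_series(0.5)    # should equal log(1.5)
b = log1p_series(-0.5)   # should equal log(0.5)
# ... and also well outside |x| < 1, where the ordinary series diverges
c = log1p_series(9.0)    # should equal log(10)
```

The case x = 9 shows why this form is needed: the ordinary Maclaurin series for log(1 + x) would diverge there, while this one still converges (y = x/(x + 2) is always inside (−1, 1) for x > −1).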
In terms of c₁ and c₂ — and using r − 1 ≈ np, n − r ≈ n(1 − p) — let us substitute into T. After simplification,

2T = 2 [ (r − 1) c₁ v (2 − c₂ v) − (n − r) c₂ v (2 + c₁ v) ] / [ (2 + c₁ v)(2 − c₂ v) ] + 2 Σ_{m=1}^{∞} [ (r − 1)(c₁ v)^{2m+1} / ((2m + 1)(2 + c₁ v)^{2m+1}) − (n − r)(c₂ v)^{2m+1} / ((2m + 1)(2 − c₂ v)^{2m+1}) ].

Using the asymptotic expressions for c₁ and c₂, the tail terms vanish: for m ≥ 1,

(r − 1) c₁^{2m+1} ≈ np ((1 − p)/(np))^{m+1/2} = k₁ n^{1/2−m} → 0 as n → ∞,

and similarly

(n − r) c₂^{2m+1} ≈ n(1 − p) (p/(n(1 − p)))^{m+1/2} = k₂ n^{1/2−m} → 0,

while 2 + c₁v → 2 and 2 − c₂v → 2. For the leading term, substituting all these values, the numerator (r − 1)c₁v(2 − c₂v) − (n − r)c₂v(2 + c₁v) is approximately

2np √((1 − p)/(np)) v − np √((1 − p)/(np)) √(p/(n(1 − p))) v² − 2n(1 − p) √(p/(n(1 − p))) v − n(1 − p) √(p/(n(1 − p))) √((1 − p)/(np)) v².
So you can easily see that the first and third terms are both 2√(np(1 − p)) v with opposite signs and cancel, while the v² terms give −p v² − (1 − p) v² = −v²; the denominator tends to 4. So T converges to −v²/2 as n → ∞, r → ∞, r/n → p.

What we have proved is that the limiting pdf of V = (U_(r) − μ)/σ is (1/√(2π)) e^{−v²/2}, that is, (U_(r) − μ)/σ converges in distribution to N(0, 1). That is a very significant result, and we can say that the asymptotic distribution of U_(r), the r-th order statistic from the uniform, is normal with mean μ = r/(n + 1) and variance σ² = r(n − r + 1)/[(n + 1)²(n + 2)]. Since r/n ≈ p, the mean is approximately p and the variance approximately p(1 − p)/n; you can think of binomial-type terms here — on the scale of counts, mean np and variance np(1 − p).

Now, if I use the inverse function, F⁻¹(p) appears, which is the p-th quantile. So, to write it in the form of a theorem: we have obtained the asymptotic distribution of U_(r) as normal, and therefore, using the inverse function — which I will give in the next lecture — the asymptotic distribution of the r-th order statistic under these assumptions is also normal. So you have two cases: when r is fixed and n tends to infinity, the limit is related to the gamma distribution; when both r and n tend to infinity with r/n → p, it is related to the normal distribution.
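Finally, the normal limit for a central order statistic can be checked by simulation. A sketch (my own, plain Python; here r ≈ n/2, i.e. p ≈ 1/2, the sample median region):

```python
import math
import random

def Phi(x):   # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def standardized_ur(r, n, reps=20_000, seed=11):
    """Samples of (U_(r) - mu)/sigma, with mu = r/(n+1) and
    sigma^2 = r(n-r+1)/((n+1)^2 (n+2)), as in the lecture."""
    mu = r / (n + 1)
    sigma = math.sqrt(r * (n - r + 1) / ((n + 1) ** 2 * (n + 2)))
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        ur = sorted(rng.random() for _ in range(n))[r - 1]
        out.append((ur - mu) / sigma)
    return out

n, r = 401, 201          # central order statistic: r/n -> 1/2
vals = standardized_ur(r, n)
# P(V <= 1) should approach Phi(1) ≈ 0.8413
frac_below_1 = sum(v <= 1.0 for v in vals) / len(vals)
```

With n around 400 the standardized median already tracks the N(0, 1) limit closely, which is the practical content of the theorem above.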
I will be using this for obtaining confidence intervals for population quantiles: as I have mentioned, in the nonparametric situation, rather than considering the mean and the variance, we discuss positional quantities — the median, the quartiles, the quantiles, the percentiles, and so on. I will be doing that in the next class.