I will continue with the central limit theorem and its applications. This example is taken from Sheldon Ross's book on probability theory. Civil engineers believe that W, the amount of weight (in units of 1000 pounds) that a certain span of a bridge can withstand without structural damage resulting, is normally distributed with mean 400 and standard deviation 40. So the weight the bridge can withstand is a random variable, normally distributed with mean 400 and standard deviation 40. Suppose that the weight (again in units of 1000 pounds) of a car is a random variable with mean 3 and standard deviation 0.3. Different cars will have different weights, so the weight of a car is also treated as a random variable, and its distribution is approximately normal with mean 3 and standard deviation 0.3. How many cars would have to be on the bridge span for the probability of structural damage to exceed 0.1? That is, at a particular time some number of cars are on the bridge; if the total weight of these cars exceeds the weight the bridge can withstand, structural damage occurs, and we want the probability of this event to exceed 0.1. So you want to estimate the number of cars on the bridge for which structural damage can occur. We begin by defining P_n as the probability that the total weight of n cars on the bridge exceeds W, because that is the event under which structural damage can occur. If X_i denotes the weight of the i-th car, then P_n = P(X_1 + X_2 + ... + X_n >= W), which we can rewrite as P(X_1 + X_2 + ... + X_n - W >= 0).
The sum X_1 + ... + X_n is the total weight of the n cars on the bridge at that time. The weights of the cars are independent, identically distributed random variables, because the weight of each car is independent of the others. So by the central limit theorem, for n large, the sum of the X_j, j varying from 1 to n, is approximately normal with mean 3n and variance 0.09n: the standard deviation of each weight is 0.3, so the variance of the weight of one car is 0.09, and the variance of the sum of n independent weights is 0.09n. Now W is independent of the X_i's, because the weight the bridge can withstand is independent of the weights of the individual cars, and therefore the sum of the X_i minus W is also approximately normal. We will revisit sums of random variables and their distributions later, but right now we have enough machinery to say this. The mean of this approximately normal variate is 3n - 400, since the mean of the sum of the X_i is 3n and the mean of W is 400. The variance of W comes in with a plus sign, because the variables are independent: the variance of the sum of the X_i minus W is 0.09n plus the variance of W, which is 1600, giving 0.09n + 1600. So I can standardize: the variate I am looking at is normally distributed with mean 3n - 400 and variance 0.09n + 1600, and to standardize I subtract the mean and divide by the standard deviation. With Z the standard normal variate, the probability can now be written as P(Z >= ...).
On the right-hand side we get -(3n - 400) divided by the standard deviation, that is, (400 - 3n)/sqrt(0.09n + 1600). This probability equals the original probability, because I have standardized the normal variate to a standard normal variate by subtracting the mean and dividing by the standard deviation. We want this probability to be greater than or equal to 0.1, and from the standard normal tables, P(Z >= 1.28) is approximately 0.1. So from the tables, (400 - 3n)/sqrt(0.09n + 1600) should equal 1.28 for the probability to equal 0.1; the whole idea is that if this quantity is less than or equal to 1.28, then the probability is still larger than 0.1. So I get a value of n by equating this to 1.28. Of course this is a slightly messy equation to solve, but you can do it, or you can start by putting in values of n and finding the value for which the quantity is equal to, or a little less than, 1.28; there are lots of numerical ways of getting the value of n that satisfies the inequality. It turns out that n >= 117 satisfies the above inequality. That means if there are 117 or more cars on the span, structural damage may occur with probability 0.1; so there is a chance of 1 in 10 that the bridge will suffer structural damage.
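The numerical search described above can be sketched in a few lines; this is just a check of the threshold n = 117, using the mean 3n - 400 and variance 0.09n + 1600 derived in the lecture:

```python
import math

# Total weight of n cars minus bridge capacity W is approximately normal
# with mean 3n - 400 and variance 0.09n + 1600 (units of 1000 pounds).
# Damage probability reaches 0.1 once (400 - 3n)/sqrt(0.09n + 1600) <= 1.28.

def damage_z(n):
    """Standardized threshold appearing in P(sum of weights >= W)."""
    return (400 - 3 * n) / math.sqrt(0.09 * n + 1600)

# Find the smallest n whose z-value drops to 1.28 or below.
n = 1
while damage_z(n) > 1.28:
    n += 1
print(n)  # -> 117, the smallest number of cars giving damage probability >= 0.1
```

For n = 116 the standardized value is about 1.30, still above 1.28, and for n = 117 it is about 1.22, which confirms the answer.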
This was another interesting example; you can actually see the application, and I also chose it because W itself is a random variable, so we had to convert the event into one about the sum of the X_i minus W, use the central limit theorem to transform it to a standard normal variate, and thereby estimate the probability that the bridge may suffer structural damage. Here is another interesting example of the central limit theorem. In a town of 20,000 people, 44 percent support an upcoming referendum. Say, for example, the currently hot question is whether Anna Hazare is going to form a political party or not; you might take a referendum, that is, ask people to vote on whether he should do it or not. Suppose this is a small town and the feeling is that only 44 percent will support the upcoming referendum. What you then do is conduct a pre-vote poll. This happens very often: media persons and magazines conduct their own pre-vote polls of the eligible voters in a town to get a feel for opinion. So if you conduct a pre-vote poll of the eligible voters in the town and survey 100 people, what is the probability that the survey will show that the referendum will pass? One needs to understand what "the referendum will pass" means: for a referendum to pass it requires a majority vote, i.e., 51 percent. Even though the feeling is that 44 percent support it, you never know; at the time of the voting more people may vote for the referendum, and so on.
So when you conduct a pre-vote poll and survey, let us say, 100 people, then if in that poll 51 percent or more support the referendum, you can say that your pre-vote poll suggests the referendum will pass. (Of course, the referendum actually passes only if, when the voting is done, 51 percent of those who voted supported it; right now we are just conducting a pre-vote survey of 100 people.) So the question is: out of the 100 people surveyed, what is the probability that 51 or more say yes to, or support, the referendum? One can model this situation using Bernoulli random variables: X_i is counted as a success if the i-th person surveyed supports the referendum, and as a failure otherwise. Then the sum of the X_i, i varying from 1 to 100, is binomial with n = 100 and probability of success p = 0.44, so its mean is np = 0.44 x 100 = 44. You want to find the probability that the number of successes among the 100 people surveyed is greater than or equal to 51, because then the poll shows the referendum passing. I chose this example because it depicts a new situation, and we are modeling it and then applying the central limit theorem.
So this is the whole idea, and I hope it is clear that the event of interest is that the sum of the X_i, i varying from 1 to 100, is greater than or equal to 51: if from these 100 people the pollsters get the feeling that 51 or more will support the referendum, they can advertise that their pre-vote poll says the referendum will pass, try to influence people, and so on. Standardizing, we subtract the mean np = 44 and divide by the standard deviation sqrt(npq) = sqrt(100 x 0.44 x 0.56), since p = 0.44 and q = 0.56. So the probability equals P(Z >= (51 - 44)/sqrt(100 x 0.44 x 0.56)). Now, as I have been telling you, whenever you approximate a binomial probability by standardizing the random variable and using a standard normal probability, you should also use the continuity correction factor, which I have not done here; with it, "greater than or equal to 51" would become 50.5, which would be the right figure. You can do that computation later; right now the whole idea is just to get a feel for the numbers and whether the referendum will pass or not. The standard deviation comes out to be 4.96, so we are approximating the probability by P(Z >= 7/4.96) = P(Z >= 1.41), which from the tables gives the number 0.079.
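The poll calculation, both as done in the lecture and with the continuity correction mentioned above, can be sketched as follows (the survival function is built from the standard library's `erfc`):

```python
import math

def normal_sf(z):
    """Survival function P(Z >= z) of the standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

n, p = 100, 0.44
mean = n * p                        # 44
sd = math.sqrt(n * p * (1 - p))     # sqrt(24.64) ~ 4.96

# Without continuity correction, as in the lecture:
print(round(normal_sf((51 - mean) / sd), 3))   # -> 0.079

# With the continuity correction (P(X >= 51) uses 50.5):
print(round(normal_sf((50.5 - mean) / sd), 3))
```

The corrected figure comes out somewhat larger, which illustrates why the correction matters for a discrete variable.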
So this is a very small probability, and hence the chance of the referendum passing is very slim. The town has 20,000 people and you are only surveying 100, and given that 44 percent support the upcoming referendum, the probability that 51 percent or more of the sample support it is small; that is reflected here. Through the central limit theorem you have approximated the required probability, and it turns out to be 0.079. So when you survey 100 people and ask their opinion on whether they support the referendum or not, the chance that the poll shows it passing is very small. One can go on and on about the applications of the central limit theorem and the various situations to which it applies. Now I want to get back to something else where we had also used the central limit theorem; I had said we would prove it later, but I just want to add a word of caution to it as well. Suppose X is binomial(n, p) and you want to compute the probability P(X <= s). Then you have to compute the sum, i varying from 0 to s, of C(n, i) p^i (1 - p)^(n - i), and this can be quite cumbersome. But we said we can approximate it by standardizing: (X - np)/sqrt(npq), the standard deviation being sqrt(npq), is less than or equal to (s - np)/sqrt(npq). Now you add 0.5; remember I talked about the correction factor: the binomial is a discrete random variable and we are approximating it by a continuous distribution, so the continuity correction factor is also added, giving s - np + 0.5 in the numerator.
So this cumbersome probability can be approximated by the normal probability with argument (s - np + 0.5)/sqrt(npq), and we can look up the normal tables and compute this number. Now, when you are approximating, the question does arise of how good the approximation is. What happens is that for a binomial distribution, if p is close to one half, the distribution is symmetric, in the sense that the probabilities increase and then decrease in a symmetric manner. Because the normal is itself a symmetric distribution about its mean, the normal distribution gives a good approximation as long as p is close to one half: you are then approximating a discrete symmetric distribution by a continuous symmetric distribution. But when p is away from one half, the binomial is skewed, maybe to the right or to the left, and in that case it is not necessary that the normal distribution gives a good approximation of the binomial probabilities. It is often said that if np >= 30, or np(1 - p) >= 10, then the central limit theorem will always give a good approximation of the binomial probabilities, but these are empirical statements. In some cases such conditions may yield good approximations, but it cannot be said that this happens all the time, because symmetry certainly plays a role. And for p small and n large, such that np = lambda is moderate, another option is available.
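The point about symmetry can be checked directly: compare the exact binomial tail with the continuity-corrected normal approximation for p near one half and for a skewed case. This is a small sketch with illustrative values of n and s chosen here, not taken from the lecture:

```python
import math

def binom_cdf(n, p, s):
    """Exact P(X <= s) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(s + 1))

def normal_cdf(z):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def normal_approx(n, p, s):
    """Normal approximation with continuity correction: (s - np + 0.5)/sqrt(npq)."""
    return normal_cdf((s - n * p + 0.5) / math.sqrt(n * p * (1 - p)))

# Near p = 1/2 the two values agree closely; for skewed p they drift apart.
for p in (0.5, 0.1):
    n, s = 40, 4 + int(36 * p)   # pick s near the mean np in each case
    print(p, round(binom_cdf(n, p, s), 4), round(normal_approx(n, p, s), 4))
```

Running this shows the symmetric case matching to three or four decimal places, while the skewed case shows a visibly larger gap.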
Then, in that case, the Poisson approximation may be a good approximation. When we were discussing discrete random variables I showed you how Poisson probabilities can approximate binomial probabilities; the condition, of course, was that p is small, n is large, and np is a moderately small, reasonable number. With this word of caution, these approximations can certainly be used, and they are very helpful. I just thought that, having talked about the central limit theorem, proved it, and shown its applications, I would revisit what we did earlier when we approximated binomial probabilities by standardizing and reducing to a standard normal variate. Now we come to a problem set for you to try on Chebyshev's inequality, the central limit theorem, and the weak law of large numbers. The first problem is straightforward: a random sample of size n = 81 is taken from a distribution with mu = 128 and standard deviation sigma = 6.3. With what probability can we assert that the value we obtain for x-bar will not fall between 126.6 and 129.4? Use Chebyshev's inequality. You can see that the event involves the absolute value: x-bar greater than 129.4 or less than 126.6, i.e., |x-bar - 128| > 1.4. I have given it in this specific form because I want you to convert it into the form needed when you apply Chebyshev's inequality or the central limit theorem. We have already discussed examples where you can compute both probabilities, given that n is 81 and the standard deviation and mean are given to you.
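For problem 1, the two computations the lecture asks you to compare can be sketched as follows; note that 1.4 is exactly 2 standard deviations of the sample mean, since sigma/sqrt(n) = 6.3/9 = 0.7:

```python
import math

# Problem 1: n = 81, mu = 128, sigma = 6.3, so sd of x-bar is 6.3/9 = 0.7.
# The event |x_bar - 128| > 1.4 is a deviation of k = 1.4/0.7 = 2 sds.
n, mu, sigma = 81, 128, 6.3
sd_xbar = sigma / math.sqrt(n)    # 0.7
k = 1.4 / sd_xbar                 # 2.0

# Chebyshev: P(|x_bar - mu| >= k * sd) <= 1/k^2
chebyshev_bound = 1 / k**2
print(chebyshev_bound)            # -> 0.25

# CLT: P(|Z| >= 2) = 2 * P(Z >= 2)
clt_prob = math.erfc(k / math.sqrt(2))
print(round(clt_prob, 4))         # -> 0.0455
```

This makes concrete the remark that follows: Chebyshev gives the loose universal bound 0.25, while the CLT gives the much sharper distribution-dependent value 0.0455.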
Now I just want to make a comment here. As we have seen through examples in the lectures, the bound on the required probability that you get by using Chebyshev's inequality is a loose bound, and the central limit theorem gives you a tighter estimate of the probability. But one important point is that the probability computed by the central limit theorem may depend on the distribution you are handling, whereas Chebyshev's is a universal inequality: it may give you a loose bound, but the number does not change with respect to different distributions. So Chebyshev's is a general, universal statement, and later on, when I have occasion, I will again point out that even though Chebyshev's gives a looser bound, it has other advantages. Question 2: let the random variables Y_n have a distribution that is binomial(n, p); prove that Y_n/n converges to p in probability. This is the use of your weak law of large numbers, and I may have already done it for you in the lectures, but anyway, go through it and try to prove it by yourself. The third problem: consider the sequence {X_n} of random variables where P(X_n = x) is 1 if x = 4 + 2/n and 0 otherwise. Does it converge in distribution to some random variable X? That means you will define the cumulative distribution function and see whether, as n goes to infinity, you can find a limiting distribution. If so, find the distribution function of X, and show that the sequence X_n converges in probability to X as well.
This should be an interesting exercise; go by the basic definitions and try to solve it. Next: X_1, X_2, ..., X_n are identically, independently distributed random variables with density function f(x) = 1/theta for 0 < x < theta and 0 otherwise, where 0 < theta < infinity. Let M_n = max(X_1, X_2, ..., X_n); so M_n is the random variable which is the maximum of these n sample values. Find the distribution function F_n of M_n. Does F_n converge to some F? Yes, it will, as you can see, but we will not talk much about it, because the second part is a little difficult; you can certainly see that F_n converges to some F. Finding the distribution function F_n of M_n you can do with the tools you have already learnt: F_n(t) = P(M_n <= t), and since M_n is the maximum of X_1, ..., X_n, this reduces to the probability that each of X_1, X_2, ..., X_n is less than or equal to t; and since they are independent, this reduces to [P(X_1 <= t)]^n. So you can do it in the regular way and then see if you can get a feeling for the convergence of F_n; that is all, we will not talk about it in detail because it becomes a little complex. Next problem: you are given f(x) = 1/x^2 for 1 < x < infinity and 0 elsewhere; this is the p.d.f. of a random variable X. Consider a random sample of size 72 from the distribution having this p.d.f.; that means 72 identically, independently distributed random variables. Compute approximately the probability that more than 50 of the items of the random sample are less than 3. I have included this problem because it has two steps.
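For the maximum problem, F_n(t) = [P(X_1 <= t)]^n works out to (t/theta)^n for 0 < t < theta, and the convergence can be seen numerically; the specific values of theta and t below are illustrative choices, not from the problem:

```python
# F_n(t) = P(M_n <= t) = (t/theta)^n for 0 < t < theta, for the uniform
# density f(x) = 1/theta on (0, theta). For any fixed t < theta this
# tends to 0 as n grows, so M_n piles up at theta in the limit.
theta, t = 2.0, 1.9
for n in (5, 50, 500):
    print(n, (t / theta) ** n)
```

Even for t as close to theta as 1.9 out of 2, F_n(t) collapses toward 0, which is the sense in which F_n converges to the degenerate distribution at theta.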
First, you want the probability that a single item of the sample is less than 3. You are given f(x) = 1/x^2 for 1 < x < infinity, so P(X <= 3) = the integral from 1 to 3 of (1/x^2) dx = [-1/x] evaluated from 1 to 3 = -1/3 + 1 = 2/3. Now, you are selecting a sample of size 72, and we will say that if a sample value is less than 3, that is a success; so the probability of a success is 2/3, and the situation becomes binomial. If Y is the number of successes, Y is binomial(72, 2/3), and the question is the probability of more than 50 successes, so we want P(Y >= 50). Standardizing: the mean is (2/3) x 72 = 48, the variance is 72 x (2/3) x (1/3) = 16, and the standard deviation is 4. Therefore P(Y >= 50) is approximately P(Z >= (50 - 48)/4) = P(Z >= 1/2). That is why I chose this example: you convert the problem to a binomial situation and then compute the approximate probability that more than 50 of the items are less than 3.
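The two steps of this problem can be checked in a short sketch: first the integral giving p = 2/3, then the CLT approximation of the binomial tail, with the exact binomial sum alongside for comparison:

```python
import math

# Step 1: P(X <= 3) for density 1/x^2 on (1, inf) is 1 - 1/3 = 2/3.
p = 2 / 3

# Step 2: Y ~ Binomial(72, 2/3); approximate P(Y >= 50) by the CLT.
n = 72
mean = n * p                        # 48
sd = math.sqrt(n * p * (1 - p))     # sqrt(16) = 4

z = (50 - mean) / sd                # (50 - 48)/4 = 0.5
approx = 0.5 * math.erfc(z / math.sqrt(2))
print(round(approx, 4))             # -> 0.3085, i.e. P(Z >= 0.5)

# Exact binomial tail, the sum the lecture says is tedious by hand:
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(50, n + 1))
print(round(exact, 4))
```

Comparing the two printed values shows how close the CLT approximation comes, and adding the continuity correction would bring it closer still.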
Here again I am using the central limit theorem: I standardize the variate and compute the approximate probability, because computing the actual probability would require summing the binomial probabilities for 50, 51, 52, and so on up to 72. Now let us go to the next problem: measurements are recorded to several decimal places, and each of these 48 numbers is rounded off to the nearest integer. Rounding off means that if the decimal part of a number is below 0.5 you drop it, and if it is above 0.5 (0.6, 0.7, and so on) you go up to the next integer; the sum of the original 48 numbers is then approximated by the sum of these integers. If we assume that the errors made by rounding off are independent and have the uniform(-1/2, 1/2) distribution, compute approximately the probability that the sum of the integers is within 2 units of the true sum. Here we are assuming that the rounding errors are independent, which you can surely expect, because the errors that occur do not depend on each other. The rounding error lies between -0.5 and 0.5: as I said, a number like 10.4 is rounded off to 10, and 9.7 is rounded off to the integer 10. So we are assuming that the error, that is, the actual number minus the rounded integer, is uniformly distributed between -1/2 and 1/2. We want the approximate probability that the sum of the integers is within 2 units of the true sum. You have 48 numbers being rounded off, hence 48 errors; let epsilon_i be the error in the i-th number.
Each epsilon_i is uniform(-1/2, 1/2); each error is uniformly distributed. We want the probability that the total error is within 2 units of the true sum, that is, -2 <= the sum of epsilon_i, i varying from 1 to 48, <= 2 (the error can occur whether you round down or round up; adding up the errors, this sum should be greater than or equal to -2 and less than or equal to 2). We approximate this probability by the central limit theorem. Since the epsilon_i's are uniform with mean 0, the expectation of the sum, i varying from 1 to 48, is 0. The errors are assumed independent, so the variance of the sum is the sum of the variances. Remember the variance of a uniform(a, b) distribution is (b - a)^2/12, which here is 1/12; so the variance of the sum is 48/12 = 4, and the standard deviation is sqrt(4) = 2. Standardizing, the probability becomes P(|Z| <= 2/2) = P(|Z| <= 1), which from the normal tables comes out to be 0.6826.
So the total error from rounding up and rounding down can be kept within 2 with probability 0.6826, which is a very high probability. Compare this with a loose upper bound: if you suppose every number were rounded by the maximum 0.5, the total error could go up to 0.5 x 48 = 24. But the central limit theorem gives you the idea that the probability the total error stays within 2 is reasonably high. That is what I wanted to say about this problem. Next: psi_1, psi_2, ..., psi_n, ... is a sequence of identically, independently distributed random variables with expected value of psi_n equal to mu and variance of psi_n equal to sigma^2. If S_n is the sum of the first n sample values, show that S_n/n goes to mu in probability. This is again just a reiteration of the weak law of large numbers, and I want you to sit down and work out the proof by yourself. The next problem involves the chi-square distribution; watch the notation here, because in the print it looks like X_n squared, but it is actually chi-square: X_n is chi-square(n) (we have talked about the chi-square(n) distribution), and Y_n = X_n/n (again we wrote X_n instead of chi because chi was not printing nicely). Show that the moment generating function of Y_n goes to e^t as n goes to infinity, for t > 0, where it is defined. Once you get the limiting m.g.f. of Y_n, you will be able to say what the limiting distribution of Y_n is; that was the whole idea of this exercise.
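The rounding-error answer above can be checked by a quick Monte Carlo simulation, summing 48 independent uniform(-1/2, 1/2) errors many times and counting how often the total stays within 2:

```python
import random

# Monte Carlo check of the rounding-error problem: 48 independent
# Uniform(-1/2, 1/2) errors; estimate P(|sum of errors| <= 2).
random.seed(0)
trials = 200_000
hits = 0
for _ in range(trials):
    total = sum(random.uniform(-0.5, 0.5) for _ in range(48))
    if abs(total) <= 2:
        hits += 1
print(hits / trials)   # should come out close to the CLT value 0.6826
```

With 48 summands the CLT is already very accurate, so the simulated frequency lands within a few thousandths of 0.6826.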
And then show that (X_n - n)/sqrt(2n), where X_n is chi-square(n), converges to a standard normal variate for n large. The chi-square(n) distribution has mean n and variance 2n, so we are standardizing, and this is again the use of the central limit theorem; remember the central limit theorem is convergence in law. Next: X_1, X_2, ..., X_n are independent random variables with P(X_i = 1) = p and P(X_i = 0) = 1 - p for i = 1, 2, ..., n. That means the X_i's are identically, independently distributed Bernoulli random variables; p is of course between 0 and 1, and it is unknown, and this is what we have to estimate; I will get back to that. Now define S_n = X_1 + X_2 + ... + X_n and fix t. The problem says: using Chebyshev's inequality, how large an n will guarantee that the probability of the event |S_n/n - p| >= t is less than or equal to 0.01, no matter what value the unknown p has? Obviously we are trying to find out how many sample values X_1, X_2, ..., X_n we should take so that the probability that the average S_n/n differs from p by more than the fixed t is less than 0.01, and we want to use Chebyshev's inequality. Now, each X_i is Bernoulli, so S_n is binomial; Var(S_n) = npq, and dividing by n^2, Var(S_n/n) = npq/n^2 = pq/n.
So by Chebyshev's inequality, this probability is less than or equal to pq/(n t^2), and this we want to be less than or equal to 0.01. Now p is unknown, so q is also unknown; but no matter what the value of p is, the maximum of pq (we have already gone through this in the lectures) is 1/4. So the probability is less than or equal to 1/(4 n t^2). Suppose I set 1/(4 n t^2) = 0.01; taking n to the other side, n = 1/(0.04 t^2) = 25/t^2. That means for n greater than or equal to 25/t^2, the bound is always at most 0.01. Can you see that? I wrote the maximum value of pq, so the n I get meets the inequality for every p; if I wrote the actual value of pq, the value of n I would get would be smaller than what I am getting here. So this choice of n always satisfies the inequality; this is the idea, and that is the answer by Chebyshev's inequality. Now part 2 says: using the CLT, find the approximate n needed. Here, you see, the problem uses the word "minimum", and the probability here is the complement of the event that you had in part (a), so it is the same thing: the probability that the difference is less than t should be greater than or equal to 0.99.
The "minimum" part I will explain here. By the central limit theorem, the probability P(|S_n/n - p| < t) is approximately 2 Phi(sqrt(n) t / sqrt(pq)) - 1, and the minimum of this probability over p is attained when pq takes its maximum value 1/4, i.e., sqrt(pq) = 1/2 (that is where the 2 also comes from), giving 2 Phi(2 sqrt(n) t) - 1. Setting this equal to 0.99 means 2 Phi(2 sqrt(n) t) = 1.99, so Phi(2 sqrt(n) t) = 0.995, and if you look up the tables, the corresponding z value gives 2 sqrt(n) t = 2.57 (more precisely 2.576). From here you can compute sqrt(n), and n comes out to be (2.57/(2t))^2. The third part asks you to compare the two answers for a fixed value of t; I think t = 0.01 is given. For t = 0.01, Chebyshev's inequality gives n = 25/t^2 = 250,000, while the central limit theorem gives n greater than or equal to about 16,500. This is the idea: because Chebyshev's inequality gives a loose upper bound, the numbers come out different. Now you can sit down and work it out yourself to get a better feeling.