So, the next topic we want to talk about is statistical inequalities. One we have already seen is the Cauchy-Schwarz inequality, but there are other important ones which we will discuss now. The role of inequalities is to give bounds on probabilities of events, and this is different from approximations, because an inequality makes a very definite statement: it is a definite fact about the randomness. You have some event, and you are able to say that its probability is less than or equal to a definite number, or greater than or equal to one; mostly we will be working with upper bounds. An approximation may be good or bad, but an inequality tells you with certainty that the probability of a certain event is no more than a particular number, which an approximation cannot say. So inequalities have a very definite role to play in the statistical analysis of data. What happens in practice is that we often have little knowledge about the distribution of the population from which the sample values are coming; it may be that we know only the mean or the variance, and not the nature of the distribution itself. In such situations it is very helpful to be able to derive bounds for the probabilities of events depending on the sample values, and that is what we do through these inequalities.
So, the first and simplest one is Markov's inequality, whose statement is: if X is a random variable that takes only non-negative values, then for any real number a > 0,

P(X >= a) <= E[X] / a.

This is not difficult to prove. Define the indicator variable I which takes value 1 when X >= a and 0 otherwise. Whenever X >= a we have X/a >= 1 = I, and whenever X < a we have I = 0 <= X/a, since X is non-negative; this is exactly where the non-negativity of X is used. So I <= X/a always. Taking expectations on both sides preserves the inequality, so E[I] <= E[X/a] = (1/a) E[X]. But E[I] = 1 × P(X >= a) + 0 × P(X < a) = P(X >= a), since the value 0 contributes nothing. Therefore P(X >= a) <= (1/a) E[X]. So you see, just knowing the expectation of the random variable, we can compute a bound for the probability of this event, namely E[X]/a.
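The Markov bound is easy to probe numerically. Here is a minimal sketch (not from the lecture) that estimates P(X >= a) by simulation and checks it against E[X]/a; the choice of an exponential random variable with mean 2 is purely a hypothetical stand-in for a non-negative X:

```python
import random

random.seed(0)
n = 200_000
# Hypothetical choice: X ~ Exponential with mean 2, so X takes only non-negative values.
samples = [random.expovariate(0.5) for _ in range(n)]
ex = sum(samples) / n  # empirical E[X], close to the true mean 2

markov = {}
for a in (1.0, 2.0, 4.0, 8.0):
    empirical = sum(1 for x in samples if x >= a) / n  # estimate of P(X >= a)
    bound = ex / a                                     # Markov bound E[X]/a
    markov[a] = (empirical, bound)
    print(f"a={a}: P(X>=a) ~ {empirical:.4f} <= Markov bound {bound:.4f}")
```

Notice how crude the bound is for small a (for a = 1 it exceeds 1, so it says nothing), and how it stays valid, with room to spare, for every a.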
So, let us look at Chebyshev's inequality, which says: if X is a random variable with finite mean mu and variance sigma^2, then for any k > 0,

P(|X − mu| >= k) <= sigma^2 / k^2.

(Note that the left-hand side is a probability, not an expectation; Chebyshev's inequality gives an upper bound on this probability.) Since sigma^2 = E[(X − mu)^2], the right-hand side can also be written as E[(X − mu)^2] / k^2; of course, the absolute-value sign is not needed once we take the square. Here mu is the mean of X and k is some positive number. Now we will use Markov's inequality, which we just established, to prove this. Define Y = (X − mu)^2; this is non-negative, and Markov's inequality is valid for random variables that take only non-negative values. Take a = k^2. Then Markov's inequality gives

P((X − mu)^2 >= k^2) <= E[(X − mu)^2] / k^2,

which is exactly E[X]/a, with the role of X played by (X − mu)^2 and the role of a by k^2.
So, this follows by Markov's inequality applied to the non-negative variable (X − mu)^2 with a = k^2. Now observe that, since k is positive, the event (X − mu)^2 >= k^2 holds if and only if |X − mu| >= k: each implies the other, so they are the same event, and I can replace the one probability by the other. Therefore P(|X − mu| >= k) <= sigma^2 / k^2, and the inequality is established. Through examples you will see the various ways in which this simple inequality can be used. One note I want to make here: often we are asked to obtain a bound for a probability with a strict inequality, P(|X − mu| > k). But the event {|X − mu| > k} is contained in the bigger event {|X − mu| >= k}, so P(|X − mu| > k) <= P(|X − mu| >= k), and since Chebyshev's inequality bounds the latter, the same bound is valid for the former: P(|X − mu| > k) <= sigma^2 / k^2. That is the whole idea, and when we discuss the examples it will often turn out that we actually need a bound for this strict version. Now, there is more than one way of obtaining these inequalities.
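The same kind of numerical check works for Chebyshev's inequality. A small sketch, assuming (hypothetically) a standard normal X, so mu = 0 and sigma = 1:

```python
import random

random.seed(1)
n = 200_000
mu, sigma = 0.0, 1.0
samples = [random.gauss(mu, sigma) for _ in range(n)]

cheb = {}
for k in (1.0, 1.5, 2.0, 3.0):
    empirical = sum(1 for x in samples if abs(x - mu) >= k) / n  # P(|X - mu| >= k)
    bound = sigma**2 / k**2                                      # Chebyshev bound
    cheb[k] = (empirical, bound)
    print(f"k={k}: P(|X-mu|>=k) ~ {empirical:.4f} <= Chebyshev bound {bound:.4f}")
```

For a normal variable the true tail probabilities (roughly 0.32, 0.13, 0.046, 0.0027) sit far below the bounds (1, 0.44, 0.25, 0.11): Chebyshev must hold for every distribution with that variance, so it is loose for any particular one.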
So, I will give you an alternate proof, for the case where X is continuous with density f. Start with the expression for the variance:

sigma^2 = E[(X − mu)^2] = integral from −infinity to infinity of (x − mu)^2 f(x) dx.

Break this integral into three pieces: from −infinity to mu − k, from mu − k to mu + k, and from mu + k to infinity. Now look at the pieces. In the right tail, x runs from mu + k to infinity, and remember k is a positive number, so x > mu + k implies x − mu > k. Similarly, in the left tail, x < mu − k implies x − mu < −k. The middle integral, of (x − mu)^2 f(x) dx from mu − k to mu + k, is a non-negative quantity, because the integrand is non-negative; so if I drop it, the equality changes to an inequality: sigma^2 is at least the sum of the two tail integrals. And in both tails we have |x − mu| > k, so squaring gives (x − mu)^2 > k^2; since f is a non-negative function, replacing (x − mu)^2 by k^2 in the tail integrals only lowers their value.
So, in both tail integrals I replace (x − mu)^2 by k^2, taking an underestimate of the integral, and therefore

sigma^2 >= k^2 × integral from −infinity to mu − k of f(x) dx + k^2 × integral from mu + k to infinity of f(x) dx.

But k^2 is a constant, and the first integral is P(X <= mu − k) while the second is P(X >= mu + k). Bringing mu to the other side, these are P(X − mu <= −k) and P(X − mu >= k), which together make up P(|X − mu| >= k). (What is written on the slide is a strict inequality, but it should actually be >=.) So we have

sigma^2 >= k^2 P(|X − mu| >= k),

which gives us the inequality we wanted to prove. This is Chebyshev's inequality: an upper bound on P(|X − mu| >= k) once we know the variance of the random variable X. The proof can be imitated for the discrete case also, that is, when X is a discrete random variable with finite variance and a probability mass function. An immediate corollary: if you put k = epsilon sigma, where epsilon is some positive number, then Chebyshev's inequality becomes P(|X − mu| >= epsilon sigma) <= sigma^2 / (epsilon sigma)^2 = 1 / epsilon^2. And if I divide inside the event by sigma, sigma being a positive number,
the inequality does not change, and I get P(|X − mu| / sigma >= epsilon) <= 1 / epsilon^2, where (X − mu)/sigma is the standardized variable. This is a simpler version of Chebyshev's inequality. So, we would now like to work out some examples to see how these bounds can be used. The first example says: X is a random variable with mean and variance both equal to 16; compute a lower bound on the probability that X lies between 0 and 32, using Chebyshev's inequality. (We cannot use Markov's inequality here, because it is a two-sided event.) For the solution, subtract 16 from all sides: P(0 < X < 32) = P(0 − 16 < X − 16 < 32 − 16) = P(|X − 16| < 16), since the two events are the same. Now this is the opposite of the kind of event Chebyshev bounds, so I write it as 1 − P(|X − 16| >= 16); we had strict inequality inside, so the complement event has >=. And since Chebyshev's inequality gives an upper bound on P(|X − 16| >= 16), when I replace this term by its upper bound the minus sign turns it around, so the equality changes to a "greater than or equal" sign:
after the minus sign I am writing a bigger number, so the whole expression becomes smaller, and therefore

P(0 < X < 32) >= 1 − E[(X − 16)^2] / 16^2 = 1 − 16/256 = 15/16,

since E[(X − 16)^2] is the variance of X, which is 16. You can see that this is a fairly loose bound; 15/16 is a number close to 1. I am trying to show you that these bounds are rather loose, not very tight, but in some situations they are still quite helpful. Now another example: from past experience, a professor knows that the test score of a student taking her final exam is a random variable with mean 65. (That is not bad; if the mean is 65 out of 100, the students are good.) Give an upper bound for the probability that a student's test score will exceed 85. We know only the mean, so we simply use Markov's inequality with a = 85: P(X > 85) <= E[X] / 85 = 65/85 = 13/17. Strictly speaking, Markov's inequality says P(X >= a) <= E[X]/a, but we can also use it to bound the probability of the event X > a: since P(X > a) <= P(X >= a), it follows that P(X > a) <= E[X]/a as well.
So, to compute an upper bound for P(X > 85) I can use the number E[X]/85 = 13/17, and we will use this device in the computations that follow as well; this is the answer from Markov's inequality. Now suppose there is additional information: the professor knows that the variance of a student's test score is equal to 20. If you have knowledge of the variance, you can use Chebyshev's inequality with k = 20:

P(|X − 65| >= 20) <= Var(X) / 20^2 = 20/400 = 1/20.

(The numbers are a little contrived, but it does not matter.) Now, the event |X − 65| >= 20 means either X >= 85 or X <= 45, and these two events are disjoint, so P(X >= 85) + P(X <= 45) <= 1/20. Since P(X <= 45) is non-negative, we get P(X > 85) <= P(X >= 85) <= 1/20. So you see, there is a dramatic difference between 13/17 and 1/20: the moment you have more information about the random variable, the bounds you get are tighter.
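The arithmetic of the two exam-score bounds can be replayed exactly with fractions (the numbers 65, 85 and 20 are the ones from the lecture):

```python
from fractions import Fraction

mean_score, a = Fraction(65), Fraction(85)
markov_bound = mean_score / a            # P(X > 85) <= E[X]/85
var_score, k = Fraction(20), Fraction(20)
chebyshev_bound = var_score / k**2       # P(|X - 65| >= 20) <= 20/400
# {X > 85} is contained in {|X - 65| >= 20}, so the Chebyshev bound applies to it too.
print(markov_bound, chebyshev_bound)     # 13/17 1/20
```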
Now, I just sat down and contrived these numbers, so they may not look very realistic; 20 is probably a high number for the variance. But the point stands: the first bound, 13/17, was computed with no knowledge of the variance, only the expected value, while knowing the variance brought the bound down dramatically to 1/20; and if the variance were smaller still, the bound would be smaller still. This is just to show you the difference between the two bounds. Now, the second part of the problem: how many students must take the exam to ensure, with probability at least 0.9, that the class average is within 5 of 65? Suppose n students take the exam; then the class average is the sum of the x_i, i varying from 1 to n, divided by n, which in our notation we write as x-bar. "Within 5 of 65" means at most 5 less or 5 more than 65, so the class average should lie between 60 and 70, and we want this probability to be at least 0.9, that is, greater than or equal to 0.9; from that we find how many students should take the exam. Here again I put it in the form of Chebyshev's inequality. Since each x_i has mean 65, E[x-bar] = 65, so subtracting 65 throughout gives 60 − 65 <= x-bar − 65 <= 70 − 65, that is, |x-bar − 65| <= 5. And the variance of x-bar is the variance of any one x_i divided by n,
because Var(x-bar) = (1/n^2) × n × Var(x_i) = 20/n, with standard deviation sqrt(20/n). (Equivalently, one can divide through by sqrt(20/n) to standardize — the variable (x-bar − 65)/sqrt(20/n) has variance 1 — and apply the 1/epsilon^2 form of Chebyshev's inequality; the computation comes out the same.) So, writing the probability as 1 minus its complement and applying Chebyshev's inequality,

P(|x-bar − 65| <= 5) >= 1 − Var(x-bar)/5^2 = 1 − (20/n)/25 = 1 − 20/(25n).

If I set 20/(25n) = 0.1, then the "at least 0.9" requirement is satisfied, because the probability on the left-hand side is greater than or equal to this quantity. And 20/(25n) = 0.1 implies n = 8, not 80 as written on the slide. Therefore, for n >= 8, the class average will be within 5 of 65 with probability at least 0.9. So you see, using Chebyshev's inequality you can obtain bounds for lots of different kinds of probabilities. Now let us consider this example: it costs rupees 1 to play a slot machine.
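The class-size calculation can be sketched with exact arithmetic (variance 20 per student, tolerance 5 and target probability 0.9, as above; exact fractions avoid floating-point round-off at the boundary):

```python
from fractions import Fraction
import math

var_score = Fraction(20)   # variance of one student's score
tol = Fraction(5)          # "within 5 of 65"
target = Fraction(9, 10)   # required probability

# Chebyshev: P(|xbar - 65| >= 5) <= (var_score/n)/tol^2; we need this <= 1 - target.
n_min = math.ceil(var_score / (tol**2 * (1 - target)))
print(n_min)  # 8

def chebyshev_lower_bound(n):
    # Chebyshev lower bound on P(60 < xbar < 70) when n students take the exam.
    return 1 - (var_score / n) / tol**2
```

With n = 8 the bound is exactly 9/10; with n = 7 it falls short, which confirms 8 is the minimum.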
Some of you will have an idea of how this works: you put in 1 rupee, and if you are lucky some money comes out; otherwise nothing comes out. The machine is set by the house: wherever this slot machine is put, it is set to pay rupees 2 with probability 0.45 and nothing with probability 0.55. So you put in a rupee, and with probability 0.45 you get 2 rupees back; with probability 0.55 you get nothing and lose that rupee. Find the approximate probability that after 10,000 plays of the machine the house's winnings are between rupees 800 and rupees 1200. Winnings here are from the house's point of view: when the house has to pay the player, it loses. The player puts in 1 rupee, and if the house has to pay 2 rupees, that is a net loss of 1 rupee for the house. So let x_i represent the earning of the house in the i-th play of the machine: x_i = −1 with probability 0.45 and x_i = +1 with probability 0.55. Then E[x_i] = (−1)(0.45) + (1)(0.55) = 0.1, positive as you would expect — otherwise why would the house invest money in a slot machine? So the expected earning is 0.1 rupees per game. Similarly, Var(x_i) = 0.99: once you square, E[x_i^2] = (1)(0.45) + (1)(0.55) = 1, and subtracting (E[x_i])^2 = (0.1)^2 gives 1 − 0.01 = 0.99. So the variance of each x_i is 0.99, and the total earnings of the house are the sum of x_1 through x_10000.
So, the net earning of the house is S = sum of x_i, i varying from 1 to 10,000, in which both losses and income are included. Its expected value is E[S] = 0.1 × 10,000 = 1,000 rupees, and its variance is Var(S) = 0.99 × 10,000 = 9,900. (I have not standardized here; it is easy enough to carry on the computations with these values.) Now we have to compute the probability that the total earnings of the house lie between rupees 800 and 1200. Again we do the same thing: subtract the expected value, 1,000 rupees, on either side, which gives P(800 < S < 1200) = P(|S − 1000| < 200). Since "between 800 and 1200" is a strict inequality, the complement event has >= 200, so I write this as 1 − P(|S − 1000| >= 200), and now it is set up for the use of Chebyshev's inequality. By the same reasoning as before — Chebyshev gives an upper bound, so one minus that gives a lower bound — we get

P(800 < S < 1200) >= 1 − Var(S)/200^2 = 1 − 9900/40000 = 301/400,

which is close to 0.75. You would expect something like this, because over 10,000 plays the expected earning from each play of the machine is 0.1,
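A quick simulation shows how conservative the 301/400 bound is; the sketch below replays the 10,000-play game many times (the trial count is an arbitrary choice) and counts how often the house's total lands strictly between 800 and 1200:

```python
import random

random.seed(2)
plays, trials = 10_000, 1_000

def house_earnings():
    # House earns +1 with probability 0.55, pays out (net -1) with probability 0.45.
    return sum(1 if random.random() < 0.55 else -1 for _ in range(plays))

hits = sum(1 for _ in range(trials) if 800 < house_earnings() < 1200)
empirical = hits / trials
cheb_lower = 1 - 9900 / 200**2   # Chebyshev lower bound = 301/400 = 0.7525
print(f"simulated {empirical:.3f} vs Chebyshev lower bound {cheb_lower:.4f}")
```

The simulated probability comes out around 0.95 (a normal approximation predicts roughly the same), comfortably above the guaranteed 0.7525.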
so this is probably not a very bad bound. The probability is at least 301/400; it can be more. It is understood that the Chebyshev bounds are not very tight, but they give you an idea, a feeling about the probability of the event you are trying to estimate. Now another example — I have collected a lot of different examples, because there are so many different situations in which you have to learn how to apply Chebyshev's inequality. This one says that a fair die is rolled independently 3 times, which means there is no bias and the outcomes of the three rolls are independent. Define x_i = 1 if the i-th roll yields a perfect square, and 0 otherwise. First find the PMF of x_i — we need to do all this work before we apply Chebyshev's inequality — then, with y = x_1 + x_2 + x_3, find the PMF of y, and then verify Chebyshev's inequality: we will compute the actual probability, also get the bound from Chebyshev's inequality, and compare. Obviously we expect that, since the inequality gives an upper bound, the actual probability we compute will be less than what Chebyshev's inequality gives. That is the whole idea, and if you work it out in detail you will get a good feeling for the whole thing. So let us see the solution procedure. Now, 1 and 4 are the only perfect squares among the six numbers that can come up when a die is rolled.
So, since 1 and 4 are the only perfect squares, P(x_i = 1) = 2/6 = 1/3 and P(x_i = 0) = 4/6 = 2/3 for all i, and the x_i are independent. Now take y = x_1 + x_2 + x_3; y takes the values 0, 1, 2 and 3, according as none of the rolls shows a perfect square, one of them does, two of them do, or all three do. The PMF of y is not difficult to compute. y = 0 means all three variables take the value 0, and since they are independent this is the product P(x_1 = 0) P(x_2 = 0) P(x_3 = 0) = (2/3)^3 = 8/27. Similarly, for P(y = 1), take the case x_1 = 1 and x_2 = x_3 = 0; there are three such combinations, where one variable is 1 and the other two are 0, so P(y = 1) = 3 × (1/3) × (2/3)^2 = 3 × (1/3) × (4/9) = 12/27. For y = 2 the same argument gives three combinations with two variables equal to 1 and one equal to 0, so P(y = 2) = 3 × (1/3)^2 × (2/3) = 6/27. And P(y = 3) requires all three to equal 1, so that probability is (1/3)^3 = 1/27. Now you can make sure you have computed the right PMF by adding up all these probabilities: 8 + 12 + 6 + 1 = 27,
so all these probabilities add up to 1, and this is the right PMF. From it we can immediately compute E[y] = 1 — I leave that to you: multiply each probability by the corresponding value of y and add up — and similarly E[y^2] = 45/27, so the variance comes out to Var(y) = 45/27 − 1 = 18/27 = 2/3. Now, for example, if you want an upper bound for P(|y − 1| >= 1/2), Chebyshev gives Var(y)/(1/2)^2 = 4 × 18/27 = 8/3. Here no verification is needed: the actual probability cannot be more than 1, whereas this number is more than 1 because we put k = 1/2, so the bound says nothing. A more meaningful verification is, for example, P(|y − 1| >= 1) <= Var(y)/1^2 = 18/27; we will actually compute this probability and show that it is less than 18/27, to make sure that this really does give an upper bound. Observe also that y has a binomial distribution with parameters (3, 1/3): the probability of success is 1/3 and the number of rolls is 3. Therefore, immediately, E[y] = np = 3 × 1/3 = 1 and Var(y) = npq = 3 × (1/3) × (2/3) = 2/3, which matches what we computed independently as 18/27. Fine; anyway, now we are comparing the actual computations of probabilities with the Chebyshev bounds.
So, look at P(|y − 1| >= 1). The event |y − 1| >= 1 holds when y takes the value 0, 2 or 3; note that when y = 0 we have |y − 1| = 1, and since the event is "greater than or equal to", y = 0 is included. So this probability equals P(y = 0) + P(y = 2) + P(y = 3) = 8/27 + 6/27 + 1/27 = 15/27 = 5/9. The Chebyshev bound, on the other hand, is Var(y)/1^2 = 18/27 = 2/3. Comparing the two numbers — 15/27 versus 18/27, that is, 5/9 versus 2/3 — you see that Chebyshev's is a loose upper bound. Let us take another event: P(|y − 1| >= 2), which is simply P(y = 3). Using the binomial probabilities, P(y = 3) = (1/3)^3 = 1/27, while Chebyshev's inequality gives Var(y)/4, because here k = 2: that is (18/27) × (1/4) = 1/6. Comparing 1/27 with 1/6, the gap widens; again a loose upper bound from Chebyshev's inequality. Finally, if you want to look at P(|y − 1| >= 0), we cannot apply the Chebyshev bound at all, because k is required to be strictly positive. You may compute the actual probability here, but the Chebyshev bound cannot be computed.
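Everything in this die example can be verified with exact arithmetic, since y is Binomial(3, 1/3). A small sketch:

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 3)
# Binomial(3, 1/3) PMF: {0: 8/27, 1: 12/27, 2: 6/27, 3: 1/27}
pmf = {j: comb(3, j) * p**j * (1 - p)**(3 - j) for j in range(4)}

mean = sum(j * pr for j, pr in pmf.items())                 # E[y] = 1
var = sum(j * j * pr for j, pr in pmf.items()) - mean**2    # 18/27 = 2/3

actual_ge1 = pmf[0] + pmf[2] + pmf[3]   # P(|y-1| >= 1) = 15/27 = 5/9
bound_ge1 = var / 1**2                  # Chebyshev bound 18/27 = 2/3
actual_ge2 = pmf[3]                     # P(|y-1| >= 2) = 1/27
bound_ge2 = var / 2**2                  # Chebyshev bound 1/6
print(actual_ge1, bound_ge1, actual_ge2, bound_ge2)
```

Both actual probabilities sit below their Chebyshev bounds (5/9 < 2/3 and 1/27 < 1/6), exactly as the lecture's comparison found.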
What I have tried to show you through these various examples is, of course, how to set up the probability so that a bound can be computed from Chebyshev's inequality, and also to give you a feeling that the bounds we compute are loose bounds, not tight ones. Thirdly, what I would like to show you in the next lecture is that, apart from computing bounds, Chebyshev's inequality has also proved a very useful tool for proving convergence theorems. That is where you will see how useful a tool it is, and that will be the topic of the next lecture.