What about the variance of a discrete random variable? As before, we can compute the variance directly from the probabilities, and, also as before, we'd like to have formulas for some common distributions. So, suppose our sample space consists of equally likely outcomes, like rolling a fair six-sided die. While we could calculate the variance as the expected value of the square of the difference between the random variable and the mean, remember that the variance can also be found as the expected value of the square minus the square of the expected value: Var(X) = E[X²] - (E[X])². We already found the expected value of X, that's 3.5, so we can find the expected value of the square, which is the sum of the squares of the outcomes times their probabilities: E[X²] = (1² + 2² + 3² + 4² + 5² + 6²)/6 = 91/6. Consequently, we can find the variance: Var(X) = 91/6 - (3.5)² = 35/12 ≈ 2.917.

Unfortunately, here we run into a problem. To compute the expected value of the squares, we need to know the sum of the squares of the outcomes. While there are formulas for these sums, they don't really offer us much useful insight; they're just additional formulas we have to keep track of, and if that's all they are, we might as well leave them in some handy reference book.

On the other hand, the binomial distribution does have a nice way of computing the variance. Remember that for a binomial distribution with n trials and success probability p, the mean is E[X] = np. To find the variance, let's find the expected value of the squares:

E[X²] = Σ_{k=0}^{n} k² C(n,k) p^k (1-p)^(n-k).

Note that if k = 0, the first term vanishes, so we can really begin our summation at k = 1. And for k ≥ 1, k² C(n,k) will simplify: writing out the factorials,

k² C(n,k) = k² · n!/(k! (n-k)!) = k · n!/((k-1)! (n-k)!) = n k · (n-1)!/((k-1)! (n-k)!).

Many of the steps in this simplification are really the same as the steps we used to find the mean of the binomial distribution in the first place. Now, note that (n-1) - (k-1) = n - k, so this expression becomes n k · C(n-1, k-1).

Again, n and p are constants, so we can remove n and one factor of p outside the summation, and we'll re-index using j = k - 1. This gives us

E[X²] = np Σ_{j=0}^{n-1} (j+1) C(n-1, j) p^j (1-p)^(n-1-j),

and we can split this latter summation:

E[X²] = np [ Σ_{j=0}^{n-1} j C(n-1, j) p^j (1-p)^(n-1-j) + Σ_{j=0}^{n-1} C(n-1, j) p^j (1-p)^(n-1-j) ].

Let's focus on the first summation. While we're starting at j = 0, once again that j = 0 term vanishes, so we can start our summation at j = 1. And as before, j C(n-1, j) can be rewritten as (n-1) C(n-2, j-1). Since n - 1 is a constant factor, we'll remove a factor of n - 1 and another factor of p, so our first summation becomes

(n-1)p Σ_{j=1}^{n-1} C(n-2, j-1) p^(j-1) (1-p)^(n-1-j).

If we re-index this series with k = j - 1, we get

(n-1)p Σ_{k=0}^{n-2} C(n-2, k) p^k (1-p)^(n-2-k),

and that series is the binomial expansion of (p + (1-p))^(n-2), which simplifies to 1^(n-2) = 1. So our first summation works out to (n-1)p.

Similarly, the second summation is the binomial expansion of (p + (1-p))^(n-1), and that works out to be 1. And so the expected value of X² is

E[X²] = np((n-1)p + 1).

But remember, the variance is the expected value of X² minus the square of the expected value of X. We just computed E[X²], and the expected value of X, again, is np. So if we simplify, we get

Var(X) = np((n-1)p + 1) - (np)² = n²p² - np² + np - n²p² = np(1 - p),

which is our formula for the variance of the binomial distribution.
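As a quick numerical check on the die example above, here is a minimal Python sketch, not part of the original lecture, that computes E[X], E[X²], and the variance directly from the six equally likely outcomes, using exact fractions so the values 7/2, 91/6, and 35/12 come out exactly.

```python
from fractions import Fraction

# Six equally likely outcomes of a fair die, each with probability 1/6.
outcomes = range(1, 7)
p = Fraction(1, 6)

mean = sum(p * x for x in outcomes)         # E[X] = 7/2 = 3.5
mean_sq = sum(p * x**2 for x in outcomes)   # E[X^2] = 91/6
variance = mean_sq - mean**2                # E[X^2] - (E[X])^2 = 35/12

print(mean, mean_sq, variance, float(variance))  # 7/2 91/6 35/12 2.9166...
```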
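And to sanity-check the binomial derivation, the following sketch (again, an illustration rather than anything from the lecture) computes E[X²] - (E[X])² directly from the binomial pmf and compares it against the closed form np(1 - p); the parameter values n = 10 and p = 0.3 are arbitrary choices.

```python
from math import comb

def binomial_variance_direct(n, p):
    """Compute Var(X) = E[X^2] - (E[X])^2 straight from the binomial pmf."""
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    mean = sum(k * pk for k, pk in enumerate(pmf))        # should equal n*p
    mean_sq = sum(k**2 * pk for k, pk in enumerate(pmf))  # should equal n*p*((n-1)*p + 1)
    return mean_sq - mean**2

n, p = 10, 0.3
print(binomial_variance_direct(n, p))  # ~2.1, up to floating-point error
print(n * p * (1 - p))                 # closed form: 2.1
```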