So, we introduced the idea of expected value for the case where X is a discrete random variable; what if X is a continuous random variable? The important thing to remember here is that sum is to integral as discrete is to continuous, which is to say any time we use a sum for a discrete quantity, it translates into an integral for a continuous quantity. And so we define the following: let X be a continuous random variable with probability density function f. The expectation of X is defined as the integral from minus infinity to infinity of x f(x) dx. This is also known as the expected value, and, like the expected value of a discrete random variable, it corresponds to the average value of the random variable.

So, let X have the probability density function f(x) = e^(-x) for x ≥ 0 and f(x) = 0 for x < 0, and let's find and interpret the expectation of X. Now, to keep my mathematician card from being revoked, we'll go through all the steps. And again, there is some calculus here, but if the calculus seems too daunting, rest assured that the calculus is not the point of the process. In other words, don't worry about the details of the calculus; focus on what we're trying to find.

To begin with, since f(x) is zero for x < 0, we only need to evaluate the integral for positive x, where we can use our formula e^(-x). And so our expectation is the integral from 0 to infinity of x e^(-x) dx. Now, since the upper bound is infinity, this is properly speaking an improper integral, so we rewrite it as the limit, as b goes to infinity, of the finite integral from 0 to b of x e^(-x) dx. This is one of the integrals we do by parts: let u = x and dv = e^(-x) dx, so differentiating and antidifferentiating gives du = dx and v = -e^(-x). Integration by parts then gives the antiderivative -x e^(-x) - e^(-x). Evaluating from 0 to b and taking the limit as b goes to infinity, the terms involving e^(-b) vanish and the limit is 1. And so the average value of our random variable is 1.
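The calculation above can be sanity-checked numerically. This is not from the lecture, just a quick sketch: a simple midpoint-rule integrator (the truncation bound 40 and step count are my own choices; the exponential tail beyond that is negligible) approximating E[X] = ∫₀^∞ x e^(-x) dx.

```python
import math

def expectation(g, f, a, b, n=200_000):
    """Approximate E[g(X)] = integral of g(x) f(x) over [a, b] by the midpoint rule."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) * f(a + (i + 0.5) * h) for i in range(n)) * h

# pdf from the example: f(x) = e^(-x) for x >= 0, and 0 elsewhere
f = lambda x: math.exp(-x)

# E[X] = integral from 0 to infinity of x e^(-x) dx; truncating at 40
# loses only a negligible tail (about 41 * e^(-40))
mean = expectation(lambda x: x, f, 0.0, 40.0)
print(round(mean, 4))  # → 1.0
```

The numerical answer agrees with the integration-by-parts result: the average value is 1.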
Now, remember our interpretation: the expectation is the average value of the random variable. This means the mean value of our random variable is given by the expectation.

We can also consider the expected value of other quantities. In general, if g(X) is some function of the random variable X with pdf f(x), the expected value of g(X) is computed by the integral from minus infinity to infinity of g(x) f(x) dx. Now let's see what this means. Suppose X is the random variable with the pdf from before, and let's find and interpret the expectation of X cubed. Our g function is x³, so the expectation of X³ is the integral from minus infinity to infinity of x³ f(x) dx. Since, again, f(x) = 0 for x strictly less than 0, we can omit that part of the integral and only concern ourselves with the integral from 0 to infinity of x³ e^(-x) dx. And since I've done one derivation by hand, I've satisfied the requirements of my mathematician card: while this integral can be done by repeated integration by parts, we might as well take advantage of a computer algebra system to evaluate it, and we find the integral equals 6.

Now, there are many quantities we could find, but a useful one is the deviation from the mean, X - μ. More useful is the squared deviation from the mean, (X - μ)². And most useful is the expected value of the squared deviation from the mean: the expectation of (X - μ)². Again, this tells us the average value of the square of the deviation from the mean. This is also known as the variance, and so we can write that the variance of our random variable X is the expected value of (X - μ)². And this ties back to our standard deviation, which is the square root of the variance. Now, while the variance is defined in terms of the squared deviation from the mean, in practice that definition is a little hard to compute with.
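The E[g(X)] computation can be sketched numerically the same way. This block is my own illustration, not the lecture's CAS session: a midpoint-rule approximation of ∫₀^∞ x³ e^(-x) dx (the truncation bound 60 is an assumption chosen so the tail is negligible).

```python
import math

def integrate(h, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    step = (b - a) / n
    return sum(h(a + (i + 0.5) * step) for i in range(n)) * step

# E[g(X)] with g(x) = x^3 and pdf f(x) = e^(-x) on [0, infinity)
third_moment = integrate(lambda x: x**3 * math.exp(-x), 0.0, 60.0)
print(round(third_moment, 3))  # → 6.0
```

This matches the exact value 3! = 6 that a computer algebra system reports.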
So let's see if we can find a different way. Suppose X has pdf f(x); then we can find the variance by expanding and breaking apart the integral. We expand (x - μ)² as x² - 2μx + μ², and the integral of a sum or difference is the sum or difference of the integrals. Now remember that the integral of a function times our pdf is the expectation of that function. So the integral from minus infinity to infinity of x² f(x) dx is the expectation of X². Next, remember that the mean μ is a constant value, and since this is an integral, we can move constants out front. So the middle term becomes -2μ times the integral from minus infinity to infinity of x f(x) dx. But that integral is the expected value of X, which is μ itself, so this term becomes minus two times the square of the expectation. And finally, again, μ² is a constant, so we move it to the front, and the remaining integral from minus infinity to infinity of f(x) dx gives the probability that our random variable lies in the interval from minus infinity to infinity. But it has to, so the value is 1, and this last term just becomes μ², which is also the square of the expectation. Putting the pieces together, the variance is the expectation of X² minus 2μ² plus μ², which simplifies to the expectation of X² minus μ².

And so this gives us a nice computing formula for the expected squared deviation from the mean. Let X be a continuous random variable with pdf f(x). The expected value of the squared deviation from the mean, also known as the variance, is the expected value of X² minus the square of the expectation of X.

And so suppose X has a pdf that looks like this, and let's find the mean and the standard deviation. The mean is the integral from minus infinity to infinity of x f(x) dx; running that through our favorite computer algebra system, we find the average value of X is zero.
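The algebra above can be checked numerically as well. This sketch (mine, not the lecture's) computes the variance of the running exponential example both ways, from the definition E[(X - μ)²] and from the shortcut E[X²] - (E[X])², using the same midpoint-rule integrator; the two should agree.

```python
import math

def integrate(h, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    step = (b - a) / n
    return sum(h(a + (i + 0.5) * step) for i in range(n)) * step

f = lambda x: math.exp(-x)  # pdf from the running example, supported on [0, infinity)

mu = integrate(lambda x: x * f(x), 0.0, 60.0)                          # E[X] = 1
var_definition = integrate(lambda x: (x - mu) ** 2 * f(x), 0.0, 60.0)  # E[(X - mu)^2]
var_shortcut = integrate(lambda x: x**2 * f(x), 0.0, 60.0) - mu**2     # E[X^2] - (E[X])^2
print(round(var_definition, 4), round(var_shortcut, 4))  # → 1.0 1.0
```

Both routes give the same number, as the expansion of (x - μ)² guarantees.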
Now remember the standard deviation is the square root of the variance, and the variance can be computed as the difference between the expectation of X² and the square of the expectation of X. We've already found the expectation of X, which is the mean, zero, so we can find the variance by computing the expectation of X². Setting up that integral and evaluating it somehow, we find it equals one-half, and so the variance is the difference between the expectation of X², one-half, and the square of the expectation of X, zero, which is one-half. And so the standard deviation is the square root of the variance: the square root of one-half.
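The transcript's pdf for this last example isn't shown, so as an illustration only, here is a hypothetical density that matches the stated results: f(x) = e^(-x²)/√π, a normal density with mean 0 and variance 1/2. The code below (my sketch, not the lecture's) computes the mean and standard deviation numerically with a midpoint-rule integrator, truncating at ±20 where the tails are negligible.

```python
import math

def integrate(h, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    step = (b - a) / n
    return sum(h(a + (i + 0.5) * step) for i in range(n)) * step

# Hypothetical pdf (the lecture's slide isn't shown): f(x) = e^(-x^2)/sqrt(pi),
# a normal density whose mean is 0 and whose variance is 1/2.
f = lambda x: math.exp(-x * x) / math.sqrt(math.pi)

mu = integrate(lambda x: x * f(x), -20.0, 20.0)                    # E[X], about 0
variance = integrate(lambda x: x * x * f(x), -20.0, 20.0) - mu**2  # E[X^2] - mu^2, about 1/2
sigma = math.sqrt(variance)
print(round(sigma, 4))  # → 0.7071
```

The standard deviation comes out as √(1/2) ≈ 0.7071, matching the result in the lecture.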