We claim that if X and Y are independent and normally distributed with means μ_x, μ_y and standard deviations σ_x, σ_y, then the difference X − Y is normally distributed with mean equal to the difference of the means and standard deviation equal to the square root of the sum of the squares of the standard deviations. We can derive these formulas using multivariable calculus.

Suppose X has pdf f(x), Y has pdf g(y), and X and Y are independent. What do we know about the difference X − Y?

First, let's find the expected value. The expected value of the difference is a double integral over all real numbers of (x − y) times the joint probability density function h(x, y). Since X and Y are independent, the probability of both is the product of the two individual probabilities, which means the joint density h(x, y) is really f(x) times g(y). We can expand the product, and since the integral of a sum or difference is the sum or difference of the integrals, we can evaluate two separate integrals.

Consider the first integral. Since g(y) does not depend on x, we can remove it from the inner integral. The integral that remains, the integral over all real numbers of x f(x) dx, is just the expectation of X. And since the expectation of X is a constant, we can pull it out front. What's left, the integral over all real numbers of g(y) dy, is the probability that Y lies somewhere between negative infinity and positive infinity, which must be 1. So the first integral reduces to E(X).

By essentially the same argument for the second integral: y g(y) is constant with respect to x, so we can move it outside the inner integral. The integral over all real numbers of f(x) dx is the probability that X is between plus and minus infinity, and that's just 1.
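The expectation computation just described can be summarized as follows (a reconstruction of the on-screen derivation, writing E for expected value):

```latex
\begin{aligned}
E(X-Y) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x-y)\,h(x,y)\,dx\,dy
        = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x-y)\,f(x)\,g(y)\,dx\,dy \\
       &= \int_{-\infty}^{\infty} g(y)\left(\int_{-\infty}^{\infty} x\,f(x)\,dx\right)dy
        - \int_{-\infty}^{\infty} y\,g(y)\left(\int_{-\infty}^{\infty} f(x)\,dx\right)dy \\
       &= E(X)\int_{-\infty}^{\infty} g(y)\,dy - \int_{-\infty}^{\infty} y\,g(y)\,dy
        = E(X) - E(Y).
\end{aligned}
```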
And the remaining integral of y g(y) is just the expected value of Y. So this leads to the theorem: let X and Y be independent random variables with probability density functions f(x) and g(y). Then the expected value of their difference is the difference of the expected values. It's worth noting that this is true regardless of the actual probability density functions.

Now, to find the variance, we need to find the expected value of the square of the difference. We can expand the square and break the integral into three pieces.

Consider the first integral. Since g(y) does not depend on x, we can remove it from the inner integral, and the remaining inner integral is just the expected value of X². Again, the expected value of X² is a constant, so we can pull it out front, and the integral that's left represents the probability that Y is between plus and minus infinity, which equals 1. So the first integral is the expected value of X².

Similarly, in the second integral, y² g(y) does not depend on x, so we can move it to the front. The inner integral over f(x) is equal to 1, and the integral we have left is the expected value of Y².

Finally, in the last remaining integral, both 2 and y g(y) are independent of x. The 2 is in fact a constant, so we can move it all the way to the front. The inner integral is the expected value of X, which is a constant and also moves to the front. The remaining integral is the expected value of Y. So the expected value of the square of the difference is the expected value of one square, plus the expected value of the other square, minus 2 times the product of the two expected values.

Now remember that the variance of a random variable is the expected value of its square minus the square of its expected value. So we can write the variance of the difference X − Y as the expected value of the square of the difference minus the square of the expected value of the difference. And we actually know what all of these things are.
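The second-moment computation can likewise be written out (again reconstructing the on-screen work):

```latex
\begin{aligned}
E\!\left[(X-Y)^2\right]
  &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left(x^2 - 2xy + y^2\right) f(x)\,g(y)\,dx\,dy \\
  &= E(X^2) + E(Y^2) - 2\,E(X)\,E(Y),
\end{aligned}
\qquad
\mathrm{Var}(X-Y) = E\!\left[(X-Y)^2\right] - \bigl(E[X-Y]\bigr)^2.
```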
So we substitute in the expected value of the square of the difference, and since the expected value of the difference is the difference of the expected values, we can substitute that in as well. If we expand and simplify, after all the dust settles we have the expected value of X² minus the square of the expected value of X, which is just the variance of X, and similarly the expected value of Y² minus the square of the expected value of Y, which is the variance of Y.

Putting our results together: suppose X and Y are independent random variables whose probability density functions have means μ_x, μ_y and variances Var(X), Var(Y) respectively. Then the difference X − Y has mean μ_x − μ_y and variance Var(X) + Var(Y). With somewhat more work, which we omit, we can show that if X and Y are independent and normally distributed, the difference X − Y will also be normally distributed.

And finally, since the standard deviation is the square root of the variance, we have the following: if X and Y are independent and normally distributed with means μ_x, μ_y and standard deviations σ_x, σ_y, then the difference X − Y will also be normally distributed, with mean equal to the difference of the means and standard deviation equal to the square root of the sum of the squares of the standard deviations.
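As a quick numerical sanity check of the theorem (not part of the lecture; this sketch uses NumPy with arbitrary example parameters):

```python
# Check: for independent X ~ N(mu_x, sigma_x) and Y ~ N(mu_y, sigma_y),
# X - Y should have mean mu_x - mu_y and
# standard deviation sqrt(sigma_x**2 + sigma_y**2).
import numpy as np

rng = np.random.default_rng(0)
mu_x, sigma_x = 5.0, 3.0   # example parameters, chosen arbitrarily
mu_y, sigma_y = 2.0, 4.0
n = 1_000_000

x = rng.normal(mu_x, sigma_x, n)
y = rng.normal(mu_y, sigma_y, n)
d = x - y

print(d.mean())  # close to mu_x - mu_y = 3.0
print(d.std())   # close to sqrt(3**2 + 4**2) = 5.0
```

With a million samples, the sample mean and standard deviation of the differences land very close to the theoretical values 3 and 5; a histogram of `d` would likewise look like a normal curve centered at 3.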