Welcome back to our lecture series Math 1220, Calculus II for students at Southern Utah University. As usual, I'll be your professor today, Dr. Andrew Missildine. In lecture 24 here, we're going to talk about the idea of chance, probability, randomness, and what this has to do with calculus. The examples in this lecture are mostly derived from Section 8.5 on probability from James Stewart's Calculus textbook. And so when one talks about probability, what does that really even mean? Probability is the mathematics of randomness. For example, in tossing a fair coin, we know that the outcome is either going to be heads or tails. On a particular toss, we can't predict what's going to happen, right? At least we don't know what's going to happen. We can make some guesses, of course. But if we toss the coin many times, we can observe that the number of times we get heads is approximately the same as the number of times we get tails. We don't expect perfection there, but the number of heads and the number of tails should be somewhat close to each other. And so it's reasonable to assign a probability of one-half to tails and a probability of one-half to heads. So we'd say something like: the probability of tails is one-half, one out of two. The probability of heads is likewise one out of two, one-half. We want to be a little bit more precise about that in terms of our calculus of probability here. So in this lecture, we're going to talk a lot about the idea of a continuous random variable. To start off with, what is a random variable? Well, a random variable is, as the name suggests, a variable whose value is determined by chance, by some random experiment. So as an example, if we were to flip a coin, the options are heads or tails. Let's assign the number zero if we flip a tails, and let's assign the number one if we flip a heads. And so our random variable will take the value zero or one based upon the outcome of the coin flip.
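Just to illustrate that long-run behavior (this sketch isn't from the lecture or from Stewart; the function name and seed are my own choices), here's a quick Python simulation of many fair coin tosses, using the same 0-for-tails, 1-for-heads encoding:

```python
import random

def coin_flip_frequencies(n_flips, seed=0):
    """Simulate n_flips fair coin tosses (0 = tails, 1 = heads)
    and return the observed fraction of heads and of tails."""
    rng = random.Random(seed)          # fixed seed so the run is repeatable
    flips = [rng.randint(0, 1) for _ in range(n_flips)]
    heads = sum(flips)                 # each heads contributes a 1
    tails = n_flips - heads
    return heads / n_flips, tails / n_flips

heads_frac, tails_frac = coin_flip_frequencies(100_000)
print(heads_frac, tails_frac)  # both should be close to 0.5
```

With 100,000 flips, both fractions land very near one-half, which is exactly the "approximately the same number of heads and tails" observation that justifies assigning probability 1/2 to each outcome.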
And so there are probabilities associated with that. What's the probability that the random variable will be zero? Well, it'll be one-half, because that's how often tails comes up. And what's the probability that the random variable will be one? Again one-half, the probability of heads showing up. So that's the idea of a random variable. Flipping the coin is an example of a discrete random variable: its values don't come from some continuum of points, not from an interval, just from a handful of points, in that case just zero and one. By contrast, we say that a random variable X is continuous if the values that X can take on range over some interval of real numbers. With continuous random variables, we construct what's called a probability density function. The probability density function of a continuous random variable X is defined along the interval a to b, where a to b are the values that X can take on. To be a probability density function, f has to satisfy two conditions. First of all, our function f of x can never be negative: for the entire interval a to b, every x value has to produce an output greater than or equal to zero. The idea here is that no event can have a negative probability. It could be zero, right? But it can't be negative. It can't be like, oh, there's a negative 13% chance I'm going to pull a brown M&M out of my bag. That's absurd. So our probability density function always has to be non-negative. The other thing we want is that the integral from a to b of f of x dx, that is, the area under the curve over the entire domain, has to equal one. And again, the idea here is that if we sum up all of the probabilities, the total should equal one. Sometimes we forget this, but this integral is a sum. It's an infinite sum, but it's a sum nonetheless.
That's why the integral symbol is this elongated S, S for sum, as opposed to what maybe Strong Bad thinks S stands for. So the integral from a to b of f of x dx should always equal one. And this mimics the idea of what we mean by a probability, right? For a probability density function, you can't have a negative probability, and the sum of all the probabilities needs to equal one. Sometimes people refer to this just as a density function for short. So if you hear me say that, that's what it means. Or if I say pdf, that doesn't mean like a document; that means a probability density function here. With a probability density function in hand, we define the probability of a continuous random variable in the following way: the probability that X will land between the numbers c and d equals the integral from c to d of f of x dx. So let's put this to a very quick example. Show that the function defined by f of x equals 3 over 26 times x squared is a probability density function on the interval one to three, and let's use that to find the probability that X will take a value between one and two. To show that it's a probability density function, we have two things to show. First we need to show that our function f of x is never negative, and it's not too hard to do that. Notice that for all x, x squared is always non-negative; there's no x whose square is negative. And so if you multiply that by the positive number 3 over 26, the result is always going to be non-negative. So that's a pretty easy thing to check. If you're wondering why in the world we choose 3 over 26, which seems like an odd choice, we'll actually see the reasoning in just a second. The next thing we want to show is that the area under the entire curve is equal to 1. So if we apply that to this function, we integrate from 1 to 3: the integral of 3 over 26 times x squared dx. As this is an integral, I can take out the 3 over 26.
I find an antiderivative for x squared, which is going to be x cubed over 3, evaluated from 1 to 3. The 3 on top cancels with the 3 on the bottom. If we plug in the 3, we're going to get 3 cubed, which is 27, minus 1 cubed, which is 1. And so we get 26 over 26, and thus the integral equals 1. Okay, I see: 3 over 26 was the factor chosen so that the area under the curve is equal to 1. This is something that statisticians and people studying probability do all the time. We might have a distribution that models things a certain way, maybe our distribution looks like a parabola, and we want to scale the curve so that the area under it is 1. That's how we get a probability model. So then for the last part, we want to determine the probability that our random variable (these are always capitalized) falls between 1 and 2. That's going to equal the integral from 1 to 2 of our density function 3 over 26 times x squared dx. And so here, if we take the antiderivative, which we've already done, we're going to get x cubed over 26, evaluated from 1 to 2. We plug in 2, which gives us 8 over 26; we plug in 1, which gives us 1 over 26. So this gives us 7 over 26, which, as this is a probability, we should probably write as a decimal or something. This gives us approximately 0.269, so about a 27% chance of our random variable being assigned a number between 1 and 2. Then of course, if we were to take the complement, we'd have about a 73% chance that the random variable would be assigned a number between 2 and 3, if we're interested in that. So we can study probabilities using these probability density functions. One thing I should also mention, a sort of important little remark here, is something that kind of baffles students when it comes to continuous random variables: what if I ask, what's the probability that X will equal some specific number c?
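The whole worked example can be mirrored in a few lines of Python (a sketch of my own, not from the lecture), using the closed-form antiderivative F(x) = x³/26 that we just found:

```python
# The density from the example: f(x) = (3/26) x^2 on [1, 3],
# with antiderivative F(x) = x^3 / 26 (the 3's cancel, as in the lecture).
def f(x):
    return (3 / 26) * x**2

def F(x):
    """Antiderivative of f, already including the 3/26 factor."""
    return x**3 / 26

# Condition 1 (spot-check): f is non-negative on a grid over [1, 3]
assert all(f(1 + 2 * k / 1000) >= 0 for k in range(1001))

# Condition 2: total area under f over [1, 3]
total_area = F(3) - F(1)   # (27 - 1) / 26, i.e. 1 up to rounding

# Probability that X lands between 1 and 2
p_1_to_2 = F(2) - F(1)     # (8 - 1) / 26 = 7/26
print(total_area, p_1_to_2)
```

Running this prints a total area of 1 (up to floating-point rounding), confirming the 3/26 normalization, and a probability of 7/26, roughly 0.269.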
Well, according to the definition, we'd integrate from c to c of f of x dx, which, no matter what the function is, by properties of integration always equals 0. And so there's a 0% chance that the random variable will be assigned to c. Now that might seem weird, because it's like, can't the random variable be c if c sits between a and b? Shouldn't there be some chance? And this is, of course, the misconception I'm trying to clarify here. What we are saying is not that X can never be c. What we're saying is that we cannot predict that X will equal c. If there are infinitely many options, there's no way of predicting with any certainty that X will be exactly c, because it's ranging over an interval. A probability of 0 doesn't mean that it's impossible; a probability of 0 means it's unpredictable. The problem is that when we work with discrete probability, there's no distinction between those two things, and so we kind of conflate the issues together. But this is very important for continuous random variables. For a probability to be positive, it is necessary that we take some interval. We can predict that a random variable will fall within a certain margin, but we cannot predict that it will be an exact value.
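This single-point remark can be seen numerically with the same density from the example. Here's a small helper of my own (the name `prob` is mine, not standard) that computes P(c ≤ X ≤ d) from the antiderivative x³/26:

```python
# P(c <= X <= d) for the density f(x) = (3/26) x^2 on [1, 3],
# computed from the antiderivative F(x) = x^3 / 26.
def prob(c, d):
    return d**3 / 26 - c**3 / 26

single_point = prob(2, 2)      # integrating from 2 to 2
small_window = prob(1.9, 2.1)  # a narrow interval around 2
print(single_point)  # 0.0 — a single point carries no probability
print(small_window)  # small but positive
```

The integral from 2 to 2 is exactly 0, while any genuine interval around 2, however narrow, yields a positive probability. That's the distinction: intervals can be predicted, exact values cannot.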