Probability is one of the hardest and one of the easiest areas of mathematics, and what makes it hard or easy is whether or not you learn the definitions. If you learn the definitions, probability is very easy. If you don't, it is very, very difficult. So let's go ahead and take a look at those definitions.

Probability begins with the notion of uncertainty. First of all, we define a random experiment as one whose outcomes are, in practice, unpredictable. The important thing here is that random means unpredictable, and that is one of the key ideas that has to be remembered about probability in general. If I take the set of all possible outcomes of my random experiment, I get what we call the sample space. A set of some of those outcomes, really a subset of the sample space, is something we call an event, and the event occurs on one trial of the random experiment if any one of its component outcomes is the result of that trial. As always, if you don't know the definitions, you can't do mathematics; every definition here is a critical part of understanding probability.

So, again, to emphasize: what makes an experiment random is the unpredictability of the outcome. Let's say I flip a coin and toss it in the air. Whether it lands heads or tails is unpredictable in practice. In reality it is predictable, in the sense that it is completely deterministic: it's determined by things like air density, how hard I flip the coin, and so on. But in practice it's unpredictable, so we say that it is random. Whether it's going to snow tomorrow? Well, I can't predict whether it's going to snow tomorrow or not, so again we have this level of unpredictability, and it is a random experiment. Which candidate is going to win the election? Well, that's completely predictable, because the winner is determined years in advance.
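The definitions above (sample space, event, trial) can be sketched directly as sets. This is a minimal illustration using a hypothetical die-rolling experiment, not an example from the lecture:

```python
import random

# Hypothetical random experiment: roll a six-sided die.
# The sample space is the set of all possible outcomes.
sample_space = {1, 2, 3, 4, 5, 6}

# An event is a subset of the sample space, e.g. "roll an even number".
even = {2, 4, 6}
assert even.issubset(sample_space)

# One trial of the random experiment produces one outcome.
outcome = random.choice(sorted(sample_space))

# The event occurs on this trial if the trial's outcome
# is one of the event's component outcomes.
occurred = outcome in even
print(f"outcome = {outcome}, event 'even' occurred: {occurred}")
```

The set-membership check at the end is exactly the definition in the text: an event occurs when the result of the trial is one of its component outcomes.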
And there's a cabal that determines every winner of every election from now until the year infinity. Well, actually, it's unpredictable. We cannot guarantee which candidate is going to win the election; at least, we hope that's the case. We really don't want to think that our elections are determined by somebody well in advance. So, again, which candidate wins the election is unpredictable, and therefore random. Whether a jury convicts or acquits a defendant? Well, again, that's predictable, since the decision has already been made by the time of the trial. Why bother having the trial if the decision has already been made? Again, this is something that we hope is, in practice, unpredictable, and so it is a random experiment.

The thing to emphasize here is that random does not mean reasonless. There is a reason why the coin lands heads or tails; it has to do with physics. There is a reason whether it snows tomorrow; again, physics. There is a reason why one candidate wins the election, and a reason why the jury convicts or acquits a defendant. But that has nothing to do with randomness. What determines randomness is whether or not the outcome is unpredictable in practice. If you can't guarantee the outcome, it's unpredictable.

Now, the probability itself, which we haven't yet defined, is a number, but what that number tells us depends on which of two views of probability you're using. Both views are important; it is not a question of one view being right and the other not. It depends on the context. The most common view of probability is known as the frequentist viewpoint: the probability of an event is the frequency with which that event occurs when the underlying random experiment is repeated an infinite number of times.
Now, if we don't repeat the experiment an infinite number of times, then our probability is going to be an approximation. For example, when we say the probability that a coin lands heads is one half, what we mean is that if we flip the coin many, many times, then about half the time the coin will land heads. If we flipped the coin an infinite number of times, it would land heads exactly half the time; since we can't, it lands heads somewhere around half the time.

The other important viewpoint of probability is known as the Bayesian viewpoint: probability measures our confidence that an event will occur the next time the experiment is run. So let's talk about the weather. The probability of rain tomorrow is 70%, or whatever it is. That expresses my confidence that it will rain tomorrow. The closer that probability is to 100%, the more confident we are that the event will occur. If the probability of rain is 10%, I'm not so confident that it will rain; if the probability of rain is 90%, I am very confident that it's going to rain.

So how do we find probabilities? As a general rule, calculating a true probability exactly is impossible. However, if we use the frequentist interpretation, we can find what's called an empirical probability. The empirical probability of an event is the frequency with which it occurred when the experiment was repeated some finite number of times. Note that this means we actually have to perform the random experiment a finite number of times; you cannot find an empirical probability without performing the random experiment. So, for example, let's say a coin is tossed 1000 times and it lands heads 437 times. I've done the experiment, I've recorded the results, and now I can talk about the empirical probability that the coin lands heads. Again, the only probability we can find is the empirical probability.
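The frequentist idea, that the observed frequency approaches the true probability as the number of trials grows, can be illustrated with a small simulation. This is a sketch, not part of the lecture; the fixed seed is just so the run is reproducible:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def frequency_of_heads(n: int) -> float:
    """Flip a simulated fair coin n times and return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

# As the number of flips grows, the observed frequency of heads
# should get closer and closer to the true probability 1/2.
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} flips: frequency of heads = {frequency_of_heads(n):.4f}")
```

For small n the frequency can stray noticeably from 0.5; for a million flips it is very close. That drift toward 1/2 is exactly the sense in which the frequentist probability is the limiting frequency over infinitely many repetitions.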
So we need the frequency with which the coin landed heads. It did so 437 times, and the experiment was repeated 1000 times. So the empirical probability, the frequency with which it landed heads, is 437 out of 1000, or 0.437.
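The computation in the lecture's example is just the observed count divided by the number of trials:

```python
# The lecture's recorded data: a coin tossed 1000 times landed heads 437 times.
trials = 1000
heads = 437

# Empirical probability = observed frequency of the event.
empirical_p = heads / trials
print(empirical_p)  # 0.437
```

Note that this number describes the data we actually collected; it approximates, but generally does not equal, the coin's true probability of landing heads.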