Man, I do not have the cards to back that last bet up. My butt is really on the line here. I guess I should have calculated my posterior probability.

When I say the word prediction, what do you think of? Crystal balls? Stock price projections? We usually think of seeing into the future as a task for spirits or well-funded analysts, but it's something humans do routinely in their everyday lives. Knowing where to look for your keys this morning was a prediction; guessing what would be tasty for lunch was a prediction. Even clicking on this video is an example of your ability to determine the future based on incomplete information. Good choice, by the way.

Bayes' theorem is a simple tool custom-built for the purpose of prediction. In ordinary language, it states that any event we see could be the result of many different circumstances, each of which must be considered when predicting what will happen next. The concept is such an essential part of how the world works that you use it instinctively without even thinking about it. If we're playing Russian roulette with an unknown number of bullets and the gun goes click once, you know it would be stupid to assume that means it's empty, even though an idiot might expect it to keep doing what it's done so far. There are several different situations that could have resulted in the event we saw, and when we're trying to predict the future, we need to consider each of them. Bayes' theorem is a formalized mathematical version of that principle, and it tells us exactly how we should update a guess about what's going to happen next based on what we've seen so far, homing in on the set of possible futures we're most likely to find ourselves in.

I say futures, plural, because that's how prediction actually works. Usually when we talk or think about predictions, we treat them like they're binary things. Someone says, "I think this will happen," and either it happens or it doesn't.
That's true once we've made it past the predicted event, but until then, it's a guess, and there's some amount of uncertainty associated with it. We very rarely talk about the likelihood of every possible outcome. We only really tell people what we think has the highest chance of happening, and ignore everything else. Will I get that report done by Friday? Yeah, I'd put the odds at about 75%. Of course, statistically speaking, there's a 5% chance I'll get into a car accident on the way to work, a 2.5% chance I'll have a stroke, maybe a 3-5% chance that I'll have a heart attack, a 10% chance that we'll have some sort of company emergency that needs to be dealt with first, a 1% chance that ... oh, yeah, sorry, I'll have it done.

That's great for brevity, but not so much for accuracy. By throwing out everything but the most likely option, we're losing a lot of resolution in our predictions. A person who would only give 51% odds on finishing the report would technically give their boss the same answer as someone who thought it was 99% certain. At least one of those guesses is way off, and knowing those probabilities is much more informative and valuable than a simple yes or no, especially if you want to know which of your employees is good at prediction.

Poker is a great example of how that sort of Bayesian thinking works in practice. Poker players win or lose by being accurate predictors. They need to be able to guess whether they have the best hand at the table or whether they should fold, and they need to convince the other players at the table that they're good at guessing. Importantly, they don't just take turns saying, "I think I will win." They use a fine-grained metric for certainty: how many chips they bet. The cardinal sin in poker isn't winning or losing hands. Even the best players can't control what cards they're dealt. The only thing that makes you look like a rank amateur is betting too much on a crap hand or too little on a good one.
Doing that deliberately and strategically by bluffing is important, but good players can tell if you're having trouble figuring out the proper balance of confidence and uncertainty in your betting. If you only manage to win a few bucks with a royal flush, you clearly have no idea what you're doing.

The same thing goes for estimating probabilities of future events. We can't expect anyone to predict the future exactly right all the time. Sometimes unlikely things happen, things that nobody could have seen coming. What's really important is getting the probabilities right, really nailing how likely each possibility is. For people who aren't used to thinking in a Bayesian fashion, that can look very different from how we usually think of successful prediction. Let's say that you make a set of 100 forecasts and you attach a 51% certainty to each of them, and it turns out that you were right about every single one. From the common all-or-nothing point of view, you should be thrilled: you predicted everything correctly. But from the Bayesian point of view, you should be devastated, because given the odds you attached to them, about half of your answers should have been wrong. Your probabilities were way off. You just performed the poker equivalent of calling instead of raising with four aces.

The Bayesian way of looking at the everyday predictions we make in the normal course of our lives might be a little weird at first. We're so used to discarding our second- and third-best guesses for the future when we talk to others, not to mention our estimates of their probability. All we usually get is what's in your number one spot and whether it happened or not. But the truth is, it's much more informative to have a few guesses about likely outcomes and how likely each of them is. It's a good mental check that forces us to consider alternatives and helps us evaluate how accurate our picture of the world really is.
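To put a number on that intuition: if each of those 100 forecasts really did have only a 51% chance of being right, a perfect record would be astronomically unlikely. A minimal sketch in Python, using just the numbers from the example above:

```python
# Probability of getting all 100 forecasts right if each one
# truly had only a 51% chance of being correct.
p_perfect_record = 0.51 ** 100
print(p_perfect_record)  # about 6e-30 -- essentially impossible

# What a well-calibrated 51% forecaster should actually expect:
expected_correct = 0.51 * 100
print(expected_correct)  # 51.0 -- roughly half right, half wrong
```

So a flawless record at 51% confidence isn't evidence of skill; it's evidence that the stated probabilities were badly miscalibrated.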
And instead of depending on a winner to keep winning, Bayesian thinking is a better way to know who's really on the ball and who might randomly decide to go all in on a pair of queens. Seriously, dude? Thank you very much for watching. Don't forget to subscribe, like, and share. And don't stop thunking.

Are they gone? Hey nerds, come here for a second. I've wanted to get into the nuts and bolts of this thing for a while, but it requires some math, and I figured that only nerds like us would appreciate it. We're going to look at how to update the odds of different hypotheses given a new bit of relevant information. Take the example of one bad dinner at a restaurant that has never served anything but delicious meals before. Hypothesis one is that it's really a good place and our bad meal was a fluke. Hypothesis two is that they're under new management or something and we should probably not come back. Let's update the odds of the first hypothesis given this new data.

First, start with the probability you assigned beforehand, what Bayesians call a prior. Let's say that you were 90% sure that this place was incredible before tonight; maybe you had a 10% lingering doubt that it was too good to be true.

Next, figure out what the odds are that you would see this particular bit of evidence given each hypothesis. Here, the evidence is a really bad meal. If this place was actually amazing, this kind of mistake should only be a one-in-a-hundred thing. If they finally lost their best chef or something, it could be a 50-50 chance whether you're going to get a good meal or a bad one. These are called conditional probabilities: basically, how likely would this observation be if this hypothesis were true?

Finally, we're going to use Bayes' theorem. The new probability of a hypothesis being true is equal to the previous probability of the hypothesis times the conditional probability of the evidence given that hypothesis, divided by the sum of those values for all hypotheses.
In this case, the updated odds that the restaurant's still good are 90%, which is what we believed before, times 1%, that one-in-a-hundred chance that some weird series of events could result in a crappy meal like this even at a good restaurant, divided by those values for hypothesis one, 90% times 1%, plus the same values for hypothesis two, the prior times the conditional probability of this evidence given hypothesis two, or 10% times 50%. Plug and chug, and you get a 15.3% chance that the restaurant is actually still okay, and an 84.7% chance that something is seriously wrong. These are called posterior probabilities, because they're what you get after the calculation.

As we were discussing before the non-nerds left, on the surface, an outsider would just see that we've decided not to go back to the restaurant, like we were absolutely certain that it's crap now. But the truth is that we just think it's more likely that the place has hit upon hard times than that our burnt-to-a-crisp pizza was an honest mistake. Of course, you probably don't need to use math to make these sorts of decisions and update your beliefs. But if you're a nerd, like I am, why the hell not?

Do you think that thinking like a Bayesian in everyday life might be helpful, or is it just for nerds? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to subscribe, like, and share. And don't stop thunking.