I told you in an earlier blog post that all of frequentist statistical inference boils down to one question: does the evidence we collected make our null hypothesis look ridiculous? Yes or no? The p-value is the answer to that question. So that's the opening question, then you do a whole lot of calculation, and then you get the punchline. That punchline is the p-value. And that tells you what to do: reject the null hypothesis or don't reject the null hypothesis.

Now how do you make that decision? You compare the p-value against a setting that the decision maker chooses, called the significance level. It's a threshold. It basically says how ridiculous is ridiculous as far as you're concerned: how weird do these data have to be for you to call them ridiculous? And that's a personal choice. You've got to ask yourself what your tolerance is for the risk of being wrong. If you set a fairly high significance level, it's fairly easy for the p-value to get under that threshold and for you to conclude that things are ridiculous. Whereas if that significance level is really low, the evidence has got to be really, really weird in the null hypothesis world for you to react to it. And you set that threshold in advance.

So you compare the p-value against the threshold you set, and if the p-value is so small that it sneaks under there, you feel a little ridiculous about persisting in acting as if you live in that null hypothesis world. So you reject the null hypothesis and conclude in favor of the alternative. That is how it works. On the other hand, if the p-value comes in above your significance level, you keep doing what you were going to do anyway.

The p-value itself is a reporting convenience that happens to fall out of the calculations. You get used to it, but it was never designed to be intuitive. A lot of statisticians, myself included, feel that there's no actual reason to surface the p-value to the viewer at all. Let them input their significance level and just get a ridiculous-or-not-ridiculous output.
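That last idea, hiding the p-value and reporting only the verdict, can be sketched in a few lines of Python. This is a minimal illustration, not any particular library's API; the function name and output strings are mine, and I've assumed the common convention of rejecting when the p-value is less than or equal to the threshold:

```python
def decide(p_value: float, significance_level: float) -> str:
    """Compare a p-value to the analyst's pre-chosen significance level
    and report only the decision, never the p-value itself."""
    # Convention assumed here: reject when p-value <= threshold.
    if p_value <= significance_level:
        return "ridiculous: reject the null hypothesis"
    return "not ridiculous: fail to reject the null hypothesis"

# The significance level is set in advance, before seeing the data.
print(decide(p_value=0.03, significance_level=0.05))
print(decide(p_value=0.20, significance_level=0.05))
```

Notice that the same p-value of 0.03 would produce the opposite verdict for a decision maker who had chosen a stricter threshold of 0.01, which is exactly why the threshold has to be a personal choice made up front.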