But we could run tests on it. We could run multiple tests. And the way we would structure the test, like we do with a laboratory test, is to say: the null hypothesis is that the coin is fair, that it's 50-50. Then we gather evidence, and if the preponderance of evidence shows that it's not fair, we have to change our assumption. I've heard this compared to a legal procedure in the American justice system. If someone goes to court for a crime, they're presumed innocent until proven guilty. The innocence is like the null assumption, and we lean toward that assumption by default. So when we run these kinds of tests, someone might object, "Well, you can't prove that it isn't fair." But no: we're assuming the coin is 50-50. The burden is on you; it's your job to show, with a preponderance of evidence, that the null is not true. You can't just say, "Well, you don't know either way." Just as in a court case, where the defendant remains innocent unless you present evidence sufficient to overturn that presumption, the same thing is happening here. We start from the null assumption that the coin is 50-50, and you have to prove otherwise. It's not enough for the evidence to be even; it's not enough to say it could go either way. OK, we know that. So to investigate this, we essentially put the coin through a test.
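To make the "preponderance of evidence" idea concrete, here is a minimal stdlib-only sketch of the test being described: assume the null hypothesis (a fair coin), then compute how likely a result at least as lopsided as the observed one would be if the null were true. The function names `binom_pmf` and `p_value` are my own labels, not anything from the transcript.

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n flips of a coin with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_value(heads, n):
    """Two-sided p-value under the null hypothesis of a fair coin: the total
    probability of outcomes at least as far from n/2 as the one observed."""
    dev = abs(heads - n / 2)
    return sum(binom_pmf(k, n) for k in range(n + 1) if abs(k - n / 2) >= dev)

# 60 heads in 100 flips: a fair coin produces a result this extreme about
# 5.7% of the time, so the evidence against fairness is only borderline.
print(round(p_value(60, 100), 3))  # -> 0.057
```

A small p-value means the observed result would be rare if the coin really were fair, which is exactly the kind of evidence that justifies moving off the null assumption.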
So we flip it 100 times and keep track of how many times it lands on heads and how many times on tails. This process gives us randomized data about the likelihood of getting heads or tails on each flip. Notice we're applying probability concepts here, and you can see how that fits into a statistical analysis: conceptually there's an infinite number of possible coin flips, and we don't know the true probability because we can't observe that infinite set of data. But we can flip the coin a finite number of times, look at the results, and try to infer something about the whole population, which is like the infinite number of flips. Sample size matters, though. If you flipped it only 10 times and it came out 60-40, you might want to say the coin isn't fair, but that outcome happens easily by pure chance. So now the questions come up: what counts as a sufficient preponderance of evidence? How many times would you have to flip it? How close, and how confident, are we once we get our data? If we got 10 flips and the proportion was 60% instead of 50%, maybe we don't have enough confidence to move off the presumption of the null assumption, similar to the presumption of innocence. But if we ran multiple hundred-flip tests and they kept averaging out closer to 60%, now we're approaching a preponderance of evidence, where it's like, I think I have to abandon the null. Just as a juror would if someone says, well, I clearly have evidence of this person committing the crime. It's on tape. He's right there, flipping off the camera, and you can see him stealing the stuff and beating the guy up, right?
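The sample-size point above can be checked by simulation: flip a genuinely fair coin many times and count how often it produces a result at least as lopsided as the one observed. The 60% heads rate from the transcript is used at both sample sizes; the Monte Carlo approach and the function name are illustrative assumptions, not part of the original discussion.

```python
import random

def chance_of_result_this_extreme(n, observed_heads, trials=20_000, seed=0):
    """Monte Carlo estimate of how often a truly fair coin, flipped n times,
    lands at least as far from 50% heads as the result we observed."""
    rng = random.Random(seed)
    dev = abs(observed_heads - n / 2)
    hits = 0
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 for _ in range(n))
        if abs(heads - n / 2) >= dev:
            hits += 1
    return hits / trials

# The same 60% heads rate carries very different weight at different sizes:
print(chance_of_result_this_extreme(10, 6))    # ~0.75: happens constantly by luck
print(chance_of_result_this_extreme(100, 60))  # ~0.06: starting to look unfair
```

Six heads in ten flips is something a fair coin does about three-quarters of the time, so it proves nothing; 60 heads in 100 flips is rare enough to start counting as real evidence against the null.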
So then you go, OK, I think I have to move from innocent to guilty. OK, so, conclusion. Overall, statistical inference is a set of tools that allows us to use sample data to make generalizations about an unknown population. Randomness and probability theory are at its core, enabling us to quantify our level of uncertainty, make educated guesses, and test hypotheses about the population. Statistical inference is fundamental to many aspects of life, including science, economics, medicine, business, and even politics. Obviously these kinds of concepts come up all the time. Whenever we read anything, whenever someone gives us advice, whether it's medical advice, business advice, investment advice, or career advice, we generally assume there's some kind of statistical rationale behind it, because that's usually how we think about making these kinds of decisions. So it's clearly a very useful and important tool in many different areas of life and profession.