All right, we are back in our Colab file, and we're still talking about normal distributions. In this final video, we're going to talk about how you calculate the p-value from the normal distribution. The first step is to compute a z-score: take the sample statistic, subtract the null value that we assume, and divide by the standard error. We're going to continue working with the data set we developed during the significance-level video, where we drew red cards. If we want to determine the p-value of this sample statistic, which helps us decide whether or not to reject our null hypothesis, we need to calculate the z-score. So we can say z equals p hat, which is our sample statistic, minus 0.5, which is our null value, divided by the standard error we computed earlier. We can run that, then print z, and we can see that it is 7.76. Now, this value in and of itself doesn't tell us much about p hat directly. Rather, it is used within a normal distribution to calculate the p-value with a left-tailed or right-tailed test. In effect, you look at where this z value lines up within a normal distribution: the proportion of the distribution to the left of it is the p-value for a left-tailed test, and the proportion to the right is the p-value for a right-tailed test. Normally we would look at the actual data, but if our data is large enough and approximately normally distributed, we can rely on the z value. So if we were doing a left-tailed test with this z value, we can say stats.norm.cdf and just give it z. We can see that for the left-tailed test the p-value is essentially 1, because nearly all of the distribution lies below our z value, which means we would fail to reject the null hypothesis. For a right-tailed test it's the same formula, except this time we need to multiply our z value by negative one.
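The z-score and left-tailed steps above can be sketched as follows. Since the actual red-card draws live in the earlier video's cells, the sample proportion, sample size, and standard-error formula here are assumed stand-ins chosen to produce a similarly large z value:

```python
import numpy as np
from scipy import stats

# Assumed stand-ins for the red-card example from the earlier videos:
p_hat = 0.587   # hypothetical observed proportion of red cards
n = 2000        # hypothetical number of draws
null_value = 0.5

# Standard error of a proportion under the null hypothesis
se = np.sqrt(null_value * (1 - null_value) / n)

# z-score: (sample statistic - null value) / standard error
z = (p_hat - null_value) / se
print(z)  # a large positive z, in the same ballpark as the video's 7.76

# Left-tailed p-value: proportion of the normal distribution below z
p_left = stats.norm.cdf(z)
print(p_left)  # essentially 1 -> fail to reject the null hypothesis
```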
Essentially, we just need to flip the sign of the z value to do a right-tailed test. Here we can see that the p-value is close to zero, so in this case we reject the null hypothesis in favor of the alternative. If we scroll all the way up to our significance-level section, our alternative hypothesis was that the proportion of red cards is greater than 50%, and that corresponds to the right-tailed test. Then our two-tailed test is stats.norm.cdf, same as before, but we need to use whichever tail was smaller. When we do a two-tailed test, we calculate both sides and double whichever one is smaller. Our smaller tail was the right tail, so we pass negative z and multiply by two. A two-tailed test, which in coding terms is essentially a "not equal" alternative, shows that we would still reject the null hypothesis in favor of the alternative that the proportion is not 50%. So when your data set is sufficiently large and normally distributed, this approach can be used to quickly conduct a hypothesis test.
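The right-tailed and two-tailed steps can be sketched like this, assuming the z value from the earlier cell (the 7.76 here is the video's approximate result, not a recomputed number):

```python
from scipy import stats

z = 7.76  # approximate z-score carried over from the earlier cell

# Right-tailed p-value: flip the sign of z before taking the CDF
p_right = stats.norm.cdf(-z)
print(p_right)  # essentially 0 -> reject the null hypothesis

# Two-tailed p-value: double the smaller tail (the right tail here)
p_two = 2 * stats.norm.cdf(-z)
print(p_two)    # still essentially 0 -> reject the null hypothesis
```

As a side note, scipy also provides stats.norm.sf(z), the survival function, which computes the right-tail area directly and avoids the sign flip.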