pounds that are not in trouble, or a full adult at 80 pounds is not typical. So then if I go the other way, 11.66 times 4 is 46.64, plus the middle point of 127.08, we get 173.72. Now, our data set is going up to about 173 on the high end, so we're going to say, OK, that'll be the high point of the data that we're using. So if I go over here, we can say the X's are just going to go from 80 on up to that high point of 174, and that should capture the data that we need to be plotting in the graph. Then we can do our P of X's, which is our NORM.DIST formula: NORM.DIST takes the X, in this case 80; then the mean and standard deviation, which are going to be these two numbers, the mean of 127.08 and the standard deviation of 11.66; and we do not want it to be cumulative, which is what's being represented by the zero. So if I copy this all the way down, you could say, for example, what's the likelihood that we have someone at 93 pounds, given our normal distribution? 0.05%. What's the likelihood that we have someone at 96 pounds? 0.1%. So notice that the questions we're likely to ask are things like: what's the likelihood that we have someone on the high end, maybe 148 pounds or above? Or what's the likelihood that we have someone at 110 pounds and below? You might think that we could just sum these values up, but that only gives an approximation, because what we're really after is the area under the curve. Now we can compare this to our actual data set. Our actual data set over here is counting; we're imagining that this is counting an actual distribution, right? So these are actual numbers in pounds, but this is in percents. So what I'd like to do is count my data over here and then convert it into percents. So I'm going to do my frequency distribution.
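The NORM.DIST call described above is non-cumulative, so it returns the height of the bell curve at X rather than the area below it. A minimal Python sketch of the same calculation, using the mean (127.08) and standard deviation (11.66) quoted in the narration:

```python
import math

# Values assumed from the example: mean and standard deviation of the weight data.
MEAN = 127.08
SD = 11.66

def norm_pdf(x, mean, sd):
    """Normal density: the equivalent of Excel's NORM.DIST(x, mean, sd, FALSE)."""
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

# The X column runs from the low point of 80 up to the high point of 174.
for x in (80, 93, 96):
    print(x, round(norm_pdf(x, MEAN, SD), 4))
```

At 93 pounds this returns about 0.0005, i.e. the 0.05% read off the sheet above, and at 96 pounds about 0.001, the 0.1%.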
So for example, in this frequency distribution, I want to say: how many people in our sample or population weigh greater than 84 pounds, up to and including 85 pounds? Three of them. How many people are over 85 pounds, up to and including 86 pounds? Two people. The FREQUENCY formula is up top. It takes the data array, which is going to be all of this data, and then the other argument is the bins, which are the X's; it's a fancy array formula, so it spills out this data. Now for the total: if I add all of this up, notice that at 146 we had 239 in our data set, and adding everything up we come to 25,000 data points. I didn't show all those data points here, because I only scrolled down to here, but in Excel the data goes on to 25,000 rows. In other words, if I were to count this data with a COUNT formula in Excel, counting every line, it would count 25,000. That's why we have a lot of data, and that's why when we graph it, we get a pretty smooth curve. And we can verify that the data has been properly put into bins, at least to some degree, by the fact that the total down here should tie out to the number of data points, 25,000, which are now being allocated to the bins. Then we can take my data and say, what's the likelihood of someone being at 146 pounds based on the bell curve versus my actual data? Well, I could convert the bell curve percentage into a count: 25,000 times 0.009, because it's 0.9%, gives me 225, which is pretty close to the actual count. What's the likelihood here? I could take my 25,000 times the 0.0235, and I get pretty close to that 558. Going the other way, I can take my actual data and convert it into a percent by dividing each count by the total. So, for example, let's go down here where we have some larger numbers. I'll take this one and say: the actual count, 220, divided by the total, 25,000.
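Excel's FREQUENCY logic, where each bin counts the values greater than the previous bin value and up to and including its own, can be sketched in Python as follows. The small weights list here is invented for illustration; the real data set in the example had 25,000 points:

```python
def frequency(data, bins):
    """Rough equivalent of Excel's FREQUENCY(data_array, bins_array).

    Each bin counts values x with previous_bin < x <= this_bin; a final
    overflow bin counts everything above the last bin value.
    """
    counts = []
    prev = float("-inf")
    for b in bins:
        counts.append(sum(1 for x in data if prev < x <= b))
        prev = b
    counts.append(sum(1 for x in data if x > prev))  # overflow bin
    return counts

# Made-up sample weights, mirroring the "three of them / two people" example.
weights = [84.2, 84.9, 85.0, 85.5, 86.0, 88.1, 90.3]
print(frequency(weights, [84, 85, 86]))  # [0, 3, 2, 2]
```

Summing the returned counts ties out to the number of data points, which is the same sanity check described above against the 25,000 total; and converting a bell-curve percentage into an expected count (25,000 × 0.009 = 225) is the same arithmetic in any language.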
And that gives us, moving the decimal two places over, 0.88%. This one is 207 divided by 25,000, which gives us, moving the decimal two places over, about 0.83%. And if I add up all the percentages, they should add up to 100%. I didn't do the total down here, but they should add up. So then I can look at the differences between the likelihood based on our NORM.DIST versus our actual data, and that can further confirm whether or not we have a bell curve type of situation. Then we have the Z score. The Z score is another representation, different from the X, of the distance from the mean. So remember, if I look at a bell curve, the middle point would be in here, and the Z score would be 0 if we're in the middle, with negative numbers down below and positive numbers above; numbers further from 0 are going to be less normal, and normal is in the middle. So how do I calculate the Z score? That's going to be each X, in this case 80, minus the middle point, which is the mean, so minus the 127.08, and then divided by the spread number, the standard deviation, 11.66, right there. That gives us our negative 4.04. Now notice that's unusual, because it's far below the middle point, and you can see it getting closer to normal as it goes toward 0. It gets to 0 around where our mean is, 127 to 128, and then it goes above, so we're getting less and less normal on the high end. In this case, the high end represents higher weight and the low end less weight. So usually with these normal distributions, normal is good, because you usually want to be kind of normal.
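The Z-score arithmetic above, (X minus the mean) divided by the standard deviation, is a one-liner; in Excel the same calculation is exposed as the STANDARDIZE function. A minimal Python sketch with the values assumed from the example:

```python
# Mean and standard deviation assumed from the example weight data.
MEAN = 127.08
SD = 11.66

def z_score(x, mean, sd):
    """Standard deviations from the mean: Excel's STANDARDIZE(x, mean, sd)."""
    return (x - mean) / sd

print(round(z_score(80, MEAN, SD), 2))    # low end of the range: -4.04
print(round(z_score(MEAN, MEAN, SD), 2))  # at the mean the Z score is 0.0
```

A Z score near 0 means the observation is near the middle of the bell curve; large negative or positive values flag unusually low or high weights, exactly as the narration walks through.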
But obviously, when you're looking at weight, if you're going to be abnormal in a healthier way, by being more muscular or something like that, then of course your weight might vary from the norm for that reason. And again, you might look at different distributions for people who have different body composition, more muscle versus fat or whatever, and ask, say, what's average for a particular type of athlete, and so on. But normal is usually the baseline, of course, which is usually kind of good. All right, so then we could ask these kinds of questions, such as, let's see, this one, P of X.