and add some decimals. So, for example, at 72 inches, what's the likelihood? A 13.20% likelihood of the pitcher being 72 inches. Now, if we're asking a question about a pitcher's height, we might really be asking about a range. So you might say: is there a 26.71% likelihood of being this height or less? That might be the question we would be asking. You can't just add these point likelihoods up, though, because we're looking at the area under the curve. Adding them might give an approximation, but you would have to use NORM.DIST with the cumulative argument set to TRUE to get that number. But before we do that, let's compare this to the actual data. So this is the actual data, for which we'll do a frequency calculation. Let's go to the Home tab here and make it black and white, center it, and wrap it. Wrap up the frequency, because we want a frequency... I tried to do a rap, but I don't think there's much that rhymes with frequency. In any case, this is going to be =FREQUENCY, and we're going to pick up our data array. So FREQUENCY is going to say: how many times does our data over here show up in these buckets? And the bucket is going to be, for example, the second one: how many times do the items in our data show up at 64 or less, or above 64 up to and including 65, which would be the second bucket, right? So I'm going to select my data over here, Ctrl+Shift+Down, then Ctrl+Backspace, and then comma. We're going to pick up our x's, which will be our buckets: Ctrl+Shift+Down, Ctrl+Backspace, and there's our formula. I'll close it up and hit Enter, and it spills out once again. It goes a little long, so I usually go back up and drop that last bucket so it stops right there. And then I can double-check that my data is all picked up by totaling it: let's sum this up.
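The point-likelihood-versus-cumulative distinction above can be sketched outside Excel too. Here's a minimal Python sketch using the standard library's `statistics.NormalDist`; the mean and standard deviation are illustrative stand-ins, since the real parameters come from the pitcher-height data in the worksheet:

```python
from statistics import NormalDist

# Illustrative parameters; the actual mean/standard deviation
# would come from the pitcher-height data in the worksheet.
heights = NormalDist(mu=73.5, sigma=2.5)

# Like Excel's NORM.DIST(72, mean, sd, FALSE):
# the density (point likelihood) at exactly 72 inches.
density_at_72 = heights.pdf(72)

# Like NORM.DIST(72, mean, sd, TRUE):
# the probability of 72 inches or less, i.e. the area
# under the curve to the left of 72.
prob_72_or_less = heights.cdf(72)

print(f"density at 72:   {density_at_72:.4f}")
print(f"P(height <= 72): {prob_72_or_less:.4f}")
```

Summing the point densities at 64, 65, 66, ... only approximates the area; the CDF gives it exactly, which is why the transcript says you need the cumulative version of the function.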
I'm gonna hit Alt+= here to sum, then Enter: 100%. That makes sense, because most of our data falls within four standard deviations. Then I'm gonna hit Alt+= over here, and this comes out to 1,034. I can double-check that number, because it should be the count of my data, these being the data points we have of the pitchers' heights. So I can say =COUNT(, then go here and hit Ctrl+Shift+Down, Enter, and there are 1,034 data points. That makes sense, because we've now applied all those data points to these buckets. Now note that I can't really compare the frequencies in my actual data to these percentages unless I either convert my percentages to frequencies, which I can do by taking this times this, for example, and then comparing the two numbers on the same basis, or (and this is probably more useful oftentimes) take my data set and turn it into percentages by dividing by the total. So I'm gonna say this is gonna be the percent of total of my data set: Home tab, Font group, black, white, center, and wrap it. Okay, then we're gonna say this equals... I keep wanting to do a rap, but I can't think of a rhyme. Do raps even have to rhyme? 'Cause poetry doesn't rhyme anymore and they still call it poetry, but whatever. So this one is gonna be this divided by the total, and that last reference I need to lock with F4, putting dollar signs before the H and the 22, and Enter. Then I'll put my cursor on this one and percentify it: Home tab, Number group, percent format, add some decimals, and double-click the fill handle to drag it down. So now we have this. I'm gonna delete that last bit, 'cause I wanna total it up this way: Alt+= to total it, giving us, if I get rid of the decimals, or let's make this wider, there it is: 100%.
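The FREQUENCY logic above ("up to and including" each bucket's upper bound), plus the count check and percent-of-total step, can be sketched in Python. The sample heights here are made up for illustration; the worksheet has 1,034 real data points:

```python
import bisect
from collections import Counter

# Hypothetical sample of pitcher heights in inches (the real
# worksheet has 1,034 data points).
data = [63, 64, 64.5, 65, 66, 66, 67, 70, 72, 75]

# Bucket upper bounds, like the x's handed to Excel's FREQUENCY:
# first bucket is "<= 64", second is "> 64 and <= 65", and so on.
bins = [64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75]

counts = Counter()
for h in data:
    # bisect_left finds the first bin whose upper bound is >= h,
    # matching FREQUENCY's "up to and including" behavior.
    counts[bins[bisect.bisect_left(bins, h)]] += 1

total = sum(counts.values())                       # like =COUNT(...) on the data
percents = {b: counts[b] / total for b in bins}    # percent of total per bucket

for b in bins:
    print(f"<= {b}: {counts[b]:2d}  ({percents[b]:.1%})")
```

As in the worksheet, the percents column sums to 100%, and the frequency column sums back to the count of the data, which is exactly the double-check the transcript performs.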
So then if I look at the difference, now I can compare. Let's make this black and white, Alignment group, center and wrap it. Wrap it like it's a Christmas present, man. And then we're gonna say this is this minus this, make it a percent, and double-click to pull it down. And there we have it. So now we can see, from our data set, that the mean is similar to the median, and the mode is also similar, indicating that the data might be in alignment with an actual bell-shaped curve. We can also see that if I plot the bell-shaped curve and compare it to my actual data on a percentage basis, it's pretty close for a lot of these data points. So we're thinking now that our bell-shaped curve might have some significant predictive power. If I then plot my bell curve over here, selecting the percent data for the bell curve, I can go to Insert, Charts, and make a histogram from it. And there we have a bell curve, and now you can see, okay, that bell curve looks like it approximates our data to some degree. Then let's do our standard thing: I'm gonna go to Select Data up top, go to the horizontal axis labels, and put in my own x's, because it shouldn't start at 1. Let's edit the x's; they should run from 64 up to 83. Don't just do your own thing, Excel. You have to use the x's that we give you. You can't just make up your own thing. And then we can put the actual data on top of it if we wanted to plot them together. So I could say, all right, let's go to Chart Design, Select Data, and add our actual data, which I'm gonna represent in percent format this time. And I'm gonna be careful with the data series: I'm representing the actual data as a percent, not picking up the 100% total down below, and boom, and boom, and I go okay, and you can see they're pretty close, right?
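The difference column being built here, the theoretical bell-curve percent for each bucket versus the actual percent, can be sketched as follows. The bucket bounds, empirical percentages, and distribution parameters below are all made up for illustration; the theoretical percent for a bucket is the area under the curve between its bounds, i.e. a difference of two CDF values:

```python
from statistics import NormalDist

# Hypothetical empirical percent-of-total per bucket upper bound
# (each bucket covers one inch, ending at the key).
actual_pct = {70: 0.08, 71: 0.11, 72: 0.13, 73: 0.14, 74: 0.13}

# Illustrative parameters standing in for the worksheet's mean/sd.
model = NormalDist(mu=73.5, sigma=2.5)

for upper, actual in actual_pct.items():
    # Theoretical bucket probability: CDF(upper) - CDF(upper - 1),
    # the area under the curve between the bucket's two bounds.
    theoretical = model.cdf(upper) - model.cdf(upper - 1)
    diff = theoretical - actual
    print(f"bucket <= {upper}: model {theoretical:.1%}, "
          f"actual {actual:.1%}, diff {diff:+.1%}")
```

Small differences across the buckets are what justify the transcript's conclusion that the bell curve tracks the actual data closely.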
So that again is another indication that the bell curve would be a good approximation. Next time we'll work on area under the curve, because the area under the curve is what we're often thinking about when we use a normal distribution. So we'll continue with this next time.