how we hard-coded the numbers so they don't change because of the random number generation tool. Okay, so now we can count the frequency of these items and do our standard frequency approach. Here are our bins, which represent the minutes between arrivals: from zero on up to 40 minutes between arrival times, what's the count in each bin? Note that you can't really use a COUNTIF function to do this, because these numbers over here are not whole numbers, so we have to use the FREQUENCY function with the bins. Here's our FREQUENCY formula: we take the data array over here and the bins over here, enter it as an array function, and it gives us the frequencies and puts the items into our buckets.

So in this case, one minute: how many interarrival times in our data set were up to and including one minute? 51. Up to and including two minutes, we had 30 of those; three minutes, 31; four minutes, 19; five minutes, 20; six minutes, 18; seven minutes, 14; eight minutes, 14; nine minutes, 18; 10 minutes — and you can see it starts to go down as we get up to the higher numbers. A lot of them are on the lower side. So when the time between arrivals follows this exponential distribution, we tend to have a bunch of interarrival times on the shorter side and a few that take a lot longer, and that is how you can imagine what's happening with our curve.

Then I can also represent this as a percent of the total. These frequency bins, if I add them up, should add up to the number of customers we observed and timed, which was 300, so that looks correct. So I can divide each of these by the total of 300: 51 over 300 gives us 0.17, or 17%. So we can represent this as a percent as well, which is how it will be represented when we do the actual exponential distribution, and that's showing that calculation.

Okay, so then I can do it this way: X equals the interarrival time in minutes, and this time let's use the actual EXPON.DIST function. Now I'm going to do the same thing, not using our randomly generated numbers (which represent us actually going out there with a stopwatch), but the smooth curve from EXPON.DIST, where I take the X here, then the lambda, and then cumulative — it's not going to be cumulative, so we put a zero. Now we can plot this out with the actual curve, which is similar. Notice it's giving us percentages, because when I use this curve I don't get an actual frequency; we're looking at percentages. So if I looked at the one-minute value — the likelihood of a one-minute interarrival time — and did this 300 times, you would expect 300 times the 0.1411 to be the actual frequency of it. That's why we need the percents, so we can compare. And this is what we get for the smooth curve, the curve generated from our function.
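If you want to replicate these steps outside the spreadsheet, here is a minimal Python/NumPy sketch of the same workflow (a stand-in for the worksheet, not the video's actual file): simulate 300 interarrival times, bin them the way FREQUENCY does, convert the counts to percents of the total, and compute the smooth curve the way EXPON.DIST(x, lambda, 0) does. The seed and the rate lambda = 1/6 per minute are assumptions; that rate is chosen only because it makes the density at one minute come out near the 0.1411 quoted above.

```python
import numpy as np

rng = np.random.default_rng(42)                   # hypothetical seed; the video hard-codes its simulated values
lam = 1 / 6                                        # assumed rate (arrivals per minute); lam * exp(-lam) ≈ 0.1411
times = rng.exponential(scale=1 / lam, size=300)   # 300 simulated interarrival times (not whole numbers)

# Bin the times the way FREQUENCY does, with 1-minute-wide buckets out to 40 minutes.
edges = np.arange(0, 41)                           # bin edges 0, 1, ..., 40 minutes
counts, _ = np.histogram(times, bins=edges)        # counts per bucket (COUNTIF can't do this for non-integers)

rel_freq = counts / times.size                     # percent of total, e.g. first bucket ≈ 51/300 ≈ 0.17

# Smooth curve: the exponential density, same role as EXPON.DIST(x, lambda, 0).
x = np.arange(1, 41)
density = lam * np.exp(-lam * x)                   # density at x = 1 ≈ 0.1411 with lam = 1/6
expected = 300 * density                           # rough expected count in each 1-minute bucket

print(np.round(rel_freq[:5], 3), np.round(density[:5], 3))
```

With only 300 draws, the binned percents wobble around the smooth density values, which is exactly the comparison described next.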
And you can compare these: the one-minute value versus one, two versus two, three versus three, four versus four, five versus five. You can see they're somewhat similar. So if I plot the interarrival times from our actual data set as a histogram, it looks like this, and you can see it's approximating the shape we would expect. It's not perfect, of course, because we only generated 300 numbers. Here it is with another type of graph. And if I look at it in comparison to the actual curve — the blue curve in this case, the nice smooth curve compared to the curve from the randomly generated data — we can see that it approximates what we would expect from the exponential distribution.

So the general idea with these waiting-line situations is: why does that happen? You can see why here: the intervals between arrivals are often short, but some of the intervals are long, and that's what gives this characteristic shape, which often shows up in waiting-line situations. So if you saw a Poisson distribution in a waiting-line situation — the count of arrivals per unit of time — then oftentimes the time between arrivals would follow this kind of exponential characteristic shape as well.
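To spell out that last connection: if the number of arrivals in a window of length t follows a Poisson distribution with rate λ, then the waiting time T until the next arrival is exponential with the same rate, because "no arrival yet at time t" is the same event as "the Poisson count in (0, t] is zero." Using a generic rate λ (the video's value isn't stated):

```latex
P\{N(t) = k\} = \frac{(\lambda t)^k e^{-\lambda t}}{k!}
\;\Longrightarrow\;
P\{T > t\} = P\{N(t) = 0\} = e^{-\lambda t}
\;\Longrightarrow\;
f_T(t) = \lambda e^{-\lambda t}, \quad t \ge 0.
```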