In this lecture, we're going to be doing very small simulation problems, and we're going to do them by hand. We won't need any fancy schmancy computer, but these are real simulations. Why would we want to do simulation by hand before we start learning about simulation programming? For some very simple reasons. For one thing, it helps you understand exactly what's going on in a discrete event simulation if you actually have to do it yourself with pencil, paper, and calculator. And for another thing, as we know, if we do any programming at all: if you can't do a problem on your own, then you really can't write a program for it. You need to know what's going on. And if you can do a simple problem by hand, then you can program increasingly complicated and complex simulations digitally. Here's a simple example where we would like to simulate demand. Suppose we know that demand follows a particular probability distribution. There are two on the slide; actually, they're both really the same. We've got an enumerated probability distribution: there's a 20% chance that demand will be zero units in any one week, a 30% chance of demand being one, a 40% chance of two, and a 10% chance of three. Make sure, as usual, that all your probabilities add up to 100%, because that's the universe. So knowing that, how do we simulate weekly demand? We can do it easily with single digits, as long as we have a way of generating digits. We've got 10 digits to work with, zero through nine. For a 20% probability, 20 out of 100, you need two digits: zero and one. For 30%, you need three digits. For 40%, you need four. For 10%, you need one. So you see how the digits are assigned to the demand per week based on the probability. Now we just need a way of generating random digits. When we do that for the first week, suppose we get a five. Look it up in the table, and we see that for that particular week, demand must be two.
Let's continue with this and see what it looks like on the next slide. Continuing here, you can see the randomness over 10 weeks. The random digits were sampled, let's say, from a table of random numbers, though they could have been generated in any way that you can generate random digits. The first week, indeed, the digit was five, so demand was two. The second week, the random digit was eight; the third week, nine. Each one of these random digits translates to a simulated demand, and you can see that in the right column. Over 10 weeks, if we add up all the demand per week, we get 18; divide by 10, the number of weeks, and the average weekly demand is 1.8. And if you need the standard deviation, it's 0.97. So here's one way of using a very, very simple simulation in order to estimate a parameter, and the estimate in this case is X bar, the sample mean weekly demand. Of course, it's a sample of size 10. If we generated more, we'd have a larger sample, but either way it's a random sample. So suppose we go and do this all over again and generate different random numbers over 10 weeks. What will happen? We'll get a different 10 weekly demands, we'll get a different average, and both averages will be estimators of the true mean mu. And since this is a simple probability distribution that's completely known, we actually know what the mean of the population is: it's the expected value of X. Pause for a minute, think about it, figure out what it is, and you can compare it to our point estimate. Here we go, we did it again with a different set of 10 random numbers. We got a different weekly demand for each week, generating one random number per week, and ended up with an average of 1.3 this time, with a standard deviation of 1.06. Now we have two different X bars. If we generated 100 samples, we'd have 100 different X bars. Each one of the X bars is an estimate of the true mean, which is... did you figure it out yet?
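If you'd rather let a computer draw the digits, here is a minimal Python sketch of the same digit-to-demand lookup described above. The function name and the seed are my own choices, not part of the slides:

```python
import random
import statistics

# Assign the ten digits 0-9 to demand values in proportion to the
# probabilities: P(0)=0.2, P(1)=0.3, P(2)=0.4, P(3)=0.1.
DIGIT_TO_DEMAND = {
    0: 0, 1: 0,              # two digits   -> demand 0 (20%)
    2: 1, 3: 1, 4: 1,        # three digits -> demand 1 (30%)
    5: 2, 6: 2, 7: 2, 8: 2,  # four digits  -> demand 2 (40%)
    9: 3,                    # one digit    -> demand 3 (10%)
}

def simulate_weeks(n_weeks, rng=random):
    """Draw one random digit per week and look up the demand it implies."""
    return [DIGIT_TO_DEMAND[rng.randrange(10)] for _ in range(n_weeks)]

demands = simulate_weeks(10, random.Random(2024))
x_bar = statistics.mean(demands)   # the sample mean, our estimate of mu
s = statistics.stdev(demands)      # the sample standard deviation
```

Each run with a different seed gives a different X bar, exactly as described above.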
Let's take a look at the next slide. Very simple: we just compute the expected value, taking each outcome in the probability distribution, multiplying it by its probability, and adding all those products up, and you end up with 1.4. So the actual mu, the true average weekly demand of the probability distribution, is 1.4. Of course, you're not going to get that exact same value from a sample; the probability of getting exactly 1.4 is pretty much zero. But every time you generate another sample, you're going to get another sample estimate. So simulation is very similar to what you've already been doing in your statistics courses, isn't it? It's just that you're collecting the data from an algorithm instead of collecting it out there in the world. This is where you take a moment to think. We do know the population parameter mu. We know the true mean weekly demand. Why in the world do we need simulation? Is there some additional advantage that we could get by working with the sampling distribution of X bar, sampling weekly demand and getting the average? The easy answer is that if this is all we're talking about, then sure, let's just compute the expected value; that's all we need. But it never is. Real life has much more complexity than this very, very simple problem that we have here. The beauty of the simple problem is that it's easy to see how to do simulation, and we can expand that and apply it when we have more complicated simulations where we can't find an analytic solution, where we don't know how to find the population parameters and we do have to generate data in order to estimate them. Here's another problem. It's still extremely simple, though at least a degree more complex than the demand problem that we looked at before. We have a checkout stand in a small store, let's say a gift shop in a hospital or an airport, and we want to simulate the checkout. People walk in.
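That expected value computation is just a one-liner. As a quick check, here's my own sketch of it:

```python
# E[X] = sum over outcomes of value * probability
dist = {0: 0.20, 1: 0.30, 2: 0.40, 3: 0.10}
mu = sum(value * p for value, p in dist.items())
# mu comes out to 1.4 (up to floating-point rounding)
```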
We don't look at how much time they spend wandering around the store; we're only interested from the moment they get to the checkout and are ready to pay. There's only one cashier. Remember, it's a small store. However, we're assuming we'll never run out of room for the customers who are either in service or waiting to pay. What do we want to know? You always want to be sure of your objective in a simulation experiment, indeed in any experiment. We want to know the average time customers spend in the system, and that includes waiting in line to pay and being in service. In addition, we want to know the percentage of time that the checkout clerk is idle. We want to see if that clerk pays for himself or herself and has enough work to do. It is a small store. What are our assumptions, our givens? We're told that the time between successive arrivals of customers follows a uniform distribution from one through 10 minutes, and these are integers. The service time, the amount of time required to serve each customer, is also uniform, varying from one through six minutes, also integers. Notice we're keeping things very simple here. To build our simulation model, and we're actually simulating by hand here, so it's a thought model, you might say, we are going to get 10 poker chips and one six-sided die. We're going to use those to generate random numbers from the probability distributions that we need: the uniform distribution from one to 10, which gives us customer interarrival times, and the uniform distribution from one to six, which gives us customer service times. By doing this, we can generate a whole stream of customer arrivals according to the first distribution, and a stream of service times according to the second. So that's our input. Once we have that, all we have to do is keep track of it, the same way that we did with weekly demand.
And it's basically now a very, very simple bookkeeping problem, which we can put into a list, a table, or even a spreadsheet, as we'll see on the next slide. Take a look at this spreadsheet, or table, however you'd prefer to refer to it. We're going to do this simulation run for 20 customers. You see the 20 customers there, one per row. We have to make a decision on when the simulation starts; we're going to say it starts when the first customer comes into the system. Maybe there's one customer waiting at the door when you open the gift shop. We're going to keep track of time. The two inputs you see are generated as a number of minutes. The first column is the number of minutes since the last arrival. Of course, the first customer starts at time zero, so there were no minutes since the last arrival; that entry was not a randomly generated number. The second customer, however, comes in at time three: the first chip that was pulled had the number three on it, drawn from the uniform distribution between one and 10 for the customer interarrival time. So the first customer is at time zero, the second customer is at time three, and the third customer arrives after an interarrival time of seven, so that's at time 10, and so on. Really, all this is is just a bunch of bookkeeping. For the service times, we have the die, so by tossing the die we get a service time between one and six for each customer, and you can see the stream of service times right there for each of the 20 customers. Then we've got clock times, because we want to keep track of the number of customers in the system, and there's going to be a lot of flux there. If you're going to look for an average number of customers in the system, it would have to be a time average, right? We also want to know how long, and whether, the clerk is idle, so that we can keep track of that statistic. Then there are the measures of effectiveness at the right side of the table.
Those are the metrics that you're interested in; that's what you're running the simulation for. Now, for clock time, what are we keeping track of for each customer? What time the customer comes in, the customer's arrival, and when the customer begins service. Remember, we're assuming that we're not keeping track of how long the customer was in the store before they're ready to pay. So arrival at time zero means the customer's ready to pay for something at time zero. Admittedly, this is artificial, and it could be made more complex, and presumably you'd do that: you start with a simpler model and then you add layers of complexity. Right now, we're happy to work with this one. The first customer arrives at time zero. There's no queue, there's no waiting, the clerk is ready, so the customer goes into service at time zero. That customer's service time is one, so service ends at time one and the customer leaves the system. How long was the customer in the system altogether? Waiting time plus service: one minute. And the clerk was not idle at all during that one minute. You can see how that first customer boots up the system, and then each successive interarrival time brings in a new customer. Take a minute; you'll really do yourself a favor. I'm not going to read off every row. Pause this and get a handle on the generated data that you're looking at. Here's the statistical output that we got from the system. First, we're looking for the average customer time in system. Take the time in system for each customer and add them all up: we got a total of 68 minutes. Divide by 20 customers, and on average a customer was in the system, waiting plus service, for 3.4 minutes. As for the cashier, altogether the cashier was idle 55 minutes out of the total time that the system was running. The last customer left the system at time 118, so that's 55 divided by 118, or 47% idle, which is the complement of utilization.
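The table's bookkeeping can be sketched in a few lines of Python. This is my own sketch under the lecture's assumptions (integer interarrival times uniform on 1..10, service times uniform on 1..6, one cashier); with a different random stream it won't reproduce the exact 3.4-minute and 47% figures from the slide, but it computes the same two measures of effectiveness:

```python
import random

def simulate_checkout(n_customers, rng):
    """One run of the single-cashier checkout hand simulation."""
    clock_arrival = 0        # first customer arrives at time 0
    clerk_free_at = 0        # time the cashier finishes the current service
    total_time_in_system = 0
    total_idle = 0
    for i in range(n_customers):
        if i > 0:
            clock_arrival += rng.randint(1, 10)    # draw a poker chip
        service = rng.randint(1, 6)                # toss the die
        start = max(clock_arrival, clerk_free_at)  # wait if the clerk is busy
        total_idle += max(0, clock_arrival - clerk_free_at)  # idle gap, if any
        clerk_free_at = start + service
        total_time_in_system += clerk_free_at - clock_arrival  # wait + service
    # average time in system, and idle fraction over the run's total length
    return total_time_in_system / n_customers, total_idle / clerk_free_at

avg_time, idle_frac = simulate_checkout(20, random.Random(1))
```

Note the key bookkeeping rule: a customer's service starts at the later of their arrival time and the moment the clerk frees up, which is exactly what the clock-time columns in the table track.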
So the utilization for this system would be 53%. This is all material you remember from the queuing component of your operations research course, at least its simplest part. Of course, there were a lot of oversimplifications here. Naturally, we're using integers, we have a small sample, and we're limited because we're doing things by hand and everything takes more time than it should. And yet we have all the elements of a simulation. Every simulation that you end up doing is going to have all the elements that you see here, and we'll look at them in a minute. The only difference is that you're going to be using a computer algorithm, which means it's going to run more efficiently: you can collect more data and you can get better randomization. But the basic components are here. You can learn a lot from this problem, from this simulation, that will apply to just about every simulation we do during the semester. For one thing, it's a discrete event simulation model. No matter how we do it, whether with poker chips or a computer algorithm in a specialized programming language or in something like Python, we're working with a system, and a model built on that system, that are both discrete event. The system changes at discrete moments in time. In this particular case, when do the system, and thus the model, change? What are the events? For one thing, when a customer enters the system: there's one more customer in the system, and the customer may or may not go into service right away, may have to wait in the queue, and so on. The system changes again when a customer finishes service and leaves the system: there's one less customer in the system. Those are discrete events. They happen at discrete moments in time. The system doesn't change continuously; it changes at discrete points.
But like every simulation model, we have three important components, three important elements: randomness, moving time, and collecting output statistics. We took the easy way out here and generated random variates using physical probabilistic devices, a die and poker chips, creating an artificial environment for the simulation experiment. We moved time by the values that we generated from these random devices, and we moved time at discrete moments, when an event happened. And we collected the output statistics every step along the way, so that when the simulation was finished, we could compute averages and analyze the output. Here's a new example, called the staggering drunk. What we want to do is watch a drunk in a cityscape walking around, not knowing where he's going because he's drunk, basically randomly choosing a direction to walk. He starts at, let's say, point zero, a street corner somewhere. At each corner there's an equal probability that he'll go in any direction: north, south, east, west. So for each direction, there's a one-quarter chance that he'll walk that way. If he makes this decision 10 times, what's the probability that he'll end up within two blocks of where he started? This is a very good problem for simulation because it's a classic random walk problem, and what you'd like to see is exactly what happens in each run of the simulation. If you have enough data, you can easily estimate the probability that's asked for. How are we going to build this model? We're going to say that at each corner, the position is modeled by a tuple, x and y. X is the east-west axis, y is the north-south axis. So if the drunk moves east, we add one to x; west, subtract one from x; north, add one to y; south, subtract one from y. You can see how this is a good way of modeling the random walk, the staggering drunk problem.
At the end of the whole exercise, we take the current values of x and y in absolute value, add them up, and see how far from (zero, zero) the guy ended up. To actually implement this model, we need randomness, and one way to get it is with two-digit random numbers. We need probabilities of one fourth, one fourth, one fourth, one fourth, and we have random numbers from 00 to 99. Let's go to the next slide and we'll see how we implement this model. We have two-digit random numbers. If the random number is between 00 and 24, the guy goes east, and we add one to x. Between 25 and 49, the guy goes west, and we subtract one from x. Between 50 and 74, that translates to north, and we add one to y. Between 75 and 99, south, and we take one away from y. If it helps, you can take a look at this flow diagram, which is basically an algorithm; whether a human being is executing it or we're programming it for a computer, it'll work either way. And here you have the results of five trials. Each trial is 10 blocks, so there are 10 two-digit random numbers generated in each trial. In trial number one, at the end of the whole thing, x is negative one and y is negative one, so that's within two blocks of the starting point. In trial number two, x is zero and y is negative four, so no, not within two blocks. If you look at all five trials that we see here, three out of five are within two blocks. It's a very small sample. Can you really extrapolate? Can you generalize from this to the full probability? Probably not a good idea; a better idea is to generate more data. But it gives you a very good idea of how this works. So to sum up, we've looked at simulation without constraining ourselves to a computer, and without worrying about whether we're writing our computer program in Excel, in Arena, in C++, in Python or Java, or in SIMSCRIPT.
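Here is how that flow diagram might look in Python. This is a sketch of mine using the same two-digit ranges, run for far more than the five trials on the slide so the estimate settles down:

```python
import random

def within_two_blocks(rng):
    """One trial: 10 random steps; True if |x| + |y| <= 2 at the end."""
    x = y = 0
    for _ in range(10):
        r = rng.randrange(100)   # a two-digit random number, 00-99
        if r <= 24:
            x += 1               # 00-24: east
        elif r <= 49:
            x -= 1               # 25-49: west
        elif r <= 74:
            y += 1               # 50-74: north
        else:
            y -= 1               # 75-99: south
    return abs(x) + abs(y) <= 2

rng = random.Random(0)
trials = 10_000
p_hat = sum(within_two_blocks(rng) for _ in range(trials)) / trials
```

With ten thousand trials instead of five, `p_hat` is a much steadier estimate of the probability the lecture asks for.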
Or Simio. All we want to do here is have something simple enough that we can actually follow the simulation by hand, with pencil and paper, and that's what we did with two examples. It's just an algorithm. We can do the algorithm with paper and pencil, or we can do it on a computer; it's going to be the same, just a matter of scale and efficiency. It's not necessarily more effective to do this on a computer; it's just that you can do more runs, use better probability distributions, and do more replications. But you're going to see all of these elements in any simulation that you do this semester. I hope this lecture has been a good learning experience for you. Thank you for attending the lecture.