I would say it like this. At a conference called YAPC, I opened with the famous sentence that repeats the word "buffalo" more than five times. The sentence makes sense once you know that there is more than one kind of "buffalo" in it: "Buffalo" the city of Buffalo, New York, "buffalo" the animal, and "buffalo" the verb, meaning to bully or intimidate. Once you know that, you can parse the sentence. I just happened to open with jokes and a story.

I work at Travis CI, a hosted continuous integration service; I do mostly Clojure stuff these days. It is working great, et cetera, and it got its start on March 21st, 2010. In 2015, a colleague of mine put this graph out. It shows a few weeks' worth of data from different parts of the service. On the y-axis, you see the number of jobs running on Travis CI. People were sad about it; this is a very, very sad graph.

It is like a paper airplane. It can look really complicated, and once you throw it, it flies: flying is easy. Landing precisely where you want is where the theory comes in.

The "system" here can mean almost anything. What you do is put some sort of input control in front of the system, and adjust the input based on the output from the system. This is called feedback control, and the system may also be influenced by random events. Take a thermostat: there is a coil that sits between two chambers of different temperatures. When it is exposed to the hot liquid, it deforms and shuts off the flow between the two chambers. Mechanical thermostats still work this way in modern cars. Or you can have a much more sophisticated control system, like the one in a 747. The mathematics behind all of this goes back to mathematicians such as Pierre-Simon Laplace. So I'm going to introduce a very common kind of controller, called a PID controller.
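That bare feedback idea, adjusting the input to a system based only on its measured output, can be sketched as a toy on/off thermostat in Python. Everything here (the setpoint, the hysteresis band, the toy room dynamics) is made up for illustration, not taken from any real controller:

```python
def thermostat_step(current_temp, setpoint, heating, hysteresis=1.0):
    """Decide whether the heater should be on, looking only at the
    measured output (current_temp). This is the essence of feedback control."""
    if current_temp < setpoint - hysteresis:
        return True           # too cold: turn the heater on
    if current_temp > setpoint + hysteresis:
        return False          # too warm: turn it off
    return heating            # within the band: keep the previous state

# Simulate a room: the heater warms it, the environment cools it.
temp, heating = 65.0, False
history = []
for _ in range(50):
    heating = thermostat_step(temp, setpoint=72.0, heating=heating)
    temp += 0.8 if heating else -0.5   # toy dynamics driven by our input
    history.append(temp)
```

The controller never models the room; it only reacts to the measured temperature, which is what distinguishes feedback control from simply setting an input and hoping.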
The error is the difference between the actual signal and the desired signal level. On the x-axis you see time t, and on the y-axis you see the signal level. It really doesn't matter what the signal is, as long as you can measure it and can measure how far the actual output level is from the desired level. For example, if you have the thermostat set to 72 degrees Fahrenheit and the air temperature is 74 degrees, you know that you are 2 degrees away, so you have to cool the room down. If the air temperature is 68 degrees, you're 4 degrees away, so you have to heat it up. You track that difference over time, and the zero level is where the actual output matches the desired output. The controller drives the system based on this error information.

At time t, the error e(t) matters in three ways. First, the size of the present error: if you are 10 degrees away from the desired temperature, you have to cool down or heat up faster, because there is a big difference. Second, the history of the error: the accumulated error tells you how long you have been off target. Third, how fast the error is changing: if the error correction is happening fast, maybe you can slow down a little; if the error is not changing fast, you might want to work a little harder. So the controller combines present information, the history, and the recent trend. One term is proportional to the present error, one is the integral of the error, and one is its derivative. With a proportional component, an integral component, and a derivative component, we have a PID controller.

Hold two of the three coefficients constant, and you can look at how the remaining one affects the system's performance. Here we are looking at the proportional constant. You can see that with K = 0.5, the system responds slowly and eventually gets close to the target signal around t = 8. If it's bigger, K = 1.1, the output shoots up really fast, overshoots the target, the error correction makes the signal dip, and then it settles back to the desired level around t = 3.5. If the constant is too big, the system behaves erratically, oscillating before it goes back to the desired level.
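Those three terms, present error, accumulated error, and rate of change, are all a PID controller computes. Here is a minimal sketch in Python; the gains and the toy system it drives are made-up illustration values, not tuned for anything real:

```python
class PID:
    """Minimal PID controller: output = Kp*e + Ki*(integral of e) + Kd*(de/dt)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0        # accumulated history of the error
        self.prev_error = None     # last error, for the derivative term

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy system toward a setpoint of 72 (think: room temperature).
pid = PID(kp=0.5, ki=0.1, kd=0.05)
value = 68.0
for _ in range(100):
    control = pid.update(72.0 - value, dt=1.0)
    value += 0.3 * control        # the system responds partially to our input
```

The integral term is what removes the steady-state error, and the derivative term is what damps the correction when the error is already shrinking fast, exactly the "slow down a little" intuition above.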
And you can see how the coefficients affect the controller's performance. Fast, smooth convergence is very good, right? A thermostat that doesn't have to do a lot of work is a better one, and one that doesn't exhibit the kind of oscillating behavior you saw for K = 1.6 is a good one. You don't want to be hot, cold, hot, cold, hot, cold before you eventually get comfortable. That's not very nice.

So now I want to talk about how these systems respond. Some systems react slowly to changes, in addition to the coefficient effects that we talked about. Why is that? How does that work? Some systems do not respond to the input all at once. Here we have a pot on a stove; can you see it? The input here is the fire in the stove. You can put more wood in, or you can take wood out. The pot then heats up whatever is in it, and that takes time. What you are really interested in is the food in the pot, and there is a gap between the time you put in the wood and the time the food is ready for consumption. That gap is called lag.

There is also delay. Think of a garden hose: when you turn on the tap, it takes time for the water to travel the entire hose and come out the other end. Lag and delay are slightly different kinds of time behavior.

What does this mean for Travis? Remember this graph, this sad graph. We want to add more capacity, the blue part, so that we have less of the red. We use EC2 and this thing called an Auto Scaling Group. It puts EC2 instances together so that they can serve a single purpose. To change the capacity, you use dynamic scaling policies, which let you choose how you add or subtract capacity from the Auto Scaling Group. They can be driven by metrics, by a schedule, or by Simple Queue Service, and I don't think you can combine more than one kind of trigger. The policy changes the scaling by changing capacity.
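The pot-on-the-stove versus garden-hose distinction above, lag versus delay, can be sketched numerically. Both functions below are toy models with made-up parameters:

```python
from collections import deque

def lagged_response(inputs, alpha=0.2):
    """Lag (the pot): the output moves only part way toward the
    input at each step, so it rises gradually."""
    out, value = [], 0.0
    for u in inputs:
        value += alpha * (u - value)
        out.append(value)
    return out

def delayed_response(inputs, d=3):
    """Delay (the hose): the output reproduces the input exactly,
    but d steps later (dead time)."""
    pipe = deque([0.0] * d)   # water already sitting in the hose
    out = []
    for u in inputs:
        pipe.append(u)
        out.append(pipe.popleft())
    return out

step = [0.0] * 2 + [1.0] * 8      # input switches on at step 2
lag_out = lagged_response(step)    # rises smoothly, never quite reaching 1.0
delay_out = delayed_response(step) # stays at 0, then jumps straight to 1.0
```

Lag smears the response out in time; delay shifts it wholesale. Both make a controller's life harder, because the effect of an input is not visible immediately in the output being measured.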
You choose how many instances you want to add or how many you want to subtract. You can set the group to an exact capacity: if this condition fires, I want five instances in this Auto Scaling Group, for example. You can change the capacity by a fixed amount. Or you can change it by a percentage: if you have 100 instances in the Auto Scaling Group, you can go 10% up or 10% down, and it doesn't matter how many you have in the group, because the adjustment is done by percentage.

Notice that none of these maps nicely onto the PID controller that we talked about, so we improvise a little bit. We use metric-driven policies to change capacity, meaning that we measure something. If we have too many instances for the current load, we reduce the group by one. If we don't have enough, meaning there is more demand than we can handle, we add capacity. Here is how it behaves. The threshold is shown in red, and I define headroom as the difference between the capacity and the current load. When the headroom exceeds the threshold, we reduce the number of instances in the Auto Scaling Group, and it keeps reducing until the headroom goes under the threshold. Then demand rises, and we start adding capacity a little later than we would like. You can see that when a big surge of demand arrives, we cannot meet it, because there is a delay in the system response: we get a big dip where we cannot meet the demand for a long time.

Here the headroom is set to about 40, and the graph looks like this. It is still red, still a very sad graph, but it is a little better than the previous one. Bigger headroom? If you give the Auto Scaling Group a bigger headroom, you get a lot less red. Yay! But it costs a lot of money. This one is for 24 hours: you can see that we still have red, but it is a lot better than what we had. This one is for three days.

You can also get shorter delays. The reason we have a long delay is an implementation detail.
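The headroom policy described above, scaling on the difference between capacity and current load, can be sketched like this. This is my own illustration of the idea, with hypothetical numbers, not the actual Travis CI or AWS implementation:

```python
def scaling_decision(capacity, load, target_headroom=40, band=10):
    """Headroom-based scaling sketch: keep (capacity - load) near a target.
    Returns +1 to add an instance, -1 to remove one, 0 to do nothing.
    target_headroom and band are hypothetical illustration values."""
    headroom = capacity - load
    if headroom > target_headroom + band:
        return -1   # over-provisioned: too much idle capacity, remove one
    if headroom < target_headroom - band:
        return +1   # under-provisioned: demand is eating the headroom, add one
    return 0        # within the comfort band: leave the group alone
```

For example, with 100 instances and 40 busy, headroom is 60 and the policy removes an instance; with 75 busy, headroom is 25 and it adds one. Note this is essentially a crude proportional-only controller with a dead band, which is exactly why the surges in the graph outrun it: it has no integral or derivative term, and the instance boot delay sits between its decision and its effect.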
I am not going to go into that implementation detail here, but shortening the delay would certainly help: a shorter delay lets your system respond faster to changes in demand. There are also other things that I probably haven't thought of. I think I am running out of time. This graph is from just yesterday, or the day before. You can still see some red, but it is not as bad as it used to be. Those were the two competing goals: we face huge demand, and we want to save money. We want to provide the best service while making sure that we keep costs down. This one is for three days; you can see that the graph is a little less granular. While you look at this graph, I will welcome your questions.

The question is: can you predict the surge in demand based on time of day or other signals, for example when the United States wakes up? In some ways we could. The scheduled scaling policies that I mentioned before make it easy to do that kind of scaling; the problem is that I do not think you can combine schedules with the other policy types. But yes, we probably could, and we used to. It is not cheap.

The next question is whether speeding up the builds would help. Speeding up the builds does not change the demand: even if each build is shorter, the number of builds stays the same, so the demand stays the same. We can talk about that later.

The question after that is: can you poll more or less frequently and respond better? The Auto Scaling Group's polling interval is 60 seconds, I believe, and the total response time is that interval plus the delay and all of those things. I don't think changing the polling frequency is the answer: it takes a long time to bring up capacity, and that is the real issue there. We have tried to make that faster.
I think the gentleman back there had the right idea: if we can predict what the capacity needs to be at a certain time, we can pre-boot instances and meet the demand that we can fairly anticipate. That is something we can look into.

The question is: can we do better on the EC2 boot-up time? Yes and no. This is the implementation detail that I didn't go over: we have to pull images from the internet to make an instance ready. You can optimize that in many different ways. You can have smaller Docker images, for example, or you can hold fewer Docker images on each instance so that they can be prepared correctly in advance. But there are trade-offs to those approaches, and our operations team is investigating them at this time.

The question is: have you thought about putting a Docker registry closer to the EC2 groups so that the pull is faster? Good question.

Do we offer dedicated resources so that customers can schedule their builds? For the hosted platform, we do not. We do have an enterprise product that allows on-premise installations to run the builds.