Welcome to another example with OptaPlanner. In this case, we will optimize the cloud. Here you can see a simple use case: we have two computers, and we have six processes that we need to assign to those two computers. The two computers are in the cloud, of course. So when we assign processes to these computers, we need to make sure that each computer has enough CPU, enough memory, and enough network bandwidth to accommodate all of its processes. In this case, we assigned four processes to computer 0, and you can see that there is enough CPU: only 6 GHz of the 24 GHz available on this computer is being used. There is also enough memory: 10 GB of the 69 GB available. But the network bandwidth is just barely enough: 16 GB of the 16 GB available is being used. That means we cannot assign either of the other two processes to this machine, because that would break the network bandwidth hard constraint. Now, for every computer that we use, we also have to pay a maintenance cost. Because we have processes assigned to this computer, we have to pay $4,800.

So let's take a look at a bigger data set. Here we have 100 computers; if you scroll down, you can see a few of them, again with different characteristics. Some have more CPU power, more memory, and so forth, because they are newer, but then they usually also have a higher price. And we have 300 processes which are unassigned, so we need to figure out which process to put on which computer. Well, OptaPlanner figured that out for us. I'm just going to stop it for a moment. Here is its first assignment. As you can see, none of the hard constraints are broken; you can see the zero broken hard constraints here at the bottom.
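The capacity check described for computer 0 can be sketched in a few lines of Java. This is just an illustration of the rule, not OptaPlanner's actual API; the method and parameter names are made up for the example.

```java
// Illustrative sketch: a computer's assignment is feasible only if every
// resource (CPU, memory, network bandwidth) stays within its capacity.
public class CapacityCheck {

    static boolean fits(int usedCpu, int cpuPower,
                        int usedMemory, int memory,
                        int usedNetwork, int networkBandwidth) {
        // A hard constraint breaks as soon as any single resource is exceeded.
        return usedCpu <= cpuPower
                && usedMemory <= memory
                && usedNetwork <= networkBandwidth;
    }

    public static void main(String[] args) {
        // Computer 0 from the example: 6 of 24 GHz CPU, 10 of 69 GB RAM,
        // 16 of 16 GB network bandwidth in use.
        System.out.println(fits(6, 24, 10, 69, 16, 16)); // prints true
        // A process needing any extra bandwidth would break the constraint:
        System.out.println(fits(6, 24, 10, 69, 17, 16)); // prints false
    }
}
```

Note that even when CPU and memory have plenty of headroom, one saturated resource (here, network bandwidth) is enough to block further assignments.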
And if we scroll down, you can also clearly see that there is always enough CPU, RAM, and network bandwidth to accommodate all of the processes. Sometimes there is only one process on a computer; sometimes, as in this case, there are 16 or 18 processes on the same computer. The computers with more hardware, and a higher price, will naturally be able to host more processes. Now, let's see what happens if we give it a little more time, because right now we arrive at a price of $126,000, and that's pretty expensive. As we give OptaPlanner more time to optimize, you can actually see that it moves processes around and shuts computers down. For example, it just started using computer 7, but if we scroll down, you see that more and more computers go black. So it figures out a way to use the least number of computers to run all of these processes at the same time, without any process ending up on a machine with too little hardware. Nice.

So let's take a quick look at the domain model behind this. We have a process, here on the right side, which requires a certain amount of CPU, memory, and network bandwidth. We need to assign it, through a planning variable, to a computer, here on the left. The computer in turn has a certain CPU power, memory, and network bandwidth. We just have to make sure that the sum of the CPU requirements of all the processes that belong to one computer does not exceed the CPU power of that computer, and likewise for the other resources. And for any computer which has at least one process, we have to pay the maintenance cost. Okay, great. But that's a pretty simple example, right? There are only three resources: CPU, memory, and network bandwidth.
And there is no notion of where the computers are located, in which data center or building they sit, and no notion of any relationship between the processes. So let's take a look at a more complex example, called the machine reassignment problem. In this case, we again have to assign processes, this time to machines instead of computers. But there is one big difference: all the processes are already assigned to machines, so we have to move them, and there is of course a move cost involved if we decide to move a process from one machine to another.

Let's take a look at the domain model for this one. It's a bit more complex, as you can see. Again we have a process over here, and again we have a machine, which corresponds to the computer in the previous example. I've split the process into a Process class and a ProcessAssignment class, with a one-to-one relationship between them; that makes it far easier to write and to understand. The process assignment has an original machine, which is the machine the process is currently on. But we have to choose on which machine we are going to put it, so that is the planning variable. If we put it on the same machine, it just stays there and we don't have to pay a cost. If we move it to a different machine, we have to pay a move cost, which is a soft constraint.

Okay, so what are the hard constraints? First, we have the same hard constraint as in the original example: each process needs a number of resources, such as CPU, RAM, and network bandwidth. In this case, however, the set of resources is dynamic; it depends on the data set. Some data sets have up to 12 resources, such as hard disk space and so forth. And each machine has a certain capacity for each resource.
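The Process/ProcessAssignment split and the move cost rule described above can be sketched like this. The class and field names are illustrative, not OptaPlanner's actual API.

```java
// Sketch of the machine reassignment planning entity: the assignment
// remembers the machine the process originally ran on, and the planning
// variable chooses the machine it will run on.
public class MoveCostSketch {

    static class Machine {
        final String name;
        Machine(String name) { this.name = name; }
    }

    static class ProcessAssignment {
        final Machine originalMachine; // where the process currently runs
        final int moveCost;            // soft penalty for moving it
        Machine machine;               // the planning variable
        ProcessAssignment(Machine originalMachine, int moveCost) {
            this.originalMachine = originalMachine;
            this.moveCost = moveCost;
            this.machine = originalMachine; // initially it stays where it is
        }
    }

    // Soft constraint: the move cost is paid only when the chosen machine
    // differs from the original machine.
    static int moveCost(ProcessAssignment pa) {
        return pa.machine == pa.originalMachine ? 0 : pa.moveCost;
    }
}
```

Keeping the immutable problem facts (the Process) separate from the mutable planning entity (the ProcessAssignment) is the design choice the speaker alludes to: the solver only ever changes the `machine` field.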
Now, the difference in this case is that each machine has not only a maximum capacity, but also a safety capacity. For example, a machine might have 8 gigabytes of RAM available as its maximum capacity, and a safety capacity which is usually about 90% of that. So what's the difference? We cannot go over the maximum capacity; that would break a hard constraint. But as a safety precaution, we also want to stay below the safety capacity (it varies from machine to machine, so it's not always 90%). When we go over the safety capacity of a machine, we incur a soft constraint penalty.

If we go back to the example, you can see exactly that happening here. None of these processes use more than the maximum capacity, which is the number on the right; they are always lower. But a number of them do use more resources than the safety capacity, and those are currently highlighted in orange. They make up most of the soft score that we're getting here at the bottom. You can see a very big number here, because we break the safety capacity a lot. So it would be nice to use the spare room below the safety capacity elsewhere, and move processes into it without harming the other resource capacities. That's the name of the game, and that's what OptaPlanner does for us.

So let's take a look at the domain model again. That was one of the hard constraints, but there is another one: each process belongs to a service. For example, the calendar service has multiple processes. The reason a service runs multiple processes is failover: if one of those processes goes down, the other processes take over for it.
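Before we go on to the service constraints, the two-level capacity rule described above can be sketched as a pair of penalty functions. Going over the maximum capacity breaks a hard constraint, while going over the lower safety capacity only costs soft score. The names and numbers are illustrative.

```java
// Sketch of the safety-capacity rule from the machine reassignment problem.
public class SafetyCapacitySketch {

    // Hard penalty: the amount by which usage exceeds the maximum capacity.
    static int hardPenalty(int usage, int maximumCapacity) {
        return Math.max(0, usage - maximumCapacity);
    }

    // Soft penalty: the amount by which usage exceeds the safety capacity.
    static int softPenalty(int usage, int safetyCapacity) {
        return Math.max(0, usage - safetyCapacity);
    }
}
```

For example, with a maximum capacity of 8000 MB of RAM and a safety capacity of 7200 MB (90%), a usage of 7500 MB is still feasible (`hardPenalty` is 0) but gets a soft penalty of 300, which is exactly what the orange highlighting in the demo represents.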
Now, the thing is, of course, that if two processes of the same service were running on the same machine and that machine went down, there wouldn't be much failover, because both processes would go down. So there is a hard constraint which says that if two processes belong to the same service, they must run on different machines. And it's not just the machine that can go down; the entire location can go down. An entire data center can get hit by an earthquake, or can lose its connection to the internet. So there is another hard constraint which says that the processes of the same service should be spread out as much as possible across different locations. We will still sometimes have multiple processes of the same service in the same location, but there is a specific amount of spread we want to achieve. And on top of that hard constraint, there is another one: services depend on each other. For example, the calendar service depends on the mail service, so we need to make sure that any process of the calendar service runs in the same neighborhood as at least one process of the mail service. There are other hard constraints like that as well, and of course all of these are implemented and OptaPlanner takes all of them into account.

Let's see what happens when we start solving this. As you can see, as we give it more and more time, the score actually goes down quite a lot, and we get a better, more optimized cloud. What you probably notice is that the screen doesn't refresh. That's because refreshing just takes too long; it would hang the application, because there is simply too much data here. So let me stop it, and you'll see that the screen does refresh. All right, okay, so it's refreshed.
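Two of the service-related hard constraints described above, the service conflict and the location spread, can be sketched like this, assuming a simple model where each process of a service knows which machine and which location it runs on. All names are illustrative.

```java
import java.util.*;

// Sketch of two service-related hard constraints from the machine
// reassignment problem: no two processes of one service on the same
// machine, and a minimum spread of a service over distinct locations.
public class ServiceConstraintSketch {

    // Service conflict: true if any service has two processes on the same
    // machine. The map goes from service name to the machines its
    // processes are assigned to (one entry per process).
    static boolean hasServiceConflict(Map<String, List<String>> machinesByService) {
        for (List<String> machines : machinesByService.values()) {
            if (new HashSet<>(machines).size() < machines.size()) {
                return true; // a duplicate machine means a shared machine
            }
        }
        return false;
    }

    // Spread: a service's processes must cover at least spreadMin distinct
    // locations; returns how many locations are missing (0 = satisfied).
    static int spreadShortfall(List<String> locationsOfService, int spreadMin) {
        return Math.max(0, spreadMin - new HashSet<>(locationsOfService).size());
    }
}
```

The dependency constraint (calendar processes near at least one mail process) follows the same pattern, checked over neighborhoods instead of machines or locations, so it is omitted here.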
And now there should be less orange, or at least the orange should be closer to the safety capacity and not so close to the maximum capacity. Okay, so to conclude, I'd like to show you one more thing, which is real-time planning. I'll show it on the simple example, because it's easier to see there. So here we are again scheduling these 100 computers. Let's continue. It was already pretty good, and it's still improving a little. Now what we're going to do is make a real-time change: suppose one of the computers goes down, it actually gets killed. So we kill this computer and watch what happens to its processes. They get unassigned for a split second, and the planner immediately assigns them again. The score didn't get much worse, which means it didn't restart from scratch; it could actually do this in real time. In the logs, I can show that this happens in just a few milliseconds.

Thanks for watching this demonstration. If you want to know more about OptaPlanner, just go to the website optaplanner.org. And if you want to try this example yourself, just download the zip, unzip it, and run the examples. Thanks for watching. Bye.