Hi, welcome to another OptaPlanner video. This time we're going to look at using a decision table inside OptaPlanner to define the score of a planning problem. OptaPlanner optimizes a planning problem, and in this case we're going to look at the cloud balancing problem again, where we have to assign processes to computers, and we're going to use a decision table to define some of the constraints. That means we can add additional soft constraints, or remove existing ones, simply by editing the decision table. The decision table, as you can see, is an XLS file, an Excel file, which you can open with Excel or LibreOffice. Now let's take a look at the planning problem. The planning problem is assigning processes to computers: we have nine processes and three computers, and, for example, we've assigned four of those processes to computer 0. Let me show those four processes. Each of them requires some CPU power, some memory, and some network bandwidth. If we sum their requirements, we have enough CPU power on this machine: the machine has 24 gigahertz of CPU power and the processes require 8 gigahertz in total, and the same holds for memory and for network bandwidth. Those are the three hard constraints, and they're still in there, still being used. The normal example also has a soft constraint that looks at the maintenance fee of each computer and tries to shut computers down, but I've disabled that soft constraint here. Instead we'll define other soft constraints, and we'll use the decision table to do that. So in this case we presume we don't have to pay any maintenance fee for the computers: they're sitting there anyway, so we'll try to use them as optimally as possible. Let's take a look at what we have as soft constraints.
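Before moving on to the soft constraints, the three hard constraints described above can be sketched in plain Java. This is a hypothetical sketch, not the real OptaPlanner example classes: the record names, fields, and units are assumptions, and the real score calculation happens in DRL rules, not in a helper like this.

```java
import java.util.List;

public class HardConstraintCheck {

    // Simplified stand-ins for the example's domain classes (names assumed).
    record CloudComputer(int cpuPower, int memory, int networkBandwidth) {}
    record CloudProcess(int requiredCpuPower, int requiredMemory, int requiredNetworkBandwidth) {}

    // All three hard constraints hold when the summed requirements of the
    // assigned processes fit within the computer's capacity.
    static boolean fits(CloudComputer computer, List<CloudProcess> assigned) {
        int cpu = 0, mem = 0, net = 0;
        for (CloudProcess p : assigned) {
            cpu += p.requiredCpuPower();
            mem += p.requiredMemory();
            net += p.requiredNetworkBandwidth();
        }
        return cpu <= computer.cpuPower()
                && mem <= computer.memory()
                && net <= computer.networkBandwidth();
    }

    public static void main(String[] args) {
        // Like in the video: a 24 GHz machine whose processes need 8 GHz in total fits.
        CloudComputer c = new CloudComputer(24, 32, 16);
        List<CloudProcess> procs = List.of(
                new CloudProcess(3, 4, 2), new CloudProcess(5, 6, 3));
        System.out.println(fits(c, procs)); // prints "true"
    }
}
```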
As you can see, I've defined them in this decision table. The first row says that if we have a risk of running out of memory on a certain machine, we get a soft impact of minus 100, which is a pretty heavy soft impact, actually, because all the others, as you can see, have lower impacts. What happens here is: if there's a certain computer whose free memory is less than two gigabytes, the score goes down by 100. So if OptaPlanner finds a solution that violates this constraint one or multiple times, that impact shows up in its score, and it will try to avoid that and find solutions with a better score. The second row is the healthy case: if we have at least two gigabytes of free memory and at most eight, the score impact is zero. We could actually remove this row, and it would be better to remove it, but I've added it here for clarity. Then, starting from eight gigabytes of free memory, the remaining rows kick in. Having eight gigabytes of free memory is not bad in itself, but if that memory is unused while the machine has no CPU power or no network bandwidth available, then new processes that come into the system cannot be scheduled on that machine: although it has a lot of free memory, it doesn't have the other two resources available, so it will have a tough time hosting any additional processes. We paid for that memory at some point, so we want to use it as efficiently as possible, and we want to make sure that if new processes come in, they can actually be hosted on that machine. So: if a machine has eight gigabytes of free memory or more, but less than four gigahertz of CPU power available, we lose 10 points.
If we have the same case, but with at least four gigahertz of CPU power available and less than eight gigahertz, that's still not great: a big process that requires five gigahertz of CPU power can't be scheduled on this machine. So we punish this again, but this time a lot less: only minus 1, ten times less, so it's less bad. We do the same for network bandwidth: if a machine has eight gigabytes of free memory but less than four gigabytes of network bandwidth available, that's also minus 10, and similarly for the last row. For the record, it is possible for a machine to violate two of these rows at once: a machine with eight gigabytes of free memory but no CPU power and no network bandwidth suffers twice, so it gets a score impact of minus 20, right? Now let's take a look at the result. Let's take another data set, a bigger one, though still small really, with 100 computers and 300 processes, and start solving. Give it a little bit of time; you can see the score here at the bottom, and it's finding better and better solutions. Now let's look at what it's doing. First of all, all the hard constraints hold: there's enough CPU power, memory, and network bandwidth for all of the processes; as you can see, nothing is red. Second, it's probably primarily looking at that first constraint, the one that says, let me show you again, risk of running out of memory. If we look into the list of machines, you'll see that none of them have less than two gigabytes free. For example, this machine has six gigabytes free, and so forth.
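Taken together, the soft rows walked through above amount to a per-computer penalty roughly like the following. This is a hypothetical Java sketch, not OptaPlanner's generated code: the method name and units are assumptions, and the exact boundary handling at eight gigabytes is read off the video's description.

```java
public class SoftPenaltySketch {

    // freeMemory in GB, freeCpuPower in GHz, freeNetworkBandwidth in GB
    // (units assumed). Each matching decision-table row subtracts its penalty.
    static int softPenalty(int freeMemory, int freeCpuPower, int freeNetworkBandwidth) {
        int soft = 0;
        if (freeMemory < 2) soft -= 100;            // risk of out-of-memory
        if (freeMemory >= 8) {                       // lots of unused memory...
            if (freeCpuPower < 4) soft -= 10;        // ...but almost no CPU left
            else if (freeCpuPower < 8) soft -= 1;    // ...and only some CPU left
            if (freeNetworkBandwidth < 4) soft -= 10;
            else if (freeNetworkBandwidth < 8) soft -= 1;
        }
        return soft;
    }

    public static void main(String[] args) {
        // A computer with 8 GB free memory but no CPU and no bandwidth
        // matches both minus-10 rows, so it suffers twice:
        System.out.println(softPenalty(8, 0, 0)); // prints "-20"
    }
}
```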
You can see this machine has two gigabytes free, but two is not less than two, so that's fine. You can also see that some computers are not used; in fact, all the computers which have just two gigabytes of memory are unused. That's logical: as soon as you schedule a process on one of them, which in this model uses at least one gigabyte, only one gigabyte would remain free, violating that soft constraint. OptaPlanner avoids that because, as you can see here, it has already found a better solution where it doesn't lose those 100 points; in fact, it only loses six points in this case. Now look at the other constraints, the ones that say: if we have at least eight gigabytes of free memory, we also want some free CPU power and network bandwidth. In the relevant cases, that's exactly what we see. Here we have a lot of free RAM, and quite a lot of the other resources free too. Let's look further. This one doesn't have eight gigabytes of free memory left, so the solver isn't trying to keep headroom there. But in the cases where we do, you can see it's trying to keep a lot of headroom. That means that if new processes come into the system, we can likely schedule them on machine 31, and similarly on machine 38, and so forth. Now let's stop this; we have a score of minus four, and I'll put this window to the side, because now it's time to change the rules of the game. If you look at the solution, there's one thing that bothers me. We no longer risk an out-of-memory error, because we leave at least one gigabyte of memory free just in case some process uses far more than it said it would. But we do have these cases where the network bandwidth is completely used up, and there are quite a few of them.
That means there's a risk that some requests coming to the server time out, right? So let's add constraints for that. I'm going to add two: one is a risk of running out of CPU, and the other is a risk of running out of network bandwidth. Let's make it a little prettier; let's remove the borders, over here too. Now, what do we need to do? Well, if the free CPU power is less than two gigahertz, we have a problem, and the penalty is minus 50. And for network bandwidth: if a machine has less than two gigabytes of free network bandwidth, we apply minus 100. I keep messing up this border, so let's fix it. Okay, great. We've just added these two rules; let's take a look at their impact. Let me just, oh, that's not what I wanted, let me start the application again. On the left we have the application which I ran last time, and here we've started the application again. We load the same data set, we solve it, and we see that the score starts out a lot worse, of course, because we added constraints: we made it harder to find a good solution, so naturally that's reflected in the score. But if you look at the solution, what we notice is that it's now making sure that on each of these computers there is some CPU power and some network bandwidth available too, and in most cases more than the minimum. We've effectively lowered the usable maximum of each resource type on every computer: the maximum CPU power, the maximum memory, and so forth.
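The two newly added rows can be sketched the same way as before. Again a hypothetical standalone sketch, with thresholds as read off the table in the video and units assumed: any computer left with under two gigahertz of free CPU power or under two gigabytes of free bandwidth now costs points.

```java
public class AddedRulesSketch {

    // The two rows added in this step of the video (names and units assumed).
    static int addedRulesPenalty(int freeCpuPower, int freeNetworkBandwidth) {
        int soft = 0;
        if (freeCpuPower < 2) soft -= 50;           // risk of running out of CPU
        if (freeNetworkBandwidth < 2) soft -= 100;  // risk of running out of bandwidth
        return soft;
    }

    public static void main(String[] args) {
        // A fully saturated computer violates both new rows:
        System.out.println(addedRulesPenalty(0, 0)); // prints "-150"
    }
}
```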
Although, if you look at the score, we probably still have two cases where the solver doesn't succeed in that: two cases where we still use the full amount of network bandwidth, or something like that. Let me see if I can find them... I don't immediately see them, but they're definitely in here somewhere. The other thing we can see, of course, is that we get a different solution. This time we don't have a lot of space left on the first machine, because we need to reserve more space on the other machines to get that extra network bandwidth headroom; it's load balancing more aggressively. Which solution is better depends, of course, on what the user wants for his planning problem. Within one run it's simply a matter of getting the best score, but we can't compare the scores between these two runs, because they use different rules. Okay, let's stop this. So how did we implement this? Let's take a look at the technical side. Of course we have this Excel file. In our solver configuration, we kept our original rules, in which we disabled the original soft constraint, and then we added a scoreDrl entry for this XLS file too; we can just reference the XLS file directly there. In the next version of OptaPlanner, we'll also support putting a full file path there, if you want to, for example, put the file on the desktop and allow users to change it. For portability that's not a great approach, but for experiments and proofs of concept it's useful. Okay, so we just add the XLS file.
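The solver configuration change can be sketched like this. A minimal sketch: the element names and file names are assumptions, not copied verbatim from the example, but the idea from the video is that the spreadsheet is listed alongside the remaining DRL in the score director configuration.

```xml
<scoreDirectorFactory>
  <scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
  <!-- The original rules, with the maintenance-fee soft constraint disabled -->
  <scoreDrl>cloudBalancingScoreRules.drl</scoreDrl>
  <!-- The decision table spreadsheet, referenced directly as an XLS resource -->
  <scoreDrl>cloudBalancingDecisionTable.xls</scoreDrl>
</scoreDirectorFactory>
```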
Then, looking at the code: what's also different is that I've changed the rules a bit and added what we call a CloudComputerUsage object, which basically just encapsulates the amount of free memory, free CPU power, and free network bandwidth for every computer; there's one CloudComputerUsage per computer. Now let's take a look at the XLS file itself. It might not seem like there's a lot of technical machinery behind it, but there is, once you reveal the parts you don't want users to change, or even to see. What we have here is: we say it's a RuleSet, we give the package name, we do some imports of the classes we're going to use, and we can add some notes, of course. Then we have this RuleTable, and for the record, you can have multiple rule tables. In the rule table I've defined a number of condition columns. For the record: if a cell is empty, that condition is simply not added for that row. You can see here that, for example, the value two is substituted into this condition. The condition starts by declaring which class it applies to, CloudComputerUsage in this case, and then states that the freeMemory property needs to be less than a parameter; here, the parameter is two. The same happens for the other condition columns, where we check, for memory for instance, whether it's less than or equal to the parameter: less than or equal to two, or less than or equal to eight, and so on. Each of these rows becomes a single rule, right?
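To make that concrete, a single row, say the risk-of-out-of-memory row, compiles into roughly this DRL rule. The rule name, binding, and scoreHolder call are assumptions based on the video, not the verbatim generated output:

```
rule "riskOfOutOfMemory"
when
    // left-hand side: the condition column with the row's parameter, 2, filled in
    $usage : CloudComputerUsage(freeMemory < 2)
then
    // right-hand side: register a soft constraint match of -100
    scoreHolder.addSoftConstraintMatch(kcontext, -100);
end
```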
If one of these rules matches, that's the left-hand side, then the right-hand side, the action, gets executed, and here we just say: there is a soft constraint match, and we add that parameter to the score. So we subtract 100 points here, 10 points there, and so forth. That's basically the technical side behind it. Again, before we hand this to the users, we hide that part and tell them: you can change this part. Now, decision tables are just one way you could allow your users to change your rules, whether soft or hard constraints. There are many other ways to do it: you can build your own GUI with which you generate the rules, you can use the Drools Workbench, and so forth. You can even use other rules technologies such as Drools scorecards, which you shouldn't confuse with the score calculation we have here. So there are a number of things you can do, and they all integrate nicely, because anything you can do in Drools you can use in OptaPlanner. And of course, if you don't like rules, you can still use plain old Java score calculation in OptaPlanner, but that's far more technical and far less flexible than this. Okay, thanks for watching, and if you want to know more about OptaPlanner, just go to the website optaplanner.org.