OK. Today we are going to talk about optimized workload placement using a new batch scheduling algorithm. We work in the Novot Cloud Technology Center, where we integrate cloud technologies, and this is one of the real problems we discussed with a customer; we went ahead and implemented this small tool to try to help them.

First, the Nova scheduler as it works today. A workload order translates into a request to the Nova API. The Nova scheduler takes the request from the API, consults Nova placement and the resource trackers reporting from the compute nodes, and decides which compute node to place the workload on. Nova's main scheduling algorithm is called the filter scheduler. The scheduler first goes through the list of compute nodes and applies filters, which can be based on RAM, storage, or affinity and anti-affinity. After filtering out some compute nodes, Nova applies weights, which can be RAM- or I/O-specific to each compute node, so the remaining nodes are ordered; the most highly weighted compute node is then chosen to host the workload.

In the following, I will walk through some basic examples to explain the main idea of batch scheduling versus sequential scheduling. The first example looks at only one dimension: virtual CPUs. We have two compute nodes, and I show here the virtual CPU utilization on each. Requests come in from a workload pipeline; the first is, for example, a one-vCPU request, and Nova can easily find a placement for it. Then comes the second request, but now nothing fits, because the Nova scheduler has no visibility into future requests. A batch scheduler, on the other hand, sees the whole set of workloads and can plan ahead.
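The one-dimensional vCPU example above can be sketched in a few lines. This is an illustrative toy, not the actual tool from the talk: a sequential scheduler commits to each request as it arrives, while a batch scheduler can reorder the whole batch (here with a simple largest-first bin-packing heuristic) before placing anything. All function names are made up for this sketch.

```python
def sequential_place(free, requests):
    """First-fit each request against per-node free vCPUs, in arrival order."""
    placed = []
    for req in requests:
        for node, f in enumerate(free):
            if f >= req:
                free[node] -= req
                placed.append(node)
                break
        else:
            placed.append(None)  # no node fits this request anymore
    return placed

def batch_place(free, requests):
    """Plan the whole batch: place the largest requests first, which
    avoids fragmenting the free capacity across nodes."""
    order = sorted(range(len(requests)), key=lambda i: -requests[i])
    placed = [None] * len(requests)
    for i in order:
        for node, f in enumerate(free):
            if f >= requests[i]:
                free[node] -= requests[i]
                placed[i] = node
                break
    return placed

# Two nodes with 3 and 2 free vCPUs; requests arrive as [2, 3].
print(sequential_place([3, 2], [2, 3]))  # [0, None]: the 3-vCPU request is stranded
print(batch_place([3, 2], [2, 3]))       # [1, 0]: both requests are placed
```

The sequential scheduler greedily puts the 2-vCPU request on the larger node and then cannot fit the 3-vCPU request anywhere, while the batch planner places both.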
The second example is two-dimensional: CPU and RAM together. Same idea: Nova can place the first request, but the second may not fit because of the other dimension, while the batch scheduler can easily find an optimal solution. Workloads are multi-dimensional by nature, and there are many characteristics to consider: RAM, CPU, storage. There are also performance-related resources, for example SSDs and SR-IOV VFs, and policy-related constraints like affinity and anti-affinity. There are even factors most people do not consider today, like thermals: when doing placement, you want to avoid hot spots. All of these can be optimized in the scheduling algorithm.

I also have an example showing the complexity a flavor can carry: this flavor requests 6 CPUs spread across two NUMA nodes, and even the memory request differs between the NUMA nodes. Finding an optimal solution for this kind of multi-dimensional workload request is usually difficult.

So we propose a batch scheduler that takes a work order, and also policies, as input. The policies can cover things we mentioned, such as affinity and anti-affinity; one example of an anti-affinity policy is whether instances must be spread across different compute nodes or actually across racks. The output is a deterministic placement plan: the scheduler works out a plan that puts each workload on a particular host or compute node, and Nova simply passes this through to placement instead of recalculating the compute node. One way to do this is by specifying the availability zone: each availability zone can correspond to one compute node, so you can effectively pin the compute node that way.
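The two-dimensional point can be made concrete with a small sketch (again illustrative, not the talk's implementation): a request only fits a node if every dimension fits at once, so a node with plenty of free vCPUs can still reject a request on RAM, and vice versa.

```python
def fits(free, req):
    """True only if the request fits the node in every dimension."""
    return all(f >= r for f, r in zip(free, req))

def place(free_by_node, req):
    """First-fit a (vCPU, RAM) request; mutates the chosen node's free capacity."""
    for node, free in enumerate(free_by_node):
        if fits(free, req):
            for d in range(len(free)):
                free[d] -= req[d]
            return node
    return None

nodes = [[4, 8], [2, 16]]       # (free vCPUs, free RAM in GB) per node
print(place(nodes, (2, 12)))    # 1: node 0 has the vCPUs but not the RAM
print(place(nodes, (4, 4)))     # 0: node 1 has the RAM but not the vCPUs
```

With more dimensions (storage, SR-IOV VFs, NUMA topology), the same fit check grows more constraints, which is exactly why a sequential greedy choice gets harder to make well.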
The scheduler's core algorithm first applies all the constraints and then does multi-dimensional optimization. The problem is a known NP-hard problem, which means finding the optimal solution takes a long time; the search algorithm takes a long time. In the literature there are known approaches, and people usually use heuristic algorithms. Examples are most-loaded-first and least-loaded-first: most-loaded-first tries to pack the workload onto certain compute nodes, while least-loaded-first tries to balance the workload across all compute nodes. Genetic algorithms, on the other hand, can take all the multi-dimensional requirements into account and try to optimize them. Simulated annealing is very similar to genetic algorithms, but in this implementation we use a genetic algorithm.

The work we did was to set up a simulated environment and evaluate the results. The environment has 10 nodes, each with 40 cores and 500 GB of memory, and we consider a mixed load of NUMA and non-NUMA workloads. We compared the genetic algorithm against the most-loaded-first and least-loaded-first algorithms, and the results are very promising for the genetic algorithm. We considered two workload profiles: the high profile has high vCPU requirements, ranging from 8 to 24 vCPUs, and the medium profile ranges from 4 to 12 vCPUs. The workloads are randomly generated, and we try to place them. In the diagram, the vertical axis shows the percentage of the total workload that was placed, and 1, 2, 3 denote the different randomly generated workload samples for each profile.

So here is the key takeaway from this lightning talk: a batch scheduler provides better workload placement.
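To show what a genetic algorithm for this problem looks like, here is a minimal single-dimension sketch under my own simplifying assumptions (this is not the talk's implementation, and all parameters are illustrative): a chromosome assigns each request in the batch to a node, fitness counts the vCPUs that can be placed without exceeding node capacity, and evolution proceeds by selection, one-point crossover, and mutation.

```python
import random

CAP = [40] * 3                       # per-node vCPU capacity (illustrative)
REQS = [8, 24, 12, 4, 16, 20, 8]     # one batch of vCPU requests

def fitness(chrom):
    """Total vCPUs successfully placed by this request-to-node assignment."""
    used = [0] * len(CAP)
    placed = 0
    for req, node in zip(REQS, chrom):
        if used[node] + req <= CAP[node]:
            used[node] += req
            placed += req
    return placed

def evolve(pop_size=30, gens=50):
    pop = [[random.randrange(len(CAP)) for _ in REQS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(REQS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:             # mutation
                child[random.randrange(len(REQS))] = random.randrange(len(CAP))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", sum(REQS), "vCPUs placed")
```

For comparison, most-loaded-first would sort nodes by current load descending before first-fit, and least-loaded-first ascending; the GA instead searches over whole assignments, which is what lets it handle several dimensions at once.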
And a genetic algorithm is generally a good approach for multi-dimensional optimization; our simulation results show it performs much better than the most-loaded-first and least-loaded-first algorithms. We are looking to extend our work, get involved with Nova and related projects, and open-source this tool for the community to use. Thank you. Any questions?