Hi, my name is Hang Doan Chun, from Cloudwatt. Today I will present a new scheduling mechanism that we developed at Cloudwatt: policy-based scheduling. I will present the motivation, why we do it, the current OpenStack mechanism, and our proposal, illustrated by use cases and our vision for the future.

First, the motivation. As a public cloud provider, Cloudwatt wants a new scheduling mechanism that helps guarantee the client's contract and provides him with transparency, a better experience, more services, more offers, at an attractive price. It also has to provide the administrator with flexibility: flexibility in scheduling per client, per resource, per context. It has to give him the capability of realizing different objectives in different situations, and it should also provide a simplified yet efficient way to control and manage the system. So a smart placement engine should take into consideration the regulations, the client contracts, the cloud provider's operations, the environment and the infrastructure, and so forth.

With that in mind, we naturally look first at the OpenStack mechanism, the filter scheduler. What is the filter scheduler in Nova? It is a two-step provisioning mechanism. First, it filters out all the hosts that are not capable of serving the request. Second, it orders the remaining hosts based on some criterion, the remaining RAM in this case, and selects the best host.

This mechanism is flexible and works. However, the problem is that it is static. Once you put all the filter and weigher parameters into the Nova configuration, you cannot change them on the fly. You cannot modify the filters for one client or for one cluster of hosts, et cetera. So it is difficult to answer different objectives in different situations. It also has no consideration of the client context: you cannot provision resources based on the service class of a user, for example. And you cannot provide the admin with fine-grained scheduling. For example, you cannot ask Nova to apply a global load balancing policy across the whole infrastructure and a consolidation policy in one of the aggregates only. That is impossible for the Nova scheduler currently. So we need to develop a new mechanism.

Our solution is a policy-based scheduling mechanism. As a first step, we provide a solution within the Nova-centric architecture, so that we do not need to modify anything else; later, I will present our vision of meta-scheduling for the future. The idea of the policy-based scheduling mechanism is to separate the scheduling logic, how you want to provision the resources, from the execution domain, in which area and in which scope. The scheduling logic is expressed as rules of the form target-effect-condition: basically, "under this condition, apply this effect to this target." The effect can be load balancing, consolidation, and so on. The target can be one aggregate, one availability zone, one class of users. The rules are stored in a repository, and our policy-based scheduling engine consumes the rules and applies them to the client's request to select hosts. Finally, for backward compatibility, we reuse the filters and weighers in Nova, so that at the very least, if you do not put any rule in, it functions exactly like the filter scheduler. So you do not need to worry about the transition between the filter scheduler and our engine.
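To make this rule model concrete, here is a minimal sketch in Python. Everything in it, the PolicyRule class, the field names, the effect strings, and the local-over-global selection in effective_rule, is an illustrative assumption, not the actual Cloudwatt implementation.

```python
# Illustrative sketch only: names and structures are assumptions, not the
# actual Cloudwatt engine.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class PolicyRule:
    """A scheduling rule: under `condition`, apply `effect` to `target`."""
    target: dict                       # e.g. {"aggregate": "agg1"} or {"scope": "global"}
    effect: str                        # e.g. "load_balancing" or "consolidation"
    condition: Callable[[dict], bool]  # evaluated against the request context


def effective_rule(rules: list[PolicyRule], context: dict) -> Optional[PolicyRule]:
    """Pick the rule to apply, letting local rules override global ones.

    A rule scoped to a specific target (an aggregate, a zone, a class of
    users) takes precedence over a rule scoped to the whole infrastructure.
    """
    candidates = [r for r in rules if r.condition(context)]
    local = [r for r in candidates if r.target.get("scope") != "global"]
    return (local or candidates or [None])[0]
```

In the real engine, the effect would presumably map onto existing Nova filters and weighers; the sketch only shows how a rule could be represented and how a locally scoped rule can shadow a global one.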
For example, here we have three rules. The first rule says that this client, Frank, benefits from the service class Gold, meaning that all of his virtual machines will be hosted in the Gold area, where all the high-quality equipment is used. The second rule says that I want to apply load balancing to the whole infrastructure. And the third rule says that I want to apply consolidation in one aggregate only.

So how does it work? Here we have load balancing as the global rule and consolidation as the local rule for one of the aggregates. The principle is that the local rule overrides the global rule: inside the aggregate, the workload is regrouped onto a minimum number of hosts, and outside the aggregate, the workload is distributed equally between the hosts.

Now I will illustrate with some use cases. The first use case is enforcing a regulation constraint. Let's say that a French medical company wants to deploy its virtual machines in France only, to comply with the French authorities. Here I have four servers in two zones, the France zone and the Japan zone, the blue one and the red one. If I log in as this company and launch instances, let's say three instances, I select the image without selecting any availability zone and launch them. The system automatically detects the context of this client and deploys the virtual machines inside the France zone, meaning inside this blue one. Now, if one of the users of this company selects a wrong zone, say Japan instead of France, using the same image, and launches it, the system raises an error saying that you cannot create instances outside of your availability zone. And of course, you still have only three virtual machines running.

OK, so that was the first demo, on the first user. The second demo is on another user that we call the Gold client. This client benefits from the Gold service class. Basically, here I will change the color. The Gold service class contract says that his virtual machines should be deployed in the Gold area, where all the high-quality equipment is used. You may think that this is easy: with the filter scheduler, you can create a new flavor associated with the Gold area and then let the user select this flavor. The problem is that if the user changes his contract, he has to modify all of his applications to select another flavor to benefit from his contract. That is not good. What we are trying to do is provide him with transparency, meaning that he does not do, and does not need to do, anything special. He creates an instance, selecting the same flavor as the previous client and the same image, and launches it. Here I have the rule for this client saying that he benefits from the service class Gold, whereas the previous client has the constraint of the availability zone France. If I go back to the portal, we see that all three virtual machines are deployed in the Gold area.

So we have two clients who perform exactly the same steps but end up with two different placements. These two use cases show different behavior depending on the context of the client. The client does not know anything about what happens inside the cloud, but our system takes care of it, considering his context, his contract, and the regulation constraints.
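To make the two client contexts concrete, here is how the rules behind these demos might be written, reusing the illustrative PolicyRule sketch above. The tenant names, zone and aggregate labels, and effect strings are all invented for illustration.

```python
# Hypothetical rules for the two client demos; assumes the PolicyRule
# sketch above is in scope. All identifiers are invented.
medical_rule = PolicyRule(
    target={"availability_zone": "france"},
    effect="restrict_to_zone",     # deploy only inside the France zone
    condition=lambda ctx: ctx.get("tenant") == "french-medical-co",
)

gold_rule = PolicyRule(
    target={"aggregate": "gold"},
    effect="place_in_aggregate",   # deploy on the high-quality equipment
    condition=lambda ctx: ctx.get("service_class") == "gold",
)
```

With rules like these, two tenants issuing exactly the same boot request are matched against different conditions and therefore land in different places, which is the transparency the demos illustrate.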
Now I will present another demo, on the admin side. First, I will delete all of these virtual machines. Now I want to demonstrate the interplay between the load balancing and consolidation policies.

Let's say that a service provider signed a contract with a software vendor which charges a license fee based on the number of hosts on which an image is deployed. So if a client deploys four virtual machines using this image, the image we call proprietary-soft, what we try to achieve is to have all of these virtual machines deployed on one server only, so that we pay once for four virtual machines instead of four times. Basically, it is consolidation for this image and load balancing everywhere else.

So let's do it. The first step is to reserve an aggregate, the blue area here, for this image and make it consolidated. So first I add a new rule here: copy the rule, put it here, and submit it. The rule is very simple: it calls a filter. The filter itself already exists in Nova, so it is nothing new; the point is that now you can invoke it on the fly, without restarting the system. In the second step, I make this aggregate consolidated. Now the rules say that I want to apply consolidation in this aggregate, whereas we have load balancing everywhere else. So if I launch this image, it is deployed inside the blue area, on one of the hosts only. That is consolidation. Of course, if the user continues to create new virtual machines with this image, they will consume the whole resource of the first server and then move on to another server, et cetera. Outside of this zone, it is still load balancing: if the user launches another image, Ubuntu for example, the virtual machine can be placed in the Gold zone under the load balancing policy, meaning that such machines are distributed equally between the two remaining hosts. So that is the ability to apply different policies to different areas inside the cloud; a sketch of these two rules, in the same illustrative notation as before, closes this transcript.

OK, back to my presentation. We have already proposed this mechanism as a blueprint on Launchpad, to have the engine working inside the Nova scheduler. But our vision is not limited to Nova: we want an engine common to Nova, Cinder, and Neutron at the same time. And we would like to have a new service manager to control and manage the infrastructure and ensure the lifecycle of the clients' virtual machines; for example, it could initiate the migration of virtual machines or recycle an aggregate if needed. Finally, we imagine a policy management system that controls and manages the policies, so that it is not me who writes out the rules, but the system that translates the contracts and the business needs into the rules themselves. Our effort aligns very well with the effort in the Nova scheduler subgroup, especially the Gantt project. Basically, the Gantt project will create a new scheduling service based on the Nova scheduler, and we hope to be able to merge our engine into the Gantt project. Thanks for your attention. And if you want to know more about this, you can find our policy-based-scheduler blueprint on Launchpad.
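As a footnote to the admin demo above, here is how its two rules might look in the same illustrative notation, again assuming the hypothetical PolicyRule and effective_rule sketches from earlier; the image name, aggregate label, and effect strings are invented.

```python
# Hypothetical rules for the admin demo; assumes the PolicyRule and
# effective_rule sketches above are in scope.
consolidation_rule = PolicyRule(
    target={"aggregate": "blue"},
    effect="consolidation",        # pack these VMs onto as few hosts as possible
    condition=lambda ctx: ctx.get("image") == "proprietary-soft",
)

load_balancing_rule = PolicyRule(
    target={"scope": "global"},
    effect="load_balancing",       # spread VMs evenly across hosts
    condition=lambda ctx: True,    # applies to every request by default
)

rules = [consolidation_rule, load_balancing_rule]

# A proprietary-soft request matches the aggregate-scoped rule, which
# shadows the global one, so it is packed into the blue aggregate; an
# Ubuntu request matches only the global rule and is load-balanced.
assert effective_rule(rules, {"image": "proprietary-soft"}).effect == "consolidation"
assert effective_rule(rules, {"image": "ubuntu"}).effect == "load_balancing"
```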