Hello everyone. I'm happy to be here today talking about this topic: autoscaling Jenkins on an OpenStack cloud platform. If you don't know what Jenkins is: Jenkins is a great framework for continuous integration, delivery, and deployment, or even just automated tasks. You write a pipeline that describes your tasks, and Jenkins will do exactly what you wrote in the pipeline. In this presentation I won't talk about how to write a pipeline, but about how to build a Jenkins system, focused on how to autoscale Jenkins on an OpenStack cloud.

But first, let me introduce ourselves a little. I'm Chien, and this is my colleague Kong. We are both cloud engineers at Viettel Group, the biggest tech company in Vietnam.

Here is the agenda of my presentation. Before we get to the main part of this topic, we will take a look at the Jenkins system that we built. Then I will explain the Jenkins autoscaling mechanism that we use. And finally, I will show you some of the configuration we did in Jenkins and give you some tips.

Some time ago, my boss asked us to build a Jenkins system that had to be highly available and scalable on demand. And since our web projects use different programming languages, no language toolchain should have to be installed in Jenkins itself. Here is the Jenkins system we built that meets those requirements. We have two Jenkins masters running in active-passive mode that receive jobs and distribute them to the slaves attached to the master. A Keepalived process runs on each master, letting the two masters share a virtual IP. Keepalived also lets the two masters share an OpenStack volume, using a script that detects failover and attaches the volume, so the volume is always attached to a healthy node. So with Keepalived and the OpenStack volume, we have a system that meets the first requirement: high availability. The second requirement is scaling on demand. Why do we need this?
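To make the active-passive setup above concrete, here is a sketch of what the Keepalived configuration on each master could look like. This is not the speaker's actual config: the interface name, virtual IP, health-check URL, and the path of the volume-attach script are all illustrative assumptions.

```conf
# /etc/keepalived/keepalived.conf (sketch; all values are illustrative)
vrrp_script chk_jenkins {
    script "/usr/bin/curl -sf http://127.0.0.1:8080/login"  # is the local master healthy?
    interval 5
    fall 2
}

vrrp_instance JENKINS_VIP {
    state MASTER                 # BACKUP on the passive node
    interface eth0
    virtual_router_id 51
    priority 100                 # lower priority on the passive node
    virtual_ipaddress {
        10.0.0.100/24            # the shared virtual IP that users connect to
    }
    track_script {
        chk_jenkins
    }
    # Runs on failover, when this node takes over as master. This is the
    # hook where a script like the one the talk describes can detach the
    # OpenStack volume from the failed node and attach it here, e.g. via
    # "openstack server remove volume" / "openstack server add volume".
    notify_master "/etc/keepalived/attach-jenkins-volume.sh"
}
```

The `notify_master` hook is what ties the virtual IP failover and the volume failover together: the same VRRP transition that moves the IP also triggers the volume re-attachment.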
The answer is: with a fixed number of slaves in the Jenkins system, it is a waste of computing resources when user demand is low. In contrast, when demand is high and more and more new jobs arrive, it cannot quickly adapt to all of them. To handle this situation, a system engineer must manually prepare servers and add them to the Jenkins cluster as slaves to get more compute resources, or they can find a solution that scales automatically on its own, such as provisioning servers, joining them to the cluster, and revoking them when demand is low. So, to take advantage of our OpenStack cloud, we chose the OpenStack Cloud plugin for Jenkins, which interacts with the cloud to do everything I just described automatically.

And the last requirement is flexibility. You don't need to install the build tools of any programming language on the slaves. All you do, when you write your Jenkinsfile, is put your build into a container. That container has your build tools installed and runs your build.

Okay, that was an overview of the Jenkins cluster we built. Next, in the main part of this presentation, we'll talk about how to autoscale Jenkins in detail. First, let's see what Jenkins offers us. Jenkins has a periodic job that checks the load in the cluster. After evaluating some metrics, Jenkins will call the cloud plugin to provision a new virtual machine, and all the remaining work is left to the cloud plugin to enroll the host as a Jenkins slave. Jenkins also has a job that periodically checks each slave's state and lets the cloud plugin decide whether or not to revoke the slave. So that's the Jenkins side; now, the OpenStack Cloud plugin. This is a Jenkins extension plugin for OpenStack clouds. It allows the Jenkins master to interact with OpenStack to automatically spawn and destroy Jenkins slave virtual machines as the cluster load and workload change.
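The "put your build into a container" idea above can be sketched as a declarative pipeline. This is an illustrative example, not the speaker's actual Jenkinsfile; the image name and build command are placeholders for whatever toolchain a given project needs.

```groovy
// Sketch of a Jenkinsfile that needs no toolchain installed on the slave:
// the build runs inside a container that already ships Maven and a JDK.
pipeline {
    agent {
        docker { image 'maven:3.9-eclipse-temurin-17' }  // illustrative image
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'   // runs inside the container, not on the host
            }
        }
    }
}
```

The only thing the slave itself needs is a container runtime; every language-specific tool lives in the image, which is what keeps the slaves generic.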
Here we have an example of Jenkins and the OpenStack Cloud plugin working together to spawn two new nodes required by the Jenkins cluster. Next, I will show you the configuration we make in the OpenStack Cloud plugin. Here we have the cloud config. You must specify the OpenStack API endpoint the cloud plugin will talk to, and you must provide a credential to authenticate against OpenStack. The next thing we want to do is define a template for provisioning a new virtual machine, including a boot script that runs while the virtual machine is booting and makes the virtual machine join the Jenkins cluster as a slave.

Here we have the hardware. This defines the size of the slave; you can choose any available flavor that OpenStack offers. You must choose a network that can reach the master. Next is the user data. This is the most important configuration you must do: you upload the boot script that will run when the VM is booted. Next is the maximum number of instances. The cloud plugin will no longer spawn another slave once the number of slaves reaches this limit, however high the load. The next configuration is the minimum number of instances. The cloud plugin will ensure that the Jenkins cluster always has at least this number of slaves.

Here we have some slave options. We can specify how many executors each slave has. Retention time: if a slave stays idle for this long, the cloud plugin will call OpenStack to destroy its virtual machine. You can also specify what kind of connection type the slave and master will use.

Here we have the boot script. It runs when the virtual machine is booted. As you can see, the script generates a couple of files that define the service that runs the slave. The secret information is injected into this service via environment variables. And here we have the command that runs when the container is up; this command connects to the master as a slave. And finally, we have some tips for you when using the plugin.
To prevent "no space left on device" errors on nodes, you can use the preventive node monitoring feature. There you can choose the free-space threshold, for example 30 gigabytes. Nodes whose free disk falls below it will be automatically disabled or removed. You can also use a raw-format VM image for the slave VMs to speed up the VM creation step. So that's everything we wanted to cover in this presentation. Thank you for watching. If you have any questions or ideas, feel free to ping us. Thank you.
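The raw-image tip can look like the following with standard tools; the image names are placeholders. The usual reason this speeds up provisioning is that with a raw image, backends such as Ceph RBD can clone the disk copy-on-write at boot instead of downloading and converting a qcow2 image for every new VM.

```console
$ qemu-img convert -O raw jenkins-slave.qcow2 jenkins-slave.raw
$ openstack image create --disk-format raw --container-format bare \
    --file jenkins-slave.raw jenkins-slave-raw
```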