Hi, today's presentation is about auto-scaling in OpenStack without Telemetry. I'm Dr. Wu Thuan from Viettel Group. The other speaker, Gien Wen Thuan, is not here today, so he cannot join me. We will have four parts today. First, what auto-scaling is and what it can do. Second, OpenStack auto-scaling with Heat and Telemetry. Third, the problems with Telemetry. And finally, I will introduce our architecture with Faythe and Prometheus.

So, what is auto-scaling? Auto-scaling creates target-tracking scaling policies for the resources in your scaling plan. These scaling policies adjust resource capacity in response to live changes in resource utilization. In most architectures, auto-scaling is simply the combination of three steps: metering, alarming, and scaling.

Now, auto-scaling in OpenStack with Heat and Telemetry. In OpenStack, using predefined rules that consider factors such as CPU or memory usage, Heat will add or remove instances automatically when they are needed. In this architecture, the core component providing automatic scaling is the orchestration service, Heat. The template defines rules that evaluate system load based on Telemetry data to decide whether more instances need to be added to the stack. Telemetry performs performance monitoring of your OpenStack environment, collecting data on CPU, storage, and memory utilization for instances and physical hosts.

However, there are some problems with Telemetry, especially in our Rocky OpenStack environment. First, the Telemetry projects lack contributors, which I think is the most important factor for an open-source project. As of the Queens release, Telemetry has lost several developers. The development of Panko and Aodh has stopped. Gnocchi was moved out of OpenStack in June 2017, and the integration between Gnocchi and Aodh is really not good.
Gnocchi is unmaintained, so Telemetry is only capable of collecting OpenStack-related metrics from instances. What about application data? Customizing Telemetry is not easy. What about custom rules and custom metrics? In order to customize rules and add new metrics, three different projects need to be rebuilt and reinstalled: a Ceilometer plugin to collect the new metrics, Gnocchi to store them, and Aodh to be able to evaluate them. One more thing: RabbitMQ was under heavy load due to the Ceilometer workload. If Ceilometer gets stuck, its queues overflow. And the Aodh listener doesn't support high availability.

Next, I'm going to talk about our approach. We use the Prometheus ecosystem to monitor the whole infrastructure as well as the applications. So why do we use Prometheus? Prometheus is a customizable toolkit and delivers metrics without creating lag on performance. Prometheus supports a wide range of exporters, and it also has a really nice query language, PromQL.

As I showed you before, auto-scaling is a combination of three steps: metering, alarming, and scaling. Mapping that to OpenStack with Heat and Telemetry: Ceilometer and Gnocchi are in charge of metering, Aodh of alarming, and Heat of scaling. So we use the Prometheus stack to collect and store metrics, and Heat for scaling. We need a new component to do what Aodh did, based on metrics from Prometheus.

Here we introduce Faythe, an open-source software. We view it as a bridge between any cloud platform, from OpenStack to Amazon Web Services, and any monitoring system. So the old architecture with Heat and Telemetry moves to the new architecture with Prometheus, exporters, and Faythe.

So how does it work? Prometheus collects and stores metrics from a set of Prometheus exporters. In OpenStack, we use the OpenStack service-discovery config to automatically add or remove targets by querying OpenStack. We add a label and query the OpenStack instance list using instance metadata.
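The discovery step just described can be sketched as a Prometheus scrape configuration using the built-in `openstack_sd_configs` mechanism. The Keystone endpoint, credentials, region, port, and the `stack` metadata key below are placeholder assumptions for illustration, not values from our deployment:

```yaml
scrape_configs:
  - job_name: openstack-instances
    openstack_sd_configs:
      - role: instance                                # discover Nova instances
        region: RegionOne                             # placeholder region
        identity_endpoint: http://controller:5000/v3  # placeholder Keystone URL
        username: prometheus
        password: secret
        domain_name: Default
        project_name: demo
        port: 9100                                    # scrape an exporter on each instance
    relabel_configs:
      # Only keep instances that are actually running.
      - source_labels: [__meta_openstack_instance_status]
        action: keep
        regex: ACTIVE
      # Instance metadata is exposed as __meta_openstack_tag_<key>;
      # here an assumed "stack" metadata key becomes a label on every metric.
      - source_labels: [__meta_openstack_tag_stack]
        target_label: stack
```

With this, instances appear and disappear as scrape targets automatically as the stack scales, and the metadata-derived label lets PromQL expressions select metrics per stack.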
Basically, we add metadata to instances, and based on that metadata, we add a new label to the metrics. Then Faythe periodically checks the scaler rules, which are user-defined PromQL expressions, by querying Prometheus to evaluate whether the query is satisfied or not. If a rule is satisfied, Faythe triggers the scale action, and from there Heat takes care of the rest.

Here is an example from our system. When the system is at peak hours and has really high traffic, we need to scale out, adding instances to serve end-user requests. When the system is off-peak and the traffic is really low, we need to scale in, removing instances to free resources for other applications.

So why do we use Faythe and Prometheus? As I said before, Prometheus provides a very flexible query language, and it supports a wide range of exporters, so many metrics can be used or combined with each other. We can fully control how the scale action is triggered: the HTTP method, the number of retries, and the delay between them. Faythe also has its own clustering mechanism, and it has a web UI and an HTTP API.

So here is our web UI. You can specify the query you want to input, the duration, the interval, a description, and so on. In this UI, you choose a cloud from the cloud input, and the Prometheus query goes here. The interval is the delay between two consecutive Prometheus checks. The duration is the total amount of time from when the query first matches to when the action is triggered. The cooldown time is the minimum time between two HTTP actions.

Okay, let's go to the demo. Here I have one instance with a CentOS 7.7 image. Here I have a stack with an m1.medium flavor and a CentOS 7.7 image. I have one dummy PromQL query which counts the number of Faythe instances. Okay, let's check if the number of Faythe instances equals 12, with an interval of 10 seconds, a 1-minute duration, and a 1.5-minute cooldown. For the URL, I copy the scale-out URL of the stack. That's it for the scale-out. So let's create another one for the scale-in action.
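Before creating the scale-in counterpart, the scale-out scaler just defined amounts to roughly the following. The field names paraphrase the web-UI labels rather than Faythe's exact API schema, and the metric name `faythe_instance_count` is made up for this dummy example:

```yaml
# Paraphrased scaler definition; field names follow the web-UI labels,
# not necessarily Faythe's actual API schema.
cloud: openstack-demo                # cloud selected in the cloud input (assumed name)
query: faythe_instance_count == 12   # dummy PromQL condition; metric name is hypothetical
interval: 10s                        # re-run the PromQL check every 10 seconds
duration: 1m                         # the condition must hold for a full minute before acting
cooldown: 1.5m                       # wait at least 1.5 minutes between two actions
action:
  type: http
  url: <scale-out webhook URL copied from the Heat stack>
```

The scale-in scaler is the same shape, with the condition inverted and the URL pointing at the stack's scale-in webhook.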
Okay, repeat the steps and copy the scale-in URL. Okay, let's check that we have two scalers. Okay, let's go. Let's wait for a bit. It may take one minute for Prometheus to scrape metrics from the Prometheus exporters and another minute for the scalers, so that may be up to two minutes. Okay, another instance. Another instance with a CentOS 7.7 image and an m1.medium flavor.

Okay, now delete the scale-out scaler. Okay, remove one Faythe instance, so the number of instances is 11 now, which is less than 12. And wait for another one or two minutes. So one instance is being deleted. It works as expected.

Okay, so thank you for your attention. If you have any questions, please ask me now, or feel free to contact us via the email on the screen. Thank you.