Okay, let's start. Good morning everyone, and welcome to our session. Today we're going to show you a real use case in telecom: using Watcher to reduce the energy and resource cost of a data center. My name is Yimong Bao, I'm from China and I work for DT Corporation. This is Alex; he's from Russia and he works for Servionica.

Let's get started. First we will introduce the problem we are going to solve. Then Alex will briefly introduce Watcher, and then we will show you a real demo of using Watcher to solve this problem.

It is reported that for a data center, the energy bill is typically the second largest item in the budget. You can see from the left chart that the largest factor in monthly cost is the hardware cost; the second largest, shown in green and yellow, is related to power. However, it is predicted that the cost of electricity per year for a server will soon exceed the cost of the hardware itself.

The reason is that servers are not used efficiently. While servers are powered on, most of the time they are idle: only about 15% of the time do they carry real workloads, say virtual machines running on them. What's even worse, even when servers are idle, the electricity they consume can reach 60 to 90% of what they consume in the active working state. This is a serious problem for a data center.

This problem can be addressed at different levels: at the host level, with dynamic power management; at the virtualization level, with techniques such as consolidation, VM selection and load balancing; and also by consuming renewable energy resources.
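To make the scale of the waste concrete, here is a back-of-the-envelope calculation. The busy-time fraction (15%) and the idle-to-active power ratio (60%, the low end) follow the figures above; the absolute power draw of a server is a made-up illustrative number.

```python
# Rough illustration of why idle servers dominate the energy bill.
# Assumed, illustrative figures: servers carry real workloads only
# ~15% of the time, and an idle server still draws ~60% of its
# active power. The 400 W active draw is a hypothetical value.

ACTIVE_POWER_W = 400            # assumed active power draw of one server
IDLE_FRACTION_OF_ACTIVE = 0.6   # idle draw as a share of active draw
BUSY_TIME_FRACTION = 0.15       # share of time with real workloads

def average_power(active_w, idle_ratio, busy_fraction):
    """Average draw of a server that idles the rest of the time."""
    idle_w = active_w * idle_ratio
    return busy_fraction * active_w + (1 - busy_fraction) * idle_w

avg = average_power(ACTIVE_POWER_W, IDLE_FRACTION_OF_ACTIVE, BUSY_TIME_FRACTION)

# Share of total energy spent while the server does nothing useful:
idle_energy = (1 - BUSY_TIME_FRACTION) * ACTIVE_POWER_W * IDLE_FRACTION_OF_ACTIVE
idle_share = idle_energy / avg
print(f"average draw: {avg:.0f} W, idle share of energy: {idle_share:.0%}")
# prints: average draw: 264 W, idle share of energy: 77%
```

With these assumptions, roughly three quarters of the energy a server consumes is burned while it hosts no workload at all, which is the waste Watcher's power actions target.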
With OpenStack Watcher we provide a solution at the host level. When servers are left without VMs, Watcher can automatically power them off, or we can say turn them into sleep mode. And when the workload increases beyond a given threshold, Watcher can automatically power them back on. Next, Alex will introduce what Watcher is.

Hi, I'm Alex, and I'm the Watcher project team lead. Watcher is a resource optimization project in the OpenStack infrastructure, so we leverage services provided by other projects. Mainly we have actions like live migrations, cold migrations, resizing, power-on and power-off actions, and so on. To apply these actions we need some sort of strategy. The strategies range from consolidation strategies to load balancing strategies, thermal strategies, noisy-neighbor strategies, and so on. Our main goal is to reduce the total cost of ownership across the cluster.

We have the following architecture. There are three services: the Watcher API, the Watcher Decision Engine, and the Watcher Applier. We also have the Watcher Python client and a Watcher plugin for Horizon. We use metrics to generate the action plans of our strategies; we gather these metrics from different data sources such as Monasca, Gnocchi, and Ceilometer. We also have actions that connect through the APIs to projects like Neutron, Nova, Cinder, and so on.

So let's look at our workflow. Initially there is monitoring, when we gather metrics from our data sources. Then we decide: we run an analysis process to determine whether we need to stabilize our cloud somehow. And if we do, what should we do? We use one of our strategies. Currently we have six strategies, for example workload stabilization using a standard-deviation algorithm, or workload consolidation, where you gather the virtual machines across the cluster onto as few nodes as possible.
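The workload-stabilization idea mentioned here can be sketched roughly as follows. This is a simplified illustration, not the real Watcher strategy code; the function name, the load threshold, and the single-migration decision are all invented for the sketch.

```python
# Hedged sketch of a standard-deviation-based stabilization strategy:
# if host loads are too spread out, suggest moving load from the
# busiest host to the least busy one. Names and threshold are
# illustrative, not Watcher's actual implementation.

from statistics import pstdev

def stabilization_actions(loads, threshold=0.2):
    """`loads` maps host name -> CPU load in [0, 1].

    Returns a toy "action plan": a list of suggested actions."""
    if pstdev(loads.values()) <= threshold:
        return []  # cluster is balanced enough, nothing to do
    busiest = max(loads, key=loads.get)
    calmest = min(loads, key=loads.get)
    return [("live_migrate_vm", busiest, calmest)]

loads = {"compute-4": 0.9, "compute-7": 0.2, "compute-8": 0.4}
print(stabilization_actions(loads))
# prints: [('live_migrate_vm', 'compute-4', 'compute-7')]
```

The real Decision Engine works the same way in spirit: measure, test a stability criterion, and emit an action plan only when the criterion is violated.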
And then turning off some nodes, or putting them into the ACPI S3 state. Then, once we know that we need some stabilization actions, we launch an audit: we run our strategy and get an action plan as the result. An action plan is a set of actions that are suggested to the administrator, to be applied or not.

We have many contributors, including Intel, Servionica, b<>com, NEC, ZTE, Walmart, IBM, Tsinghua University, Orange, and AT&T. Watcher is used on NEC and Intel platforms, and it is also used at MIT and at Harvard University. So let's continue.

Okay, now I'm going to quickly show you a demo; let me just open it outside of the presentation. This is an OpenStack cluster with three compute nodes, and right now we have five virtual machines: four on compute-8 and one on compute-7. When the VM on compute-7 gets migrated or deleted, compute-7 goes idle, and Watcher generates an action to power it off to save energy. Later, when the workload increases and we need more compute hosts to support our tasks, Watcher will generate an optimization to help.

This is our dashboard, the Watcher dashboard, and this is the initial hypervisor list: you can see four VMs on compute-8 and one on compute-7. After we delete one VM, compute-7 goes idle, and we create an audit to generate optimization actions. We first define an audit template, then create an audit from that template. Here we create a continuous audit, which means it will generate optimization actions continuously. And you can see here, we have successfully generated a power-off action. Here is the next step: when we create more virtual machines, we might need another compute host to support the tasks, and Watcher will generate the action to power compute-7 back on automatically.
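The demo flow above can be re-enacted as a toy decision function: one "audit round" looks at the cluster state and suggests power actions. All names here are hypothetical simplifications; the real Decision Engine does this through its strategy plugins and data-source metrics, and a continuous audit simply re-runs the evaluation on an interval.

```python
# Toy re-enactment of the demo: power off hosts that run no VMs,
# and power a host back on when the remaining hosts are packed.
# The ">3 VMs means packed" rule is invented for illustration.

def evaluate(hosts):
    """One audit round for the demo scenario.

    `hosts` maps host name -> (powered_on, number_of_vms).
    Returns a toy action plan as (action, host) pairs."""
    plan = []
    for name, (powered_on, vm_count) in sorted(hosts.items()):
        if powered_on and vm_count == 0:
            plan.append(("power_off", name))
        elif not powered_on and any(
            n_vms > 3 for on, n_vms in hosts.values() if on
        ):
            # The running hosts are packed: bring this one back.
            plan.append(("power_on", name))
    return plan

# Round 1: the VM on compute-7 was deleted, so the node sits idle.
print(evaluate({"compute-8": (True, 4), "compute-7": (True, 0)}))
# prints: [('power_off', 'compute-7')]

# Round 2: compute-7 is off, but compute-8 is now packed with VMs.
print(evaluate({"compute-8": (True, 4), "compute-7": (False, 0)}))
# prints: [('power_on', 'compute-7')]
```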
Watcher is fully documented, we have a wiki page, and we have a very active IRC channel; you can reach us on the #openstack-watcher channel. We have several repositories, mainly mirrored on GitHub. We are under the Big Tent, so you are very welcome to become a contributor. If you have any questions, just come to our channel. Thank you very much, and thanks for your time.