Good afternoon, everyone, and welcome to this session. I had planned this talk with Matt Young, but because of a personal emergency he couldn't make it. Matt and I are both from HP Helion; he leads the HP Helion cloud operations strategy and is also a co-founder of the service called Monasca. I am Kanagaraj Manickam, a Heat core reviewer, and I have experience in various data center areas such as server automation, storage automation, and so on. For the last three years I have been working with OpenStack, and I am also trying to bring up a new service called Namos, which, once OpenStack is deployed, does auto-discovery, configuration, plug-in/plug-out management, and so on. Today's session is Heat auto scaling with Monasca. The moment you say any cloud service or cloud application, it involves development, deployment, and maintaining its life cycle. Here, as the owner of the cloud application or service, you concentrate on the development, and the rest, the deployment and the life-cycle maintenance, we will take care of for you. How we are going to do that, we will see in the upcoming slides. We will look at why and what auto scaling is, how we can use auto scaling for a given cloud application, followed by a demo and a Q&A session. So what is auto scaling? Auto scaling is a cloud computing feature that enables the user community to scale their cloud service or applications, up or down, based on the situations that arise. The cloud service can be of any nature: infrastructure as a service, platform as a service, or even software as a service. In general, there are three kinds of auto scaling: scheduled, reactive, and predictive. Scheduled scaling is really helpful when you know up front that there is a time when the load is going to go up or down, like Thanksgiving day or a big sale day.
Reactive scaling is very useful in a situation where, say, you are launching a brand new cloud application. At that time you don't know what the usage will be, whether it's going to be huge or small; you cannot predict it. In reactive mode, something is monitoring your cloud application, and when things go up or down, it notifies the respective component to take care of the scale up or scale down. In those kinds of situations, reactive scaling is very useful. The third one is predictive: you have been running your application for quite a while and you have historical data, and based on that you can make a prediction. In that scenario, you can go for predictive auto scaling. In OpenStack, we have the reactive mode of auto scaling. There are also two directions of scaling: horizontal and vertical. Horizontal means you scale out the number of servers; vertical means you scale up the capabilities of a server, for example. Here we support horizontal scaling. So why do we need auto scaling? There is a reason behind it. When you look at the graph, the blue line shows, let's say, the fixed capacity you planned over time, and the red line shows the actual usage of your application. If you look at that, there is a gap, and that gap is wasted capacity. On the other hand, if you look at the third peak against the blue line, you planned only this much, but the actual usage went beyond that; we call it a spike, and during that time you cannot accommodate the user requests. It's an unexpected spike. These are the two major problems with any cloud application, and that's where auto scaling comes in and helps you. So this is the overview of what we are going to do today.
When you look here, there are five components involved. Performance going up or down is nothing but the monitoring of your application, and then there are certain rules: when performance goes up or down and a scaling situation arises, what to do, whether to scale up or scale down, you define by the rules. When the situation arises and you have gone through the set of rules and found out what to do, then the orchestration actually brings instances up or down. The orchestration talks to the compute; compute here means whatever is involved in your cloud application, which can also be storage or network. After that, any cloud application needs load balancing, and Neutron already provides Load Balancing as a Service. So these are the components involved in the cloud application: Monasca takes care of the monitoring; Heat takes care of defining the rules on what to do, whether you want to scale up or scale down, and also does the orchestration part; and Neutron provides the load balancing. Before getting into the real auto scaling, let us see what Monasca is. Monasca is a high-performance, scalable, fault-tolerant, and extensible monitoring service. It's growing a lot in our OpenStack community. Currently it provides metrics for services, infrastructure, platforms, and applications, and in the future they are planning events for logs, life cycle, usage, and so on. We saw earlier that auto scaling is mainly provided by Heat, so let us talk a little bit about Heat before getting into auto scaling. What is Heat? Heat is the OpenStack orchestrator. If you go to AWS, it has CloudFormation; similarly, in OpenStack, Heat provides orchestration. In addition, Heat provides features for auto scaling and for software deployment and configuration. How does it provide all these features?
It gives you the template. What is a template? Any cloud application can be declared in textual form by means of a template. Heat provides support for the HOT and CFN formats. We also provide something called a template translator: if you have a TOSCA model, it translates the TOSCA model into a HOT template. Using the template, you can provision. Once you model your application in the template and give it to Heat, how does Heat realize it, I mean provision it? Heat has many resource plug-ins for the respective services, like a Nova plug-in, a Cinder plug-in, Monasca plug-ins, and so on. Most of the services under the Big Tent are already integrated into Heat. Once you define your cloud application as a template, you can spawn as many cloud applications from it as you like using Heat. Whenever you spawn a new instance of your template, we call it a stack. When you are provisioning a stack and want to track its progress, Heat provides something called events. Using the events, you can track the progress: whether some resource is being created, is in progress, is completed, or has failed, you can track all of that through the events. Who is using Heat currently? In the community, Heat is heavily used by TripleO, Murano, and Magnum. And if you take any cloud application, underneath it Heat can play the role of doing the orchestration and, in some cases, the software deployment and cloud application deployment as well. In HP, we are using Heat for carrier-grade Helion. So now we know that auto scaling is given as a feature in Heat; let us see how it functions. In Heat, all the features are made available as resources; that way, it helps you customize however you want. There are certain resource plug-ins made for auto scaling. The first is the auto scaling group.
The auto scaling group helps you to group the resources which need auto scaling. Each resource in Heat is tagged with a namespace: the first part tells you who the cloud provider is, the second part is which service it belongs to, and the third part is the actual resource plug-in. So here we have OS, then Heat, then AutoScalingGroup. Any resource plug-in has certain properties and outputs. Properties are nothing but the inputs to the resource plug-in, and outputs are what you can get from that resource once it is realized. For auto scaling, these are the mandatory properties. The first is the resource, which is nothing but the scaling element. The resource can be a given instance or it can be a nested template. What does that mean? When you want to scale, you can scale a given instance, say a Nova instance, or you may want to scale a Nova instance together with a given load balancer, so that when you add a new element, that element is also added to the load balancing. You can group both of them together as the resource. I will show you during the demo how we use a nested template for auto scaling. Then there is the desired capacity. The first time you deploy your cloud application, if you say the desired capacity is two and the resource is just a Nova instance, it will create two instances initially; that is the desired capacity. You can also define the limits: the maximum it can reach during scale up and the minimum it can reach during scale down. Once you define this in the template and feed it to Heat, Heat creates the auto scaling group with these things in place. Once it is provisioned, you can get the current size of the scaling group and, for the instances in it, their outputs, using the group's two output attributes. The next one is the scaling policy.
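As a minimal sketch of the group definition just described (the flavor, image, and size values here are my own illustrative choices, not from the slides), a HOT template fragment might look like:

```yaml
heat_template_version: 2015-04-30

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      desired_capacity: 2      # instances created on first deployment
      min_size: 1              # floor during scale down
      max_size: 5              # ceiling during scale up
      resource:
        # the scaling element: a plain server here, or a nested
        # template bundling a server with its load balancer pool member
        type: OS::Nova::Server
        properties:
          flavor: m1.small
          image: cirros

outputs:
  group_size:
    # one of the output attributes mentioned above
    value: {get_attr: [asg, current_size]}
```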
The scaling policy captures what to do once a scaling situation arises: how many elements you want to add to, or remove from, the current group. The first thing it does is point to the auto scaling group it belongs to. Then it has the adjustment type. There are three kinds of adjustment: you can adjust by a number, like add two or remove two; or you can say that when the scaling situation occurs, the group should always become exactly this many; or you can adjust by a percentage. Those are the three variants you can use for the scaling adjustment. Then there is something called the cooldown time. When a scaling situation arises, Heat is notified and is in the process of scaling up or down, which takes a certain time window. During that window, if another scaling situation occurs, Heat will just ignore it. You can set that time window in the cooldown property. When you create the scaling policy, it gives you two kinds of URLs: an alarm URL and a signal URL. What is the use of them? We will see in the next slide. When you use the alarm URL, it goes through the CFN API in Heat; when you use the signal URL, it goes through the Heat API. So the scaling policy has a dependency on the auto scaling group. The next one is the alarm notification. The auto scaling group and the scaling policy are provided by Heat; for alarm notifications, Heat already has support for Ceilometer, which provides alarming. In this demo, we will use Monasca, which got integrated into Heat during the Liberty cycle. This alarm notification resource is part of the Monasca integration. It has two things: one is the address, the other is the type.
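A sketch of such a scaling policy (assuming an auto scaling group named `asg` defined elsewhere in the same template):

```yaml
resources:
  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: {get_resource: asg}  # group this policy acts on
      adjustment_type: change_in_capacity         # or exact_capacity /
                                                  # percent_change_in_capacity
      scaling_adjustment: 1                       # add one element per signal
      cooldown: 60                                # seconds to ignore further signals

outputs:
  scale_up_webhook:
    # pre-signed URL through the CFN API; use signal_url for the Heat API instead
    value: {get_attr: [scale_up_policy, alarm_url]}
```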
Monoska notification supports with a webhook calling or email notification. What it means? Say in Monoska, Monoska monitors your scale group. So something has happened. Once it called, it will create an alarm. When the alarm is generated in the Monoska, it will call whatever things you specified here in the address. So it can be of two type. Either it can be a webhook or the email. So in auto scaling, we are using the type of webhook. We saw that in the scaling policy, you can generate the alarm URL or it's a signal URL. So that URL will be put as part of the address. The next one is alarm definition. So you want to define on what situation you want to create the scale up. Whether my CPUs at least is in beyond a certain limit or memory utilization beyond certain limit. So those kind of a thing you can define using the alarm definitions. This is also provided by the Monoska. So this is the resource plugin. In the expression, you need to define those rules on what situations you want to generate the alarm. And there is something called match by. So for supporting auto scaling with the Monoska, we should set that to scale group. In the next slide, we will see what is that scale group. Once you define all these things, in the alarm actions, you are going to tell that when this alarm definition, whatever you define the expression is satisfied, you call these alarm actions defined here. So that alarm action is nothing but the notification which you created in the previous slide. So what is that scale group? So in the first step, we saw that there is something called scale group in which you are going to pack those elements which is going to be scaled up or scaled down. Say in a scale group, there are 10 instances and the 10 just got created. Now Monoska is monitoring them. In Monoska, when those instance are monitored, it will create a matrix out of it. In each matrix, it will embed that matrix belongs to whom. 
I mean, in our case, those metrics belong to those 10 VMs, so all those metrics will be tagged with the scale group. So when you are defining the expression, you say in the expression itself that scale_group equals some unique piece of information; here I used the stack ID. The same scale group should be tagged as metadata on the instances, so that when the instances are created and the Monasca agent reads the metrics for those instances, it reads that metadata as well and pushes the measurements with the scale group in place. Below I have given the wiki where I will capture all the details related to the Monasca-Heat integration for auto scaling. Now we know how auto scaling is implemented in Heat with Monasca in place, so let us see the workflow of what happens. First, you create a template based on your cloud application. Once you create your template, you feed it to Heat, and Heat starts to create your cloud application. That means it creates the auto scaling group, whatever you defined, and it goes and creates all the alarm definitions and the notifications in Monasca. At this stage, Monasca starts to monitor your scaling group. So by now, Heat has created the scaling group and has set up Monasca to monitor that scaling group with all the alarm definitions, say for when the CPU goes up or memory utilization goes down or up; that is defined in the alarm definitions, and Monasca has started to monitor. At this moment, auto scaling is in place and Monasca is monitoring continuously. Now consider the case where a scaling situation arises, so something has happened.
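The tagging works both ways, so the server side of the sketch needs the same scale group value in its Nova metadata. In the demo this is the parent stack's ID, passed down into the nested template; the parameter name below is hypothetical:

```yaml
resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor: {get_param: flavor}
      image: {get_param: image}
      # the Monasca agent reads this metadata when it collects the VM's
      # metrics and attaches it as the scale_group dimension on every
      # measurement, so the alarm definition's expression can match it
      metadata:
        scale_group: {get_param: scale_group_id}
```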
In this situation, say the CPU has gone beyond the defined limit. The Monasca agent sitting on the compute node is monitoring those instances, so it identifies that the CPU has gone up. It creates a metric with a measurement, including the scale group taken from the VM metadata, captures the CPU utilization value, and pushes that to Monasca. Once Monasca gets the measurements, it evaluates whether the measurements of those metrics meet the alarm definition expression. If it finds that they have gone beyond the defined threshold, it generates an alarm. Once the alarm is generated in Monasca, it triggers the corresponding notification embedded in the alarm definition. When that notification is called, it is nothing but the webhook, and it calls Heat. When Heat gets that webhook call, Heat now knows: this is my scaling group, and I need to scale up or scale down based on how Monasca notified me. Let us consider that it's a scale-up situation, so Heat starts to create one more instance. Once the additional instance is added, and assuming CPU utilization is the thing being measured, where earlier there were two VMs, now there are three, so ultimately the CPU utilization across all three comes down. And this runs forever. The moment you say cloud application, there is another part. So far we know how auto scaling is done with the monitoring in place; the other part is the load balancer. The load balancer is completely provided by the load balancing service in Neutron, and Heat supports defining it in the template. There are three resource definitions for setting up the load balancer: you create the load balancer first, which points to the load balancer pool, which in turn points to the health monitor.
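A sketch of those three Neutron LBaaS resources (the ports, protocol, and `subnet_id` parameter are illustrative assumptions):

```yaml
resources:
  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 5
      max_retries: 3
      timeout: 5

  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      lb_method: ROUND_ROBIN            # the mode used later in the demo
      subnet_id: {get_param: subnet_id}
      monitors: [{get_resource: monitor}]
      vip:
        protocol_port: 80

  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      protocol_port: 80
      pool_id: {get_resource: pool}
```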
Once you create those in the template, the next thing is the auto scale resource. Initially we discussed that in the auto scaling group, the resource can be a single instance, for example, or it can be a set of resources. In our case, we have the server as one thing and the pool member as another, and the pool member points to the server. You capture both of these as one template and use that template as a nested one in your cloud application. The pool member then points to the pool, so whenever the auto scale resource is expanded or shrunk, it is correspondingly added to or removed from the pool. Finally, you point to that auto scale resource as part of the auto scaling group. In this case, during scale up, one server is added and the new server is added to the Neutron pool; during scale down, one of the servers is removed from the scaling group and correspondingly removed from the pool as well. Let us see the demo. I had made a demo setup, but it's not reachable from here, I'm sorry about that. However, I captured screenshots of the demo setup I had, so I will take you through those screenshots. This is the scaling group we defined in the template. Here I set extreme values for the minimum size and maximum size, and here is the desired capacity. In the resource, I have pointed to a nested template, the load balancer server yaml. In the load balancer slide we saw that the auto scale resource has two things: one is the OS::Nova::Server and one is the load balancer pool member. Those are captured in the server yaml, and for them we feed all the required inputs like flavor, image, key name, and network. Finally, this pool ID: the pool ID connects this load-balanced server with the given pool. And this metadata is what we just saw: the scale group.
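A hypothetical reconstruction of that nested template (I am calling it `lb_server.yaml`; the real file name and parameters in the demo may differ):

```yaml
# lb_server.yaml -- server plus pool member, scaled as one unit
parameters:
  flavor: {type: string}
  image: {type: string}
  key_name: {type: string}
  network: {type: string}
  pool_id: {type: string}
  metadata: {type: json}    # carries the scale_group tag from the parent

resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor: {get_param: flavor}
      image: {get_param: image}
      key_name: {get_param: key_name}
      networks: [{network: {get_param: network}}]
      metadata: {get_param: metadata}

  member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_param: pool_id}
      address: {get_attr: [server, first_address]}
      protocol_port: 80
```

The parent template then uses this file as the `resource` of its auto scaling group, so each scale-up creates a server and registers it in the pool in one step.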
Whenever a resource element is created, each element gets the scale group in its metadata; here I set the scale group to the stack ID, which is unique. Then I have defined two policies: a scale-up policy and a scale-down policy. In the scale-up policy I am saying: adjust the scale by one, so it increments the size by one; while scaling down, it decrements by one. And I am setting the cooldown to 60 seconds. Based on your cloud application, you can configure the cooldown as well as the scaling adjustment. Then we have two notifications: one for scale up and one for scale down. The up notification points to the scale-up policy; the down notification points to the scale-down policy. And there are two alarm definitions: one for the scale-up situation and another for the scale-down situation. For the scale-up situation, we defined the alarm definition on the average of the VM CPU utilization for the given scale group ID: when the average goes beyond 90, and how many times? Only one time; even if it reaches that just once, generate the alarm and call the notification. And we have the alarm actions here: in this CPU-alarm-high alarm definition, the alarm action points to the up notification, and the CPU-alarm-low alarm definition points to the down notification. You also need to define what the lower limit is; for demo purposes I put it as less than zero so that it will never actually scale down. But unfortunately I couldn't access the demo setup. This is your LBaaS setup: you define the monitor, the pool, and the load balancer. So this is the sample template which we are going to use for the demo. Here I created one stack, and that stack has all the resources we defined in the template. All the resources are up now: the stack is created and all the resources are up.
That means, as we saw in the first step of the workflow slide, Monasca has started to monitor the scaling group; at this point, Monasca is monitoring the scaling group ASG. I captured a pictorial representation of the template. On the left-hand side you have the LBaaS setup, and on the right-hand side you have all the required Monasca things and the auto scaling group, scaling policies, alarm definitions, et cetera. At this moment, the scaling group has two instances, and those instances are in the load balancer pool. The 200.58 address is your load balancer VIP, so when you run a curl command against it, the request goes to either 200.6 or 200.59; I set up the load balancer in round robin mode. At this moment, let us see what has been created in the different places. We saw that the stack is created with all the resources up and running. Currently Nova has two instances: we defined the desired capacity as two in the scaling group, so it has two instances and they are active. On the load balancer side, there is one load balancer pointing to 200.58, and there are two members attached to it, 200.59 and 200.6. When you curl the load balancer IP, 200.58, one time it goes to 200.6, the next time to 200.59, then again to 200.6, because it's configured in round robin fashion. That is our initial setup. On the Monasca side, we have set up two alarm definitions: one with CPU utilization greater than 90, the other less than zero. When you show the alarm definition, you can see it was created with scale_group as the match-by, and it points to the alarm actions with this ID. If you look at that ID, it is nothing but the notification we created, which in turn points to the scaling policy.
So that is the initial setup. Now we are going to put CPU load on one of the servers: on the server at 200.6 I am going to increase the CPU load. Also, as part of this demo capture, I increased the load on the 200.59 server as well. When the load goes up, the Monasca agent starts to capture those CPU metrics; we call them measurements in Monasca, and it pushes those measurements to the Monasca API server. From the Monasca API we can then query those measurements. If you look here at the measurement list for this metric, under dimensions you can see the scale group we created, and the value has already gone beyond 90. When this situation happens, Monasca creates the alarm and calls the notification. Earlier, Monasca had created the alarm in the OK state; when the situation occurred, because the CPU had increased, the same alarm changed state from OK to ALARM, and the CPU utilization average became 100.5. So how do we know that on the Heat side it actually called that webhook? We created something called the web server scale-up policy, and the scaling policy always gives us that webhook, right? So we want to confirm that Monasca called that webhook. Earlier, the policy resource was in the CREATE_COMPLETE state; the moment Monasca calls Heat through the webhook, this resource changes to the SIGNAL_COMPLETE state. That confirms that the webhook notification reached Heat. Once Heat received that webhook signal, it started to scale up or scale down; in our case it's a scale up, so it has now created one more server. If you look at the IPs, the new IP created was 200.62, so now we have three servers up and running.
When the new server got added, it automatically went into the load balancer pool, so the pool now shows that new member here. And when you hit the load balancer, it starts to round robin across those three servers: earlier it was 200.6 and 200.59, now it has added one more, 200.62. That ends the demo part. Any questions? One by one, please. Okay, the scale group concept is a feature in Heat. Yes, right; it also attaches the scale group ID to the metric, as part of the metric, and when we create the alarm definition, we attach that same scale group ID. So the agent is smart enough to send a metric saying, this is the CPU utilization, and also, this is the scale group ID? Exactly, you're right. Can Ceilometer also do the same thing? Ceilometer also functions in a similar manner; the metadata will have a different name, but it functions similarly. So you should be writing a URL there? Yes. What kind of URL is it? Are you posting some JSON to it, or is it a Heat API, or what? Okay, in Heat there are two kinds of APIs: one is the CFN API and the other is the Heat API. The CFN API runs on port 8000 and the Heat API runs on port 8004. Once the scaling policy is created, if you get the alarm URL or the signal URL, it gives you that URL pointing to the CFN API or the Heat API respectively. That's the URL part; now the authentication part. If you choose the signal URL, it goes through the Heat API, which means authentication happens through Heat, I mean normal Keystone authentication. If you choose the alarm URL, it goes to the CFN API, where pre-authentication is already set up, so whoever calls that webhook is automatically authorized.
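For illustration only (the host, ARN path, and signature values below are made up, not from the demo), a pre-signed alarm URL served by the CFN API on port 8000 has roughly this shape:

```
http://heat-host:8000/v1/signal/arn%3Aopenstack%3Aheat%3A...%2Fresources%2F<policy>
    ?Timestamp=2015-10-29T12%3A00%3A00Z
    &SignatureMethod=HmacSHA256
    &AWSAccessKeyId=<ec2-key-id>
    &SignatureVersion=2
    &Signature=<hmac-signature>
```

Because the signature is embedded in the URL itself, the caller needs no separate Keystone token; that is the pre-authentication mentioned above.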
Let me just go back to the slide. If you look here, this is nothing but the alarm URL: the port is 8000, and it embeds the scaling policy and the group, as well as a unique signature. Heat actually provisions in Monasca, right? Yeah, this one is with Monasca. As part of step three, Heat provisions in Monasca a notification with that particular webhook: Heat generates the webhook and gives it to Monasca as part of the notification creation. Yeah, sure. How does Monasca gather the metrics? Okay, in Monasca, for capturing the VM metrics for example, they place one Monasca agent on each compute node; for KVM, you place it on the KVM node. It polls every minute. When it polls, it collects the metrics from the KVM side and tries to associate those metrics with the servers created in Nova, via Nova. Say two servers were created via Nova; for those two servers it collects the metrics from KVM and keeps them in its cache, and every minute it does that polling and simultaneously forwards those measurements to the Monasca API. When it forwards them, it forwards those metrics, the measurements, with the scale group ID in place. As part of the implementation, we made changes in Monasca as well to make that enhancement. And the polling and the pushing intervals are configurable, actually. Is there any reason you are creating your own agent and not using the Ceilometer agent? Okay, today in the community we have both Ceilometer and Monasca for monitoring. This talk is all about using Monasca, but the same thing is there in Ceilometer as well. Let me rephrase my question: I assume this solution also works if the customer has a Ceilometer agent that provides similar magic, am I correct? Okay, it should work.
When you are creating the template, instead of creating a Monasca notification and alarm definition, you would create a Ceilometer alarm; that's the only difference. Okay, one by one. I think Monasca has that capability: if you want to add a new metric, you can go and add it; that's available. And I will share a wiki page where I capture this; Monasca already provides a set of metrics, and I will add that link to the wiki page. Yes, please. There's a latency in collecting the metrics; I think by default it's 15 seconds, and the pushing interval to Monasca is one minute. So how real-time is this? I mean, when you increase the load, will it scale up after one minute or after 15 seconds? Okay, that's a good question. The polling interval you can configure on the agent side; as you said, in your case it's 15 seconds. Here, I think by default it's 60 seconds, so it polls every minute. I mean, the agent collects the metrics every 15 seconds, but you see the result one minute later in Monasca. That's what I am asking: does this depend on the agent interval or on the pushing interval? Okay, it depends on the pushing interval. So everything happens after one minute? Exactly. Monasca first has to identify that something has happened and generate the alarm; only once the alarm is generated will the auto scaling be signaled. Last question. Normally the alarm occurs repeatedly: every time the threshold is exceeded, the alarm is triggered. Right. So every time it calls the webhook URL, is it scaling every time, or does it happen once? How do you configure it? Okay, the first thing is that you can define, say, that my threshold should be reached 10 times before creating the alarm; you can set that in the alarm definition expression.
There you can control it. And assuming you control it there and it still keeps sending the alarm, next it goes to the scaling group, and the scaling group has the limits: the maximum and the minimum. So it will expand or shrink only within those limits. Does that answer it? Thanks. Yes, please. Can we guarantee any timing from the agent? Let's say the compute node's CPU utilization is going high; that is a metric, but the agent itself is a process on the compute node, so it may not even get the CPU cycles to figure out that CPU utilization is high. In real time, can we even guarantee the 15 seconds, or guarantee any notification from the agent back to the Monasca controller, given that the agent is itself a process on that compute node? Right, it's completely dependent on the polling cycle, nothing else. But you are yourself a process, so you also need CPU to run on that compute node. No, it's not monitoring the compute node's CPU; it's monitoring the instances running on that KVM host. For the infrastructure itself? Okay, your question is very valid, and we are doing that in HP Helion: once you deploy, you monitor the whole OpenStack environment through Monasca, so in that way we monitor the compute node as well. You're right, it's up to your cloud application. I'm sure there is no theoretical limit, but we have not done any benchmarking of how many groups it can go to; it's up to your cloud application. Is there a way to do a manual trigger of the alarm, a kind of user-defined trigger, like I want my application to request more VMs all of a sudden? Manual in the sense that, other than through Monasca, can we trigger the auto scaling manually? Yes. Say a video game server must maintain three VMs.
And the application itself monitors how many VMs are active within it; when there are fewer than three VMs, it requests one more. So instead of depending on the performance of the VMs, it basically depends on its own metrics. Okay, in our case I used Monasca, and most cloud applications will depend on the OpenStack infrastructure, but for one that wants to utilize the auto scaling itself, it's all about who is going to call that webhook; in your case, your application is going to call it. Yeah. The webhook gets generated for each scaling policy you define, and it will be unique. The moment you are going to let users trigger it, remember there are two kinds of URLs: one is the alarm URL, the other is the signal URL. The signal URL goes through the Heat API, and the Heat API has authentication and authorization behind it, so users should have the right authentication in order to send that signal. Is the agent still written in Java? Is that still the case for this release, or is it in Python now? I think... sorry. Okay, we'll take the question offline. I'll just stop there. I'll come to you. Okay.