All right, hey, welcome, everybody. My name's Jason Grimm. I'm a consulting systems engineer with Cisco, and you are at the 201 walkthrough: Auto-Scaling OpenStack Natively with Heat, Ceilometer, and LBaaS. So join me today, along with Sharmin Choksey and Shashank, my two partners; they'll introduce themselves when they come up for their sections. Essentially, we're going to have a quick introduction, go over how the workshop is organized, go through the environment setup, the background and use cases, the services enabling auto scaling, and then move on to the hands-on portion of the workshop. About the workshop: I think it's important to point out that we all work for or with Cisco; however, this is not a marketing workshop. Everything we're using here today is free or open source software. That's why we put "natively" in the title: we're not using any external modules or plugins, or touting one technology over another. It's all native OpenStack bits, using DevStack as well. We're going to ask you to join the WebEx so that we can have chat and record the session, and if you have questions, raise your hand on WebEx or physically, and one of us will get over there and help you out. So here's probably the most important part. There are three of us, and there are actually 561 of you registered, though not a fraction of those fit in this room. It's very important that you collaborate with each other, because it's good to learn that way, and there are going to be issues: not enough bandwidth, or the image didn't get copied over right. Not everyone is going to end up with a working environment, but someone in your vicinity will have one, or will have more or less experience with VirtualBox than you do. So please reach out and work with each other; it makes things more interesting anyway. On content and schedule: the first 30 minutes are nominally for theory, which is good to do, but honestly that first 30 minutes is dedicated to getting the environment up and going so that we can do the lab exercises afterwards. The bulk of the middle is the hands-on workshop, and at the end, Q&A, open discussion, and close. Honestly, I'd like a full 60 minutes in the middle, so we can forgo the end and get through this first 30 minutes as quickly as we can. To the point of the build environment: when some of you came in, you saw the Vagrant screen already running. The process you were seeing there is not the process for the workshop; it's the process we used to build the workshop environment. The environment on the DVDs and the USB keys going around is a baked, completed DevStack environment that's configured and ready to do Heat, Ceilometer, auto scaling, and LBaaS on top of it. As for the Vagrant stuff that was running, we put all the documentation out there so you can use it to build and test your own environments. But what you should be doing right now, our biggest priority in the next 30 minutes, is getting a VirtualBox image running on your machine that we can do LBaaS and the rest on top of. So just by a show of hands, where are folks in that process? Do you have VirtualBox installed? Do you have the image copied, or are you doing Vagrant?
How many people have an actual environment up and running right now on their machine? OK, one. So if you're using the keys, pass them along. The DVDs, I don't know where they are, actually. There's one; oh, they stopped on the front row. Let's keep moving them to the back. A couple of call-outs here. How many people are familiar with using VirtualBox? OK, good, great. The image, when I tested it, worked great for me, because the image came off my box: I took it off, put it back on, and everything was fine. What we realized was that my networks were already set up. When you put the image on a new machine, you have to give it the two host-only networks, and you have to give those networks the right IP addresses, or the image comes up and just stares off into space. So when you get the image on and start booting it, if you're not familiar with changing the host-only network IPs, give a yell and one of us will come around. Beyond that, it's just mount the media, grab the box image, and boot the machine, and then we're ready for the fun stuff after that. That's what we have to get done in the next 30 minutes. This is what's on the USB keys; not all of it fit on the DVD, actually. From the keys especially, you're going to want to grab, at a minimum, the VBox VM environment. If you don't have VirtualBox installed, you'll need that too, so grab your flavor du jour and install it. The Vagrant and repo copy is just for you if you want to do your own development, mess with the Vagrantfile, or understand how the build process goes. You can also git clone: everything but the image is up on GitHub, and you can wget just the Vagrantfile as well. I'll come back to this. So, the environment, the logical architecture of the environment: it only took 29 or 40 tries with Neutron to get it exactly the way we wanted. Any of you who have been working with Nova or Neutron know how many configuration settings there are; nearly the same number exist in DevStack, except they're undocumented and a little more difficult to figure out. To get Neutron to behave exactly the way we wanted, to be able to do the flat networking and the LBaaS pieces, we had to go with a fairly pared-down configuration. VirtualBox is going to create vboxnet0; you basically don't want to mess with that, leave it on DHCP. Then there are vboxnet1 and vboxnet2, or whatever those happen to be named on your machine. The 192.168.33.0/24 network is your API and management network: SSHing into your virtual machine, connecting to the APIs, connecting over HTTP and from the CLI. All of the OpenStack services and the host OS (by host OS I mean the OS of the VM you're running) live on that 33 network. The second network we tagged is 192.168.27.0. At the operating system level it shows up as unnumbered; it's under the control of OVS, which uses bridges for it. We did go with OVS. I messed around with Linux bridge and Docker and a few different interesting things, but that turned out to be academic in the end, because I knew OVS was working fine and we had already done it that way. If you keep up with the GitHub repo, you'll see configurations coming out for some of the other options as well, including Ironic.
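If you'd rather script those two host-only networks than click through the VirtualBox GUI, a minimal sketch with VBoxManage looks like this. The vboxnet1/vboxnet2 names are an assumption; VirtualBox assigns the names on your machine, so check what it created and adjust:

```sh
# Create two host-only interfaces (VirtualBox assigns the names;
# on a fresh install they typically come up as vboxnet0, vboxnet1, ...).
VBoxManage hostonlyif create
VBoxManage hostonlyif create

# API/management network: the hypervisor side gets 192.168.33.1/24;
# the DevStack VM itself is statically addressed at 192.168.33.2.
VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.33.1 --netmask 255.255.255.0

# Data network for the OVS bridge: hypervisor side gets 192.168.27.1/24.
VBoxManage hostonlyif ipconfig vboxnet2 --ip 192.168.27.1 --netmask 255.255.255.0
```

Then attach those two interfaces as the second and third adapters of the VM, leaving the first adapter as NAT.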
But for now, everything's pretty straightforward with OVS. Here's a configuration example. One of the things Vagrant does after it builds the machine is go through a pre-requirement process where it creates accounts and keys and things like that. But one of the most important things it does is configure local.conf for us: it turns off the services we don't need and turns on the ones we do. That's pretty much the core services plus Heat and Ceilometer, and then Neutron minus the firewall and VPN services, but essentially everything else. There's a stanza covering the Neutron configuration and how the bridge adapter is connected, and there's our software stack over there, with the IP addresses and the services that are turned on. So, stable/kilo. This works with master as well, the latest cut, but for stability I left it on stable/kilo for the workshop; it worked fine with master too. The build process some of you were looking at has essentially three modes within the Vagrantfile. You can do a base OS install, you can do a staged install, and then there's what we're calling the complete install, which is the image you're getting: it has everything installed, plus the base configuration, the keys, security groups, the routes, all of that, as well as some Heat templates and some Ceilometer alarms already created. The process is: create the machine, install the OS, do the DevStack pre-install (groups, iptables, sysctl), then install DevStack. If you're going to do this yourself, here's a good call-out: I had much better success pre-staging the OVS switch configuration, at least adding the physical adapter and the switch. When DevStack did it, I had about a 50-50 shot of it working. Then the DevStack post-install does the basics, keys, security groups, DNS, and finally the advanced DevStack stuff, which is really just creating load balancers, adding members to pools, and creating VIPs for testing. It doesn't do all the lab exercises for you; it just verifies that everything works. At the end, it snapshots the VirtualBox VM for you. So, status check: how are we doing on people doing manual vagrant up builds from scratch? Did the wireless completely fall over? Anyone who's copied the image over, are we getting the images up? Now, the network settings. Can someone help them with the network settings? Adapters two and three: the management IP goes with the second adapter, and you can leave the third adapter as is. Scroll down to... yeah. The first adapter is NAT, and the second and third are host-only. The second adapter's host network needs the 33.1 IP, and the last one is on the 192.168.27.0 network, so it's 33.1 and 27.1 for those two. Sorry, the 33.2 is already in the image; that's statically assigned. What you don't see up here is the VirtualBox hypervisor side, which gets 33.1. So when you're configuring the networks for an imported image, make the first host-only network 33.1 and the second one 27.1. I don't think the second one is as important... actually, it is important, because the VMs are going to be connecting over it, so make it 27.1. As for credentials, everything is stack: it's root/stack and vagrant/stack, and in OpenStack it's admin/stack and demo/stack. Stack is the password.
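Going back to the local.conf piece for a second, here's a trimmed sketch of the kind of stanza Vagrant writes for us. The service names assume kilo-era DevStack; the real file in the repo is longer and the exact list may differ:

```ini
# local.conf (trimmed sketch, kilo-era DevStack)
[[local|localrc]]
HOST_IP=192.168.33.2

# Core services plus Heat and Ceilometer
enable_service heat h-api h-api-cfn h-api-cw h-eng
enable_service ceilometer-acompute ceilometer-acentral ceilometer-collector ceilometer-api
enable_service ceilometer-alarm-evaluator ceilometer-alarm-notifier

# Neutron with LBaaS, minus FWaaS and VPNaaS
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta q-lbaas
```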
Now, are you SSHing in? You can SSH a couple of ways. If you're using the image, connect with vagrant/stack. If you built from Vagrant, you can just do vagrant ssh from the build folder. But with the image, it's vagrant and stack, which we should have on a whiteboard or something. And this is the most important thing, getting the build going. So let's spend a minute; how are we doing on time? Thank you. OK. Machines up and going? Did you build from scratch, or are you using the image? If you built from scratch, you want to SSH to 33.2, so make sure you have the right network set up in VirtualBox. You may notice that the first adapter is NAT and the second is the 192.168.33.0/24 network. So you can try to SSH into the virtual machine at the 192.168.33.2 IP address, or if you're in the folder you launched Vagrant from, just type vagrant ssh. If everything worked correctly... did that connect? Takes a while to come up, yeah. Can we start the WebEx meeting? Using yours, right. Mine's still sitting at the Neutron LBaaS step, so if yours is still going, you won't be able to connect; it doesn't set up all the auth until the build is done. OK. Go to the devstack directory. In that directory you'll find a shell script called rejoin-stack.sh; launching that script brings up all of your OpenStack services. It's just ./rejoin-stack.sh, in that devstack directory. I'm on WebEx, so if anyone wants to join, just post the meeting details, the room details, in the chat. It's vagrant/stack, or root/stack, for that. As long as everybody's got their VM up, let's do that checkpoint rather than trying to bring up all the services right now, in the interest of time, because we might end up starting the troubleshooting process right here. Got it. On the VBox image? No, I have not yet; I will as soon as one of these guys takes over, which is in just a minute. What's the username? The WebEx address is hard to read up here: for the Q&A and the chat and the files, it's https://cisco.webex.com/meet, and it's not going to be my room, it's S-C-H-O-K-S-E-Y. I know you guys can't see that. One key thing to point out, since several of you have joined the WebEx: there's an error message when you try to import the box into the VirtualBox application. In that case, please do not double-click the .vbox file. Instead, create a new VM and point the hard disk at the VMDK file in the subdirectory. Everybody good with not importing the whole box? Create a new VM and use the existing VMDK file in the subdirectory for your hard disk. Yes, the operating system is Ubuntu, and it's Ubuntu 64-bit. Single proc, four gigs of RAM; I would turn off the ACPI and VT-x options, and you don't have to specify much beyond that. It should look like the one being built by Vagrant anyway.
So: three NICs, one NAT and two host-only networks, four gigs of RAM, Ubuntu 64-bit, and that's it. On the networks, I've got a whole bunch of networks here, but the ones you're working with should look something like this: whatever network it wanted to create, you should have one that's 33.1 with a 255.255.255.0 mask. This is the most tedious and least interesting portion of the show; it'll be more exciting when we get into the auto scaling. Anybody having errors with the image? Does anyone have a working DevStack image running? Booted, yes. OK. Good, all right. You built from scratch? Yeah, from vagrant up? OK. You must be connected to some secret Wi-Fi I don't know about, because mine is still taking its time. On the adapters: the first one is NAT, which it doesn't have to create, and then it's going to want two more. For those host-only ones, the 33.1 is the first one, and the second one should say 27.1, also a /24. So the first is the NAT adapter, the one after that is 33.1, and the one after that is 27.1. This might help too, unless it has flags in it; I'll just put it back in VirtualBox. It's a little hard to show because mine has a whole bunch of networks, but of the two networks that come in with the VM, you want one that says 33.0; vboxnet0 is the hypervisor's default NAT interface, and the second one is the 27 network. You can probably work with the gentleman next to you. Oh, if you copied the Vagrantfile and did vagrant up, you don't have to modify anything; it creates the networks for you. I would recommend you not start an install from GitHub right now, because it's not going to complete in time. There are some folks requesting more detailed steps on the GitHub installation, so let me just outline it. If you're going to go the GitHub route, it's going to take longer, but if you still want to do it: cd to the project folder, then go to the environments directory, which is where the Vagrantfile is, and just run vagrant up. It will install the image, configure the VM, and install DevStack from there on. That's the GitHub route. A few folks went vagrant up from scratch when they came in half an hour early; any of those folks who are up and running, just ask them what the steps were. You just clone the repo, or wget the Vagrantfile, and go. No? It still hasn't finished? Right, and that's why we made the image. I think we're kind of flawed on both routes, because it's tough to get the environment up that quickly, but it will eventually finish. So, there are keys; who needs a key? Just from a time perspective, you've got to mess with the networks a little, but it should be fine. Internal-only networking will not let you in; you'd have to be on the VM to use the CLI anyway, but if you wanted HTTP to Horizon, you can do a port forward, you know? So you used the image; did you copy it over? OK, OK. And you created the host-only NICs and put the right IP ranges on them? Did you do that? I don't know why it sometimes takes 60 seconds and sometimes 120, but it will come up, yeah. And yes, you have to have eth0 as a NAT network, because all of the configuration in DevStack and Neutron, all the config files that get written, are written using eth1 and eth2 and references like that.
And if you don't have that NAT eth0 network, the eth1 and eth2 stuff is not going to work. The password for Horizon is going to be admin and stack. You have the first working machine; you should get some kind of prize, all right, fantastic. Admin and stack, and then change the project up in the corner to demo; the stuff was built under demo. On the left, so not the username but the project context. Did you do a vagrant up? You used the image, OK. And was that a fresh VirtualBox install, nothing else there before? OK. So it seems like on fresh installs it creates the networks for you. Not sure, though; you used the disk, OK. So maybe it would help if I just went through this on the screen. You can see this is a VirtualBox install that happens to have one machine running. I'm going to create a new machine: devstack2, Linux, Ubuntu 64-bit, continue, 4096, use an existing hard drive, and I'm going to go find that drive. Everyone who's imported an image, this should all look familiar. You have the VirtualBox VM folder on the key or on the DVD. Are the DVDs still going around? Is anybody having a problem with that? Yeah, I know: 2005 called, they want their laptop back. So: the VirtualBox VM folder, DevStack, and then grab the drive. This might not be a great example because those networks already exist for me, and I happen to know mine are 11 and 12, but you just import, create the networks, and attach them. Whoever was stuck at cloud-init, did it finally wake up and keep going? OK, good. So how many people have working environments now? Three-ish, four-ish? OK, OK. If it boots without a message complaining about networks, you did it right. If it boots and says it can't find a network, that's because it probes against the NIC in the OS, decides it doesn't have a route for that network, and throws an error. So, instead of just standing here watching you guys build images, I'm going to finish off this section. OK, so that's the environment process. Now: what is auto scaling? All of us sitting in this room are part of a very interesting time in IT history. Cloud is probably the most disruptive technology since the advent of the mainframe. Within cloud, OpenStack is even further disruptive in that it's open source and making a huge wave, like Linux did. And within OpenStack, aside from service catalogs and provisioning and SDN and all of that, you've got technologies like auto scaling, bare metal with Ironic, and geo balancing that are the tip of the tip of the tip; I wouldn't call it the spear, it's more the tip of the grenade, because it's changing everything. It's a pretty interesting time for us in auto scaling. Some of you might have thought of or heard of these use cases; I don't know what your personal use cases are, but some of the projects I've been involved in are around academic and research HPC auto scaling. Internet2, a funded entity, almost like the relationship DARPA had with the Fed: there are 10-gig and 50-gig and 100-gig pipes between Notre Dame and Clemson and UCLA now, and they're doing MPI-based, HPC-based auto scaling over OpenStack across geographies.
It's changed the way they do research; they're not confined to the data center anymore. But also media, video and audio rendering and hosting, and of course analytics and big data, and security. I mentioned DARPA; they actually have a public project now called Planet Nine, where they're mapping every single circuit in the US, running big data on top of OpenStack and using auto scaling to process the data and model against cyber attacks. So there are federal and public use cases all over the place. But we have an idea of how it works, right? It's more of an analog flow than a digital flow: instead of over-buying capacity, or not having enough capacity, you use auto scaling to scale up, and scale down when you don't need it. Another analogy: if you only go to church one day a year, don't build a church. It's a poor analogy, but you don't build a church for the one day you go; you rent it, so you don't throw away money and time and resources. There's certainly cost savings, but what's more interesting than the cost savings is all the innovation and the way it changes how we look at technology, period. The components are pretty basic. You've got a server that's under load, and you have a stress meter on that service. In our workshop, and Shashank will go into deep detail on this, Ceilometer will be watching the number of connections against the load balancer, and when the connection count exceeds a certain amount, it starts spinning up web servers; when things cool off, it spins them back down. So we have the meter, we have the alarm, there's an action, and then there's a server result. The alarm can be hot or cold, up or down, and the action can be scale up or scale down. At first glance it's a little complex under the hood, but in the end it's just a few actions. If you haven't worked with Heat before, here's the way I think about it. OpenStack is an extrapolation of orchestration and API control over all the physical assets underneath the host operating system: API control over switches and storage and VMs and networking and IP addresses and all of that. Heat takes that same concept and puts it a level above; primarily, Heat is control of the virtual resources that OpenStack has access to. But even that line is getting blurred: there's the driver for Ironic, and there are NetConf drivers for physical network and load-balancer devices, so Heat can really have control over physical components too; it just depends. In my mind there's a layer of orchestration above and a layer of orchestration below. The orchestration that OpenStack does natively, below, is really around provisioning, access control, and monitoring who has access to what. Above, it's more of a workflow, a more agile, Puppet-Chef-Ansible-Salt kind of modality, where it actually takes the stuff that's consumed and does something with it, like building environments. Like many OpenStack services, Heat was born to compete directly with Amazon's CloudFormation.
So at its inception it was about building environments the way CloudFormation does, and it still supports the Amazon syntax and APIs. Building a service like: go create four web servers, put two on each network, create another network, put my app tier on that network and another one, put my database servers on there, plumb them all together, create the security groups, and deploy that as a service; that's a slam dunk for Heat. The growth and the various ventures around Heat are about doing LBaaS, going further up the stack, and of course auto scaling and things like that. It's a great tool. It is the orchestration service; it's part of the core. Where I've seen shops have issues is when they go half and half: if a shop is doing 50% of what they need in Puppet and 50% in Heat, and maybe someone else is doing another 20% in Ansible, that can get messy. Sometimes you have to, because the functionality just doesn't exist. I'm seeing a lot of people do orchestration outside, but it's not a great use case to have Puppet call Heat when Puppet can just talk to OpenStack natively. So I'm not sure how you're deploying it today or what your interests are; it's a solid native component, but it depends on how you want to leverage it. Sharmin is going to go into much more detail on Heat. While you're building, I'm going to turn the stage over to Shashank for Ceilometer, and we'll keep going. Thank you, Jason. Because we have a time constraint, I want to accelerate a little so we can save enough time for you to do the lab. In this section, I'm going to quickly go through Ceilometer, and more importantly, the Ceilometer integration with LBaaS and the Ceilometer integration with Heat. Let's do a quick overview. Ceilometer was originally designed and developed for collecting billing information; its main goal is to provide infrastructure to collect any information from any OpenStack project. Later on, users started using Ceilometer for other purposes, for example monitoring. If you want to collect statistics and use those stats to trigger alarms, Ceilometer is one of the best fits for you. To achieve these goals, Ceilometer, from an architecture perspective, is composed of multiple components. From left to right, top to bottom, the first is the agents. There are a couple of types of agent. One is the messaging-bus listener agent, which grabs events and notifications from the messaging bus and transforms them into samples. In today's workshop we're going to take a closer look at another type, the polling agent. The polling agent communicates with other OpenStack projects via their APIs. When the polling agent grabs statistics via the API, it hands them over to another key component, the publishing pipeline. On the receiving side, as a user, you can do a lot of magic on the received samples. For example, you can run those samples through a couple of transformers and turn them into new meters, and afterwards it's up to the publisher to send the meter information onto the messaging bus. On the other hand, you can have multiple different types of receiver.
For example, as a user you can write your own script, tap into the messaging bus, grab the raw information off the bus, run it through some business logic, and decide what to do with the result. And don't forget that Ceilometer also provides a default receiver service called the collector. The collector retrieves the data and saves it into the database, or into a file. To support interaction between the Ceilometer client and the Ceilometer system, Ceilometer also provides an API front end. Using the front-end API, as a user, you can get data out of the database, or push policy into it, for example an alarm definition. And since we're talking about alarms, let's take a quick look at the alarm subsystem. By definition, an alarm defines a few things. The first is which meter you are interested in. The second is the polling interval. The third is which condition you are looking for. And the last is what action to take when those conditions are met. These are the four key points I want you to remember, because we're going to dive into more detail in the later slides. For sure, Ceilometer also has other subsystems, for example notifications, but as you can tell, I'm not only running out of shapes, I'm also running out of room on the slides, so I intentionally skipped the notification subsystem. Now, I bet most of the audience here has used a load balancer before, and the Neutron LBaaS service is no mystery either. So today we're not going to spend a lot of time trying to turn you into LBaaS experts; instead, let's spend our energy on the Ceilometer and LBaaS integration. Has anybody used the neutron lb-pool-stats command before? OK. All right, so like I mentioned before, the polling agent interacts with other OpenStack services via API, in this case the Neutron LBaaS service. As a matter of fact, the API communication between the Ceilometer polling agent and the Neutron LBaaS service is identical to the API used by the neutron lb-pool-stats command. As you can see, this command returns a well-formatted table capturing a few key pieces of information: for example, the current active connections being processed by the load balancer pool; the total number of bytes in and out of the pool; and the last one, total connections, which is the total number of connections that have been processed by the load balancer. Also recall that in the architecture slides I mentioned the polling agent grabs all of these stats via the API, puts an envelope around them, and turns them into meters. So in the context of the Ceilometer and LBaaS integration, the polling agent actually creates a few meters for you. The first is network.services.lb.active.connections; this is the counterpart of the active connections column in the table at the top of the slide. Another two are network.services.lb.incoming.bytes and network.services.lb.outgoing.bytes, which map to the bytes-in and bytes-out entries in the first table. The last one is the most important for today's workshop, so let's focus on it: network.services.lb.total.connections. This is the meter we're going to use later on to trigger the alarm.
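If you want to poke at this yourself, the two views side by side look roughly like this; the pool name is whatever you called yours, so "mypool" here is just illustrative:

```sh
# Raw stats straight from the Neutron LBaaS API
neutron lb-pool-stats mypool

# The same data, wrapped into meters by the Ceilometer polling agent
ceilometer meter-list | grep network.services.lb
# network.services.lb.active.connections   gauge       connection
# network.services.lb.incoming.bytes       cumulative  B
# network.services.lb.outgoing.bytes       cumulative  B
# network.services.lb.total.connections    cumulative  connection
```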
But before we continue that discussion, let's pause, because you might have noticed the meter type of total.connections is cumulative, and the unit is set to connection. In other words, the value of this meter can only go in one direction over time: up. That means we cannot directly use this meter to trigger an alarm; we need a new meter that can fluctuate up and down over time. With that said, also remember what I mentioned previously: using the publishing pipeline, we as users can do a lot of magic on the received meters and their samples. So now let's use total connections as an example and see how we can transform it into a new meter, say a total connection rate. As a matter of fact, the publishing pipeline gives you a lot of flexibility; this is one of the things I find most useful about Ceilometer. In the /etc/ceilometer directory you'll find a file called pipeline.yaml, and this YAML file defines the coupling between a source of samples and its corresponding sinks, for transformation purposes and for publication of the metering information. If you open the file, the first thing you notice is the sources section. There you find the definition of the meter you're interested in, and how often you're going to poll it. In the sinks section, you find the name of the transformer, in this case rate_of_change, and right below that, what you want it to do: map the original meter, total connections, to a new one called total.connections.rate. Toward the bottom you can also see that the type of the new meter is no longer cumulative; it becomes a gauge, and the unit is no longer connections but connections per second. At the bottom, the publishers section defines how you want to publish the samples for the new meter. Here we just use the default recommended value, notifier://, which means you want to publish the metering information onto the messaging bus via the oslo.messaging library. So that's the pipeline, and at the end of it we have a new meter called total.connections.rate. Don't forget that on the other end of the messaging bus we also have the collector service, which retrieves the raw data off the bus and saves a copy into the database or into a file, which in turn is made available through the Ceilometer API. So now, as a user, if you issue a ceilometer sample-list command, you'll see this new meter showing up in the output. The type, the unit, and the frequency of the timestamps are all as defined in your pipeline.yaml file. And sure enough, in this case, because we just started the load balancer, the volume is 2.0.
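Here is roughly what that stanza looks like in /etc/ceilometer/pipeline.yaml: a minimal sketch of the source-to-sink coupling just described, using the stock rate_of_change transformer (the source and sink names are whatever you choose):

```yaml
sources:
    - name: lb_pool_source
      interval: 60                                  # poll every 60 seconds
      meters:
          - "network.services.lb.total.connections"
      sinks:
          - lb_connection_rate_sink
sinks:
    - name: lb_connection_rate_sink
      transformers:
          - name: "rate_of_change"                  # turn the cumulative counter into a rate
            parameters:
                target:
                    name: "network.services.lb.total.connections.rate"
                    type: "gauge"                   # no longer cumulative
                    unit: "connection/s"
      publishers:
          - notifier://                             # publish onto the oslo.messaging bus
```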
A quick aside: at first we used CPU to trigger the scaling, and that was difficult, because when you start hammering the machine with CPU load to create an auto-scale event, you're auto-scaling the thing you're already hammering, and making the problem worse with more machines, because you're using a tool on the box to generate the CPU load. And then I said, you know, Shashank, there's this other thing, LBaaS connections. It didn't do precisely what we wanted, but he was able to take it and say: instead of cumulative connections, turn that into a connection rate. That was pretty cool. So the call-out is that there may be stuff in Ceilometer that doesn't precisely fit your use case. You may want to know block growth in a VM over time versus total block growth, a rate of change, something like that. And you can just, I say "just", he's the one who did it, but you can just call Shashank and he'll write a new meter for you. Kidding, but... Yeah, this is actually one of the things I like about Ceilometer: it's highly customizable. We actually created our own agent that grabs a lot of information, not only at the infrastructure level but also at the application level, sends it over to the collector, and then we integrate that data into our database; that's part of our product. One thing I want to highlight here: by default, the polling interval is 600 seconds, which is 10 minutes. I don't believe we have either the time or the patience to wait 10 minutes before the new meter shows up, so I intentionally trimmed the interval down to the minimum value, which is 60 seconds. In your lab, when you go through the exercises, I encourage you to issue all of these commands and try them out yourself. Another one that's pretty important, in my opinion, is the ceilometer statistics command. It gives you the minimum, average, and maximum value of the new meter over a certain period of time. All right, now let's talk about alarms, because the Heat and Ceilometer integration relies heavily on alarms. On this slide, what I'm trying to share with you is how we can manually create a Ceilometer alarm without any assistance from Heat. Remember that by definition, an alarm needs to define a few things: number one, which meter you're interested in; two, how often you want to evaluate the meter; three, which condition you're looking for; and last, what action you're going to take. What this command actually says is: we're interested in the meter called network.services.lb.total.connections.rate, the new meter we just created with Ceilometer; we're going to take a very close look at this meter and evaluate its value every 60 seconds; and the condition is, if we see the scenario where the average value of this new meter is greater than the predefined threshold, which is 2.0, for three consecutive periods, then we trigger the alarm. One thing I want to highlight: it sounds like a lot of tasks, but all of the tasks I just mentioned are covered by the alarm evaluator. At this point we're still missing one thing, which is the action. Does anybody know what the default action is when an alarm is triggered? It's actually to do nothing except save the alarm into the log file for debugging purposes, and that particular task is handled by the alarm notifier. So here, as part of your lab, and I remember putting this in the lab guide, I encourage you to try ceilometer alarm-list prior to the integration with Heat and see what the outcome of that command is.
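For reference, the manual version of that alarm looks something like this; a sketch, with the alarm name being whatever you choose:

```sh
ceilometer alarm-threshold-create \
  --name lb_conn_rate_high \
  --meter-name network.services.lb.total.connections.rate \
  --statistic avg \
  --period 60 \
  --evaluation-periods 3 \
  --comparison-operator gt \
  --threshold 2.0
# No --alarm-action is given here, so the default behavior is simply
# to log the state transition; that part is handled by the alarm notifier.
```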
So now let's pull everything together. I want to quickly touch on the Ceilometer and Heat integration with a single slide, because I already explained all of the principles in the previous slides. Remember that the main goal of Heat, as an orchestrator, is to take all of these tedious tasks off you, the user, through automation. In other words, what I shared in the previous slides, the alarm-threshold-create command, you can just forget about. You don't have to remember all of the syntax and what the parameters really mean, because Heat is going to automatically provision the alarm definition for you via the Ceilometer API. However, I still want you to remember the principles: the definition of the new meter, how often we're going to poll it, and what condition we're looking for, together with the action. Because when we go through the lab exercise, all of those concepts, all of those parameters, you actually need to define as part of the Heat HOT template. And at the same time, I want to point out two things. First: with the Heat template, one key difference is that Heat is actually going to create two alarm definitions for you. The first captures the scenario where the value is greater than the predefined threshold; the second covers the scenario where the average value goes below the threshold. Using these two alarms, Heat gets notifications from the alarm notifier and uses that information to decide whether it needs to scale up, by adding a VM instance to the load balancer pool, or scale down, by removing a VM instance from the pool. That's the first key difference I want you to remember. Second: in the previous slides I talked about the default action when you manually create a Ceilometer alarm definition, which is to log the alarm into the log file. With the Heat template, the action changes to another type, called an HTTP callback. In other words, Heat gives the Ceilometer notifier a predefined URL, and when the alarm is set off, the Ceilometer notifier makes an HTTP request to that URL containing all the details of why the alarm was triggered. So this is the last slide of my section. Are there any questions I can help you with? Sure. Well, in our case we use HTTP because we're trying to simulate a web server farm, but in general there's no restriction on what a load balancer can proxy, right? So it can be anything. Yes, you're absolutely right. That is correct. Exactly, yeah. In our lab, I intentionally set the threshold to two, which is a very low number. In other words, when you go through the lab exercise, there's one particular step where I'm going to ask you to use Apache Bench to generate HTTP requests toward the web VIP, as you mentioned. In that case, very quickly you'll see the value go through the roof, which in turn triggers the alarm. Any other questions before I turn the floor to Sharmin? Thank you very much. Thanks, Shashank. So I'm going to cover concepts around Heat, from the basics through the more advanced features Heat provides for auto scaling. All the conceptual information we cover today will be in the context of the lab exercises, just in the interest of time.
I'm going to quickly run through the terms used in Heat. To start with, we have the notion of a resource, one of the most fundamental building blocks in Heat terminology. A resource can be anything: a virtual machine, a network, a port, a security group, a subnet. All the orchestration in OpenStack Heat happens around resources. The other important term is a stack. A stack is an instantiated form of a resource; a collection or composition of those instances forms a stack in Heat's runtime terminology. A template, on the other hand, is the specification you use to compose these resources together. Then you have parameters, which are configurable input variables; you can set them through the CLI, or default them within your template, and of course parameters have types and descriptions. The other important aspect of Heat is the outputs section, and the labs don't cover it. The docs generally say it's for displaying the output of the Heat runtime commands, but in reality the outputs section can be used to expose attributes, and those attributes can be used in subsequent nested templates. We haven't included a lab exercise for that, but I want to call out that the outputs section has real merit when you use it appropriately with the get_attr function. So: we've walked through resources, what a stack is (a runtime instance of a composition of resources), and what a template is. The very first lab exercise is a very simple virtual machine, a simple server. It spawns a VM, injects an SSH key, creates a port, applies security groups to the port, associates a floating IP, and installs a small netcat utility just to simulate HTTP request/response scenarios. That's pretty much your first lab exercise. Moving on to the next one: this is a template solely focused on load balancer resources, on all the Neutron resources that get created as a result of instantiating a load balancer. The slide is highlighting only the sections we're talking about right now. The parameters that go into this Heat template are the external floating network, which really serves as the front door you'll hit, and an internal subnet, because that's where your internal pool gets created. In terms of resources, it goes ahead and creates the health monitor, and it creates a pool; as a reminder, at this point it's an empty pool, for HTTP, with a round-robin policy. It creates a VIP as well: this particular resource block not only creates the pool, it creates a VIP on the internal subnet. You create a load balancer resource and associate it with the pool in the next resource. And then, of course, you have the floating IP against the external network, and the association of the floating IP with the VIP we created in the previous Neutron pool resource. So when you run a Heat stack against this template, what you should typically see is an empty load balancer pool. You should make sure the state is up and the health monitor is in active state. If you notice, the members are empty right now; we haven't added any VMs. There are no IPs associated with this pool yet for the backend configuration.
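Pulling those resource blocks together, a stripped-down sketch of that lab-two template might look like this. The property names follow the kilo-era OS::Neutron LBaaS resources, but the parameter names are illustrative, not the exact ones in the repo:

```yaml
heat_template_version: 2014-10-16

parameters:
  external_network: {type: string}    # floating/external network (illustrative name)
  internal_subnet:  {type: string}    # subnet the pool and VIP live on

resources:
  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: HTTP
      delay: 5
      max_retries: 3
      timeout: 5

  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      lb_method: ROUND_ROBIN
      subnet_id: {get_param: internal_subnet}
      monitors: [{get_resource: monitor}]
      vip: {protocol_port: 80}        # the VIP is created on the internal subnet

  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      pool_id: {get_resource: pool}
      protocol_port: 80

  vip_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: {get_param: external_network}
      port_id: {get_attr: [pool, vip, port_id]}   # attach the floating IP to the VIP port
```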
I apologize, I forgot to mention: we are using HAProxy here, which is the default plugin for the load balancer right now, so this is an HAProxy setup for your DevStack installations. If you look at the VIP now, it's associated with the external floating IP when you do a floating IP list against the internal VIP. At the end of this lab exercise, you should be able to hit the floating IP and get a 503 Service Unavailable, which is a valid scenario, because you don't have any backend servers connected to the frontend VIP at this point. Part two of lab exercise two is to add the members. One thing to note here, and mind you, I've only put in snippets of what's relevant to our conversation right now: the webserver YAML is exactly the same as it was for the simple server, with the addition of the pool ID we created in the previous exercise for the internal subnet. In the resources section of this webserver template, you additionally have the association of the member. If you see here, the VM instance's first address, the IP address of the instantiated VM, is taken and associated with the pool as a member of the pool. There are some other concepts we're going to introduce you to along the way. There's the notion of resource groups, introduced as part of Icehouse. A resource group is nothing but identically configured resources; not identical resources, but identically configured resources that can be clubbed together. The use case we're trying to demonstrate here is saying: add X amount of capacity to the load balancer pool. That's what we're doing with the capacity count: we're saying, by default, launch this resource group with two members and associate them with the load balancer we configured in the previous exercise. That's all this is doing here. We also introduce the notion of nested resources, and there are different ways of nesting templates, from an optimization perspective or a composition perspective. In this case, we're using an environment.yaml, and through the resource_registry definition you're seeing here, we're saying this is a provider resource. The webserver.yaml is really a provider resource that's aliased to a custom declaration of a resource type in the environment file, the "scale" type, and we just reference that scale type back in the resource definition of the members.yaml. So yeah, if you see here, that's the reference to the scale alias. And the manner in which you'd launch this particular Heat template is to just run a heat stack-create for the member stack, which is what actually launches the members of that resource group, and you'd provide the capacity count, in this case, on the command line. That's the only difference between the previous exercise and this web load balancer one. At the end of this exercise, you should be able to hit the external VIP and get alternating IPs from the internal subnet addresses configured for those VM instances; basically, that tells you your round-robin policy is in place and both VMs are responding in that fashion, as shown in the sketch below.
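The nesting piece in miniature: an environment file aliasing the provider template, and a resource group that stamps out N copies of it. The alias and parameter names here only roughly mirror the lab's; treat them as illustrative:

```yaml
# environment.yaml -- registers webserver.yaml as a custom "scale" resource type
resource_registry:
  "My::WebServer::Scale": "webserver.yaml"
```

```yaml
# members.yaml -- identically configured members, clubbed into a group
parameters:
  capacity: {type: number, default: 2}   # how many members to launch
  pool_id:  {type: string}               # pool from the previous exercise

resources:
  member_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: {get_param: capacity}
      resource_def:
        type: My::WebServer::Scale      # resolves to webserver.yaml via the registry
        properties:
          pool_id: {get_param: pool_id}
```

Launching it would then be something like: heat stack-create -f members.yaml -e environment.yaml -P capacity=2 lb-member-stack.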
The last exercise is focused on auto scaling. Auto scaling is another addition that came in with Icehouse, and it basically means you can scale an arbitrary number of resources up and down. It uses several parameters and properties that you can configure: the max size, the min size, and a cooldown policy, and we'll see in the lab exercise how these come into play; the cooldown makes it wait before it actually goes ahead and performs the next action. Again, we're using the same scale resource reference. Shashank pointed out in his section how he was manually creating all those alarms and meters. In this case, as a prerequisite, you are required to load the custom meter pipeline for load balancer total connections, but you don't have to go and manually create the alarms. In lab three we've actually created two alarms; for the purposes of this slide I'm only showing one, but there are two configured: the rate-high alarm and the rate-low alarm. The rate-high alarm has a configured threshold of two, which means that if the rate of connections per second crosses the threshold of two for three consecutive evaluation periods, each period being 60 seconds, in other words, if after a consecutive window of 180 seconds the connections-per-second rate is still above two, then the alarm fires, and the alarm action says: go ahead and execute the scale-up policy. And in the scale-up policy, all we're doing is saying the adjustment is a change in capacity, and what do I want to do? I want to scale up by one. So if the comparison operator says it's greater than two, go ahead and change the capacity, scale it up by one, in which case when you do a nova list you should start seeing a VM getting booted up. So that's pretty much the auto scaling exercise.
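Condensed to its moving parts, the lab-three wiring looks roughly like this: an autoscaling group, a scale-up policy, and the rate-high alarm pointing at it. The scale-down pair is symmetrical, and the names here are illustrative rather than the repo's exact ones:

```yaml
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      cooldown: 60
      resource:
        type: My::WebServer::Scale        # same provider resource as before
        properties:
          pool_id: {get_param: pool_id}

  scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      scaling_adjustment: 1               # add one member at a time

  rate_high_alarm:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: network.services.lb.total.connections.rate
      statistic: avg
      period: 60
      evaluation_periods: 3
      threshold: 2.0
      comparison_operator: gt
      alarm_actions:
        # Ceilometer delivers the alarm as an HTTP callback to this URL
        - {get_attr: [scaleup_policy, alarm_url]}
```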
Some of the enhancements in Icehouse and beyond, just to highlight: there was the introduction of resource groups and of provider resources, enhancements made to optimize the manner in which you can effectively use Heat across many different use cases. Config resources were another one; config resources are typically used in conjunction with a software deployment against a target, which is really a server. Kilo actually introduced a very interesting blueprint for snapshotting an entire stack, which means not only would your resources be captured at that point in time, including your volumes and your networks, but you could actually roll back to that point. In terms of improvements to the authentication model: this was a big issue in Grizzly, in case any of you ever worked with Keystone in Grizzly, specifically with some of the CloudFormation signals. Some of the actions in Heat require resource creation, of course, and consequently sometimes admin roles, and the owner or creator of the stack had to actually be an admin user, which is not an optimal security model. So starting in Icehouse, and actually in Juno, they introduced the notion of a Heat Keystone domain: they leveraged the Keystone domain feature and created a separate domain for Heat, wherein all the users that get dynamically created for Heat are confined to that Heat domain, and it is managed by the Heat admin domain user role. Now, in regular OpenStack installations you'll need to configure this manually: you'll have to create the domain and give the heat admin user the admin role on it. In your DevStack configuration it's automatically preconfigured by DevStack, so you don't have to do that. This requires Keystone version 3, which is important to note. Also, TripleO Heat templates are available now. TripleO is trying to do something like an overcloud: they start with the notion of building a small, contained cloud, the undercloud, which then goes and builds large-scale OpenStack clouds as the target use case. The overcloud work is heavily using a bunch of these features, like resource groups and provider resources, and the software config and deploy resources. There's also software component deployment: config will let you run a bash script or a Puppet manifest, but component gives you more of a key-value mapping, so you have more control over how you produce the configurations. That's pretty much all we have for the enhancements, and that covers most of our Heat exercises for today. Any questions? So, if you actually did a keystone user-list, you would see those generated Keystone users. It's not the resource that generates the users; it's the requirement that the Heat API be able to go and create the resources it creates that generates these users for you. All right. So as we build up the lab, we can probably take more pointed questions; at this point, I'll turn it back to you, Jason. Thank you. With GitHub, I apologize, we may not have enough time to finish the entire lab exercise today, but I do want to share the pointer to our lab guide, because hopefully you already have the environment on your laptop by this point. Later on, if you go to GitHub, we share all of the presentations, the images, and the lab guide, so you can still follow the lab guide and do the exercises from home. This is the lab guide I created on the GitHub page. Surely it still has some room for improvement, but I want to use this opportunity to quickly go through it with you. The lab guide has a few major sections. The first is the lab environment setup; I believe Jason's slides do a better job than I would, so please use his slides as a reference to understand the lab environment and the topology. And if you go out there now, you'll see two different guides, one for the environment build, which was done separately because of the bandwidth thing; by the time you get back, version 1.4 will have the build-from-Vagrant-from-scratch instructions as well as the labs. And as Sharmin just pointed out, there are three major labs captured in this lab guide. By the end of lab one, you'll have used a Heat template to create a very simple web server; the web server itself is just a VM, nothing fancy, and it's there to help you understand the layout of a Heat template, the syntax, the concepts Sharmin just shared with you. Lab two has two parts, but by the end of it, you should be able to create a load balancer with multiple web servers in the pool. And by the end of lab three, you should be able to rely on the Heat template to automatically scale up and scale down.
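If you're following along at home, each lab boils down to a heat stack-create against the corresponding template. The filenames below are placeholders; check the repo and lab guide for the actual names:

```sh
# Lab 1: a single web server from a simple HOT template
heat stack-create -f lab1-server.yaml lab1-stack

# Lab 2: the load balancer first, then the members via the provider resource
heat stack-create -f lab2-lb.yaml lab2-lb-stack
heat stack-create -f lab2-members.yaml -e environment.yaml -P capacity=2 lab2-member-stack

# Lab 3: the auto-scaling stack
heat stack-create -f lab3-autoscale.yaml -e environment.yaml lab3-asg-stack
```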
And also, as I mentioned, there's a small tool built into the image, Apache Bench; you can use that tool to pump a lot of HTTP traffic into the load balancer VIP. Across all of the sections, we've laid out step-by-step procedures to help you stay on the right track: what command to use to verify and validate your environment, what prompt you should expect to be at in each step, and what command to use for verification purposes. We also spent a lot of time adding extra information for you, for example the tips, because some steps in there we intentionally added as trick questions. For example, if you create a load balancer and list the pool members, you won't see anything, because at that moment there are no pool members yet. That's the kind of tip we added to help you, or actually to keep you thinking, as you go through the lab; we tried this out with other audiences and they liked the layout very much. And again, this is a sharing session; we're not trying to sell anything. We really encourage you to go through the lab exercises whenever you have a free cycle. One very important thing: we also want to hear your feedback. We're here to solicit your feedback, so that next time we have a chance to offer the same workshop, we'll have better material for our audience. Do you have anything else you want to add? I think we have time to try a demo, if you want. We have a preconfigured VM running, and we can probably just fire it off, do some nova list commands, and show them stuff. Or we can just troubleshoot; that's fine. Yeah, you have one and... mine is down right now. It doesn't take long to... yeah, we'll bring it up; let's do that. You can do it from base, right? Or we can do it from base; I've got the image in there. Or you can use mine. Yeah, let's do that. OK, all right. So, what we're going to do... I thought the third time was a charm. The first time we did this, we ran out of bandwidth on the hosted environment. The second time, we brought in all these machines, and that fell over because there were too many people. The third time, I thought, well, everyone will just build their own, and we'll do that. But I think that was not awesome either. So I don't know how many people have machines up. You can take the lab guide and... OK, so everyone circle around these guys here. Like we said, you can use the lab guide and go through all of this. What we're going to do in the last few minutes we have is show you what it looks like, so you can see an auto-scaling event and see the traffic going in and things like that. So he's bringing that up. For the folks that started with vagrant up, was it a bandwidth thing, or did stuff just time out? Yeah. Still going? Well, it's persistent, anyway. So I don't know what the answer is; maybe it's more of everything, and it would help to have more servers local to the host. But what you're going to see in the demo, like we've been talking about: at the end of the Vagrant run, it creates some load balancers and some VIPs and floating IPs, just to test, just to sanity check, and then it deletes all of that. The lab starts, and you take a basic Heat template, and it builds a small server environment.
You add on to that exercise with some load balancers, and then the third one is the auto-scale template, which brings all the components together: Heat, Ceilometer, LBaaS, and Nova. Then we take the Apache Bench tool and start hitting it. The custom meter that he wrote counts the connections; when the count exceeds the threshold, the alarm triggers and machines start coming up, and then we turn the bench off. So it's actually kind of anticlimactic after we've built everything: look, machines are popping up, and now they're not. But it works. So we're going to try to get a demo out in the last six minutes, so you know what the expected results should look like. I'll see if my Vagrant run finished.

Okay, well, here's something, actually. My Vagrant run finished. Before I vagrant ssh into the box, you can actually cat the Vagrantfile and see some of the stuff it does at the end. While we're waiting on these guys: at the end of the Vagrantfile there's just a shell script section, and after the build it goes in and creates security rules, adds DNS to the subnet, creates a custom instance, and so on. So all of those commands are right there in the Vagrantfile if you want to do this kind of thing yourself. It's also interesting that the first thing it does when it completes is print a time check. This is the end of the Vagrantfile on a successful run, and it does a lot of stuff: an hour and 16 minutes, when the Vagrant build should take about 20 minutes with decent bandwidth. But, like any good engineering and innovation, this came out of pure laziness: having to build an environment, add rules, add security groups, add networks, name them, add DNS, add images, all that stuff. All of it is at the end of that file, and you can reuse it in the lab.

So anyway, you just vagrant ssh into your environment; I did nothing special here except a vagrant up. This last little stanza tests the VIPs. One thing we didn't cover: the interesting thing about the load balancer is that, contrary to popular belief, you don't put it on your public subnet. The load balancer typically resides in the same subnet as your VMs, and then you attach a floating IP to the load balancer VIP. What this was doing just now was connecting to the internal and the external VIP. And you'll see here, when it connects, it's hitting the same IP every time, the public VIP, and what comes back is one, two, three, one, two, three, just like a load balancer should.

Assuming you have the machine running and everything working, you should be able to go to 33.2 and get a login. We're going to run out of time on the demo, and I don't know what it's doing on that first login; it does come up, it just takes a minute. For the people who used the from-scratch environment: did it come up clean, or did you have to do anything extra? The rejoin, right? Was that on the image or on the boot-from-scratch build? Okay. I would give it another shot with the same file; it's not Vagrant, it's the connection. Anytime you reboot a DevStack machine, you always have to run rejoin.
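For reference while the demo comes up, the lab-three template that ties those pieces together is shaped roughly like this. It's a sketch, not the lab's exact template: the meter name stands in for the custom connection meter mentioned above, and the image, flavor, and threshold are placeholders.

    heat_template_version: 2013-05-23
    resources:
      asg:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1
          max_size: 3
          resource:
            type: OS::Nova::Server    # the real lab nests a full web-server template here
            properties:
              image: <image>          # placeholder
              flavor: m1.tiny         # placeholder
      scale_up:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: { get_resource: asg }
          scaling_adjustment: 1
          cooldown: 60
      too_many_connections:
        type: OS::Ceilometer::Alarm
        properties:
          meter_name: connections     # stand-in for the custom meter
          statistic: avg
          period: 60
          evaluation_periods: 1
          comparison_operator: gt
          threshold: 100              # illustrative threshold
          alarm_actions:
            - { get_attr: [scale_up, alarm_url] }

And the load we throw at it is just Apache Bench pointed at the load balancer's floating IP, something like: ab -n 100000 -c 100 http://<lb-floating-ip>/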
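The VIP plumbing just described looks roughly like this in CLI terms, again with illustrative names and IDs, and the one-two-three round robin is easy to eyeball with a curl loop.

    # The VIP sits on the same tenant subnet as the pool members...
    neutron lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
        --subnet-id <private-subnet-id> web-pool

    # ...and is exposed by associating a floating IP with the VIP's port
    # (neutron lb-vip-show web-vip reports the port ID).
    neutron floatingip-create public
    neutron floatingip-associate <floatingip-id> <vip-port-id>

    # Responses should cycle through the pool members: 1, 2, 3, 1, 2, 3, ...
    for i in 1 2 3 4 5 6; do curl -s http://<floating-ip>/; done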
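For anyone writing down that rejoin step, it's just this; the path depends on where your DevStack checkout lives in the VM.

    cd ~/devstack            # or wherever DevStack was cloned
    ./rejoin-stack.sh        # reattaches the screen session with all the services

    # If services still misbehave after a reboot, RabbitMQ sometimes fails
    # to auto-start; kick it first, then rejoin:
    sudo service rabbitmq-server restart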
And if you reboot this and it doesn't come up and you're thinking nothing works, that's usually it: Rabbit has a problem auto-starting for some reason too, so try rejoin, and if that doesn't work, restart Rabbit as well. I really appreciate your interest, especially all the folks who came early. I'm sorry there wasn't enough room and we didn't have enough bandwidth. We really tried to make the local-machine approach work, and it just didn't pan out for us. This is not our day job; this is just stuff we do because we're just that dorky and really like it. So send us an email; we're building environments and doing this kind of thing all the time. I want you to take away something interesting, so just let us know if you want extra help and I'll help you out. I've run it on VMware Fusion too, if you want to go that route. What's that? Yes, it is in the deck, but it's J-A-S-G-R-I-M-M, like Jason Grimm, at Cisco.com. Guys, we ran out of time on the demo as well, but do you have anything else to add?

Yeah, do try out the exercises on GitHub: download the Vagrant script and try to bring it up. You should be able to get the lab exercises up and running, and feel free to reach out if you have any feedback, any questions, or if something's still not working well for you. Time permitting, we'll try to make sure it runs successfully for you. Thank you for your time. Thank you.