So, the talk I wanted to do today is about Heat: how to build environments with Heat, and then go deeper into building scalable and highly available environments. I'm going to start with a short presentation, and then we'll move into a hands-on lab where you'll have to team up, because there are quite a few people here. You'll try to launch a stack and see how that looks and how it actually works. I was going back and forth on what I wanted to talk about. Do I jump straight into tools and say, okay, that's what we use for load balancers, that's what we use for databases and so on, or do I start slow? Since they scheduled this talk as the first one, I figured I'd start slow and give you a general overview of Heat and related tools. There are actually a lot of talks at this summit about Heat and what it does; right after this one there's a talk from the Heat core developers about the new features in Juno, which are really cool. So if you're interested in orchestration, you should definitely go to that one. Let's start slow, then. When we deploy applications to production, they go through multiple stages. First, before you deploy anything, you have to get infrastructure, right? You need some environments: some VMs, so compute, network, storage, all of that. After that's set up, you typically install the operating system and then what are often called platform services: databases, maybe web servers, anything else your application needs that isn't the application code itself. Maybe it's PHP or Python or Django, whatever your application requires.
The next stage after that is actually deploying the application. The artifacts here are typically the application code itself and config files, and at this stage we also typically configure HA for those applications: while we bring them up, or during their configuration, there are some extra changes to whatever HA tool we use. And then there's the work stage. Your application is up and running, it's all good, it's working, hopefully for a while; ideally this work stage is longer than all the rest. Here a lot of data gets collected: applications write logs, they produce metrics, history, and the actual data. And again, hopefully they run for a while, but at some point almost every application fails. That doesn't necessarily mean it's a faulty application; that's just life. Ideally, what we want here is to know when the application failed, so we can either react to it manually, or have the tool chain we use for monitoring trigger some events to fix the application. Sometimes, if it is an application fault, you have to patch or upgrade or do some code rewriting. When we talk about availability for applications, you'll hear things like active-passive HA or active-active HA. What's the difference? With active-passive, even though you have multiple instances of the application, only one is active, only one is working. The other is deployed and ready to go, but it isn't serving anything. When the main, master instance fails, the reaction is to start the second one. With active-active, both of them work simultaneously. They both accept requests, and you replicate or synchronize data between them. When one of them fails, the other one still works, and that's the ideal solution for us.
So when we look at this application-in-production lifecycle, there are different tools, and different groups of tools and names people typically use, and we can look at where OpenStack plays most of the time. For provisioning resources, so provisioning compute, network, and block storage, that's where the OpenStack core programs work, and Heat helps there to orchestrate. For deploying platform services and deploying the application, you typically have some repositories, either properly defined or just somebody's laptop with the code that you have to scp over or something like that. There are deployment tools that can help, and they use that repository to deploy the same configuration over and over; configuration management tools also help at this stage. During the work stage, applications typically do logging, and you want to collect those logs. You don't really use them while the application is running, because while it's running, why would you care? You use them when the application fails, because that's when the logs get analyzed to find the reason for the failure. Also during the work stage, you collect metrics, and possibly plug them into something like Graphite so you have pretty graphs to show your management. Again, nobody really cares while it works; people only start caring when it fails. This work/fail stage is where Ceilometer plays a crucial role in OpenStack, because it collects the metrics and it also lets you trigger a reaction to them. As for monitoring tools: Ceilometer is a relatively new program in OpenStack that keeps gaining features and functionality, but there are still a lot of things missing. It helps you monitor cloud resources, but on the side of the underlying hardware infrastructure, a lot of the work just isn't there yet.
So if you really want to have everything covered, you're probably going to have to run some extra monitoring tools. The work stage is also where, if we have multiple instances of the application, replication happens, so data is synchronized between them; maybe you take backups, maybe not. And it's all good until it actually fails. So if we look here, Heat helps us in the initial stage of provisioning resources. It also helps with deployment, but Heat by itself doesn't really deploy applications on top of VMs. We'll see what helps Heat do that, or what Heat can trigger; by itself, plain Heat isn't actually doing anything on your virtual machines. So what is Heat? There's a definition you can find in the OpenStack documentation: Heat implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files that can be treated like code. I have my own definition of Heat, collected from multiple sources, a lot of them OpenStack documentation and a lot of them not: Heat is a template-driven agent that helps deployers, the people who deploy things, deploy the infrastructure and trigger deployments of applications. When we look at the Heat architecture, Heat consists of several daemons. There's heat-api and heat-api-cfn; both of them are web servers. They accept requests and process them by sending them to a message queue for the Heat engine. The Heat engine is the main worker, the busy guy. The APIs, again, are really just RESTful web servers that don't do very much; the Heat engine is the one that does all the work of deploying infrastructure. It does that by talking to the other OpenStack services, like Nova, Cinder, and Neutron, to create resources and to create what's called a Heat stack.
A stack is a combination of multiple resources that an environment typically consists of. Heat also works closely with Ceilometer, because they complement each other: Heat can create Ceilometer metrics and alarms, and Ceilometer alarms can lead to something happening in Heat. And in the end we have our cloud resources, which are, again, compute, so VMs, networks, storage, and so on, managed by Heat and monitored and metered by Ceilometer. If we look at Heat from the point of view of the virtual machine: I mentioned that Heat by itself doesn't really do anything on your virtual machines. Typically, on the virtual machine there's a number of scripts or tools or daemons that run, get information from Heat, and then apply that work on the VM locally. One setup you'll find in a lot of references, when people talk about Heat and show a template, uses images that have the CFN tools. The CFN tools are a collection of scripts that you install in the image so that they can talk to Heat: Heat can talk to them to give instructions, and they can signal back, hey, something happened, so that Heat can continue processing its stack. Heat, then, is a template-based engine; Heat operates on templates. A template typically consists of three main parts. First, parameters, which are optional: those are variables you want to include in your template. If you want to reuse the same template over and over, you probably want something the user is going to enter. One of the examples a lot of articles use is a WordPress application, where you enter a bunch of parameters, including database passwords and so on. So that's the part of the template where you define the input parameters for your template.
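As a rough sketch of what such a parameters section can look like, in the CFN-style template syntax used later in this demo (the parameter names and values here are illustrative, not the exact ones from the lab template):

```yaml
HeatTemplateFormatVersion: '2012-12-12'
Description: Example showing the parameters section only

Parameters:
  DBName:
    Type: String
    Default: mydb                   # used when the user enters nothing
    Description: Name of the database to create
  DBPassword:
    Type: String
    NoEcho: 'true'                  # don't echo the value back to the user
    AllowedPattern: '[a-zA-Z0-9]+'  # validation via a regular expression
```

Anything without a `Default` has to be supplied by the user at launch time; `AllowedPattern` covers the regular-expression validation mentioned below.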
The main part is resources. It's a description of the OpenStack entities, the OpenStack resources, that Heat creates, joins together, and operates on. And then at the end you can have outputs. Outputs are optional too; they're extra information you want to give back to the user after the stack, the environment, is deployed. When we try to apply Heat to scalable applications, there are two useful features. One is Heat autoscaling. Autoscaling is a feature that lets you specify, in a template, a scaling group: a set of resources that you apply a scaling policy to. You can define different scaling policies; you can say, I want to grow, or I want to reduce, the number of resources. The other important part of autoscaling is integration with something like Ceilometer or CloudWatch that is actually going to monitor those resources and trigger the policy. And the way the Ceilometer alarms and the Heat resources know about each other is that when you create the Ceilometer alarm, you just add metadata to it; that's the connection. So that's the short version of Heat, and we're going to move to the hands-on part. There are quite a few people here, and I don't really have that many environments. I also have good news and bad news. The good news: you're actually going to get environments, and you're going to launch a stack. The bad news: I broke the floating IPs. So when you get the stack up and running, it returns you the web server IP address, the floating IP, and it's not going to be reachable. That's the bad news. The environment we're going to spin up is relatively simple: a two-tier application consisting of a web tier and a database tier. For the web tier there are two VMs, each just running Apache, fronted by a Neutron load balancer, and that's where the floating IP was supposed to be attached. Well, it is attached.
It's just not accessible over port 80. There's a Ceilometer alarm that gets created for that group. The web servers connect to the databases, and there are also two of those: two VMs, also behind a load balancer. Whenever one of them fails, the other is still accessible, because the connection happens over the virtual IP of the load balancer. After we start the environments, because it's going to take a little bit of time to provision all the resources, I'll talk about the problems I personally had and what I personally don't like; most of that, well, all of that, is my personal opinion. But let's get to it. I have 35 environments, and there are a lot more than 35 people here, so we're going to try to join into groups of four or five people, maybe. I'll do mine on the screen as a demo, but some of you can actually get your own to play with. If you open your web browser and go to that URL, you'll get a page that asks for a name and email. You can enter whatever; it's really just a unique identifier so we can give you a unique environment. Then the page returns the information on how to log into Horizon and where the Heat template is. Open the Heat template; I'll go through it a little bit and talk about what's happening at every stage, and then we can launch it. For those of you who don't get an environment after they run out, you can still access the template, so you can follow along and see what's happening in it. So heat plus a number is the login, and "paris" is the password; it's login slash password. Okay, so for this particular demonstration we're using two types of images: an image called Apache and an image called MariaDB. We're running that on Fedora, and Fedora 20 doesn't ship MySQL out of the box anymore, so it's MariaDB.
You can install MySQL if you want, but MariaDB is what you get out of the box from the repository. To save some internet, we created those images not just with plain Fedora; we already did the yum install for MariaDB and the yum install for Apache. I'll show you exactly where in the template you would add those lines, but just imagine if all of you started doing yum installs; I mean, the internet is finite, maybe. When you look at this template, it starts with the description, and then there's the other really important part that you should never forget, because without it the template actually won't spin up: the template format version. That's something you always have to specify. There are three input parameters here, because even though we're not installing the MariaDB server itself, we're still creating databases there; we're simulating the deployment of an actual application. Here you can specify the database name to create, the database user, and the password. When you specify input parameters, you can give them default values, and you can also specify allowed patterns, so regular expressions to validate those input parameters. Then there's going to be a really, really long part, which is resources: the definition of all the different resources. The first two are things that allow Heat to talk to your instances; when you start the instances and go to the log, you'll see that it sets up credentials and SSH keys. The important part here is the web server group. I mentioned that for autoscaling you have to specify a group, which is the group of resources you want to apply the scaling policy to. So here we're going to have a group of resources with minimum size two and maximum size two, because we always want to have exactly two.
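The min-two/max-two group just described might look roughly like this in the CFN-style syntax (resource and configuration names are illustrative):

```yaml
Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AvailabilityZones: {'Fn::GetAZs': ''}
      LaunchConfigurationName: {Ref: LaunchConfig}  # defined as a separate resource
      MinSize: '2'   # never fewer than two web servers...
      MaxSize: '2'   # ...and never more, since we want HA here, not scaling
```

For performance scaling you would widen the gap between `MinSize` and `MaxSize` and let the policy grow or shrink the group.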
You can, again, depending on the use case you're trying to implement, have different minimums and maximums; when people use this to scale for performance, they typically do. The next one is the scale-up policy, the definition of the policy that's going to be applied. And then the "instance down" resource is basically a Ceilometer alarm description: what alarm to create in Ceilometer and what to do. If you look at the web server group, it has this property called launch configuration name, so we specify the configuration we want and then just refer to it. The configuration is specified here, and what you can see is that there's metadata associated with it, and then the properties. The configuration creates virtual machines using the image ID of the Apache image and the instance type. So this configuration basically makes up the web servers. This part here, if you look at it, is just a bash script, and what it really does is create the MySQL database. Remember I told you we already pre-installed MariaDB on the images, but if you wanted to make that happen here, you could just add a line saying yum install, something like that. Again, we're using the CFN tools to do all of this. One of the examples you can find for Heat templates, with TripleO, uses Puppet to do things, which is fine; Puppet obviously has a lot more features than the CFN tools. The problem is that when you create the instances, you create them in private networks.
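As an aside, the launch configuration and user-data script pattern described above might be sketched like this (the image name, flavor, and script body are illustrative):

```yaml
  LaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: Apache        # pre-built image, package already yum-installed
      InstanceType: m1.small
      UserData:
        Fn::Base64:
          Fn::Join:
          - ''
          - - "#!/bin/bash -v\n"
            # on a plain Fedora 20 image you would install the package here:
            # - "yum -y install httpd\n"
            - "systemctl start httpd.service\n"
```

The user-data script is just bash run at boot, so any extra provisioning line, like the yum install mentioned above, goes into that list.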
So if you're running Puppet in server-agent mode, where you have a Puppet server and multiple agents that have to contact the server for instructions, for the Puppet manifests, they all have to be on the same network, or your Puppet server needs to be somewhere where it can reach the agents, or the agents can reach it. Since we're creating virtual machines in a private network, what you'd have to do is pretty much spin up another VM that is the Puppet server. If you look at example Heat templates, you'll find ones that create the Puppet server first, and then each of the images just has a Puppet agent that talks to that server; the script that would be executed here is basically configuring the Puppet agent with where to find the server. Okay. After we have the two web server virtual machines, what happens here is we want to front them with the load balancer. These three resources are related to the load balancer: the first one creates the pool, just a group, and the last one, the floating IP, is the one assigning the floating IP address to the virtual IP of the load balancer. I said I'd give you my opinion on some things, and one of the things I don't like is that in a lot of places, when you use Heat templates, you have to plug in the network IDs. Thankfully something changed in Icehouse: before Icehouse, when you created an instance, you always had to use IDs, you couldn't use names. And if you try to start a template in a tenant that has more than one network, it will just tell you, no, I can't, you have to specify the network. That's one of the problems with Heat: when you use it, you really have to know everything about how the OpenStack deployment is configured, what the networks are, and so on. So it doesn't really provide ease of use for the end user, because end users ultimately don't know what the networks are or where they should get this ID.
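The load-balancer resources can be sketched with the Icehouse-era Neutron LBaaS resource types (property values are illustrative, the floating-IP resource is omitted, and the subnet-ID placeholder is exactly the kind of thing the speaker complains about having to plug in):

```yaml
  MonitorHTTP:
    Type: OS::Neutron::HealthMonitor
    Properties: {type: HTTP, delay: 5, max_retries: 3, timeout: 5}
  PoolHTTP:
    Type: OS::Neutron::Pool
    Properties:
      protocol: HTTP
      subnet_id: '<your-subnet-id>'   # you have to know and paste this ID
      lb_method: ROUND_ROBIN
      monitors: [{Ref: MonitorHTTP}]
      vip: {protocol_port: 80}        # the virtual IP members sit behind
  LoadBalancer:
    Type: OS::Neutron::LoadBalancer
    Properties:
      pool_id: {Ref: PoolHTTP}
      protocol_port: 80
```

The database tier described later is the same pattern with `protocol: TCP` and port 3306.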
That's where a lot of other application components come in that take care of those things. Okay, the second part of the template is really similar to the web servers, but it does the same things for the database servers. It creates a database group and launches two VMs; it does a little bit more, because it creates the databases, configures them to be accessible, and grants permissions to a user. Then we do the same thing with a load balancer, just on a different port: we configure a load balancer for TCP 3306. And at the end, it provides as output the HTTP address, so the URL of the website, which, as I said, isn't actually going to be reachable today. So let me show you; for those of you who didn't get access to play around with it, I'll show you here how that looks. Or look at your neighbor's monitor; if your neighbor has Horizon open, just nudge him. So we'll launch the stack, and what we do here is just plug in this URL. Another thing with Heat: Heat is not really a repository for templates, so it kind of assumes you store templates somewhere, or have access to them. You can either use a URL, if it's a direct URL, or upload a file, or provide direct input, where you just type it in, which is good for debugging small ones, but you probably still want somewhere to store them. So you plug in the URL and specify the stack name. This stack name is actually important, because a Ceilometer alarm gets created, and the stack name is something that gets referenced in metadata as well. You can specify any name, but it's the connection between several components. I'm just going to leave the password and user name at their defaults and do a launch. It's going to take some time to start. One other thing, again: we are using version Icehouse here.
There's been a lot of work done in Juno, and I believe the next talk, at 12:30, actually touches on that, as well as on permissions. If you know how OpenStack works, to touch any API server you have to be authenticated, and for the server to continue operations it has to verify and validate your authentication token. With Heat it's interesting, because Heat tries to execute a lot of things on behalf of the user, and this kind of use case was the driving idea behind a feature in Keystone called trusts. If you look at talks about Keystone trusts, one of the main use cases they mention is actually Heat. Why am I talking about this? Because if you look, if you have access to Horizon: I am an admin, because it's my user, but even for your users I actually had to give admin permissions, because otherwise Heat wouldn't create some of the resources. So this is, again, something to know about: some of the resources are totally fine, any user can create them, but not all of them. So this is the pretty thing that got created. Raise your hands if you actually got access to Horizon and managed to get the same result. We do have pretty powerful servers under this, but it's a limited number, so creating, spawning the VMs and then applying things takes a while, but it should finish. Okay. One thing I did try was creating 140 instances; it took a while, but it actually finished all of them in the end, which was surprising. So this is the whole monster, and what you can see is that a lot of resources are actually tied together. Since only a few people raised their hands: you can try again, go to this URL, plug in whatever, and get an environment, because it doesn't seem that a lot of people actually tried. You can see here all the resources; you can also see the list of the resources and their status here. And it's slow; I'm not going to wait for that.
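On the trusts point: in Icehouse-era Heat, this behaviour is controlled in the service configuration. A sketch, assuming the option names from that release (verify against your version):

```ini
# /etc/heat/heat.conf
[DEFAULT]
# Act on the user's behalf through a Keystone trust instead of
# storing the user's credentials with the stack
deferred_auth_method = trusts
# Roles the user delegates to Heat through the trust
trusts_delegated_roles = heat_stack_owner
```

With trusts enabled, users need the delegated role rather than full admin for deferred operations, which addresses the admin-permissions workaround mentioned above.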
So when it's done creating, what you'll see here is the output, the HTTP URL with the floating IP. It is pretty slow; somebody ate all the internet. Yay. Anyway, we're pretty much almost done. This is all actually going to stay up today, so don't trash it too much, okay? You can come back later and try it, and we'll see what happens. You know where to get the environments, if any are free, and where to get the templates, so just come back later. Does anybody have any questions? There was a question about the alarm: what happens with that alarm is that it actually waits for the count, and it's not really working correctly. The other thing about the CFN tools that kind of annoys me is that they always try to look for EC2 metadata, and that's not something you always have, or need to have, because you can use the Nova API, which provides metadata as well. So I actually had to hack cloud-init a couple of times. Well, not hack: when you configure it, it allows you to deselect EC2. The problem is, if it tries to connect to EC2 and EC2 isn't there, you sit there and wait five minutes for the instance to boot. Right, it's exactly the same thing. Yes. Yeah, so there is a config file where you can actually modify a lot of values, including what to do about, for example, the hostname: one of the things cloud-init does is change the hostname based on the VM name, and you can turn that off. What else did I use? Static IPs; you can set those too. So there's a config file, and it depends on the distribution; look under /etc for something related to cloud or cfn. There's going to be a config file there where you can set the major things. There's not that much documentation about it, so again, you kind of have to look at the code.
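The cloud-init tweaks mentioned in the answer can be sketched as a drop-in config like this (the file name is illustrative; the option names are standard cloud-init ones):

```yaml
# /etc/cloud/cloud.cfg.d/99-openstack.cfg
# Probe only the OpenStack metadata service, so boot doesn't hang
# for minutes waiting on an EC2 endpoint that isn't there
datasource_list: [ OpenStack, None ]

# Stop cloud-init from rewriting the hostname from the VM name
preserve_hostname: true
```

Distributions may ship this file under a different name, so checking the existing contents of /etc/cloud before adding overrides is worthwhile.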
[Audience question] What happens if the boot process takes a few hours? — It depends on what you use; if you use something like the CFN tools or cloud-init, [inaudible]. [The remainder of the audience Q&A is inaudible.] That's it; thank you.