Okay, let's get started. Good afternoon everyone. Welcome. Can you all hear me out in the back? Everyone can hear me? Good. Thank you. It's nice to see everyone here. The keen observers among you will notice that Steve looks a bit different this time. He's lost all his hair, adopted a New Zealand accent and changed his name. We're trying not to judge. Steve Hardy unfortunately couldn't make it to join us today, so Steve Baker has kindly agreed to sit in for him. I'm not the Steve you're looking for. There are a lot of Steves on this project and we treat them as interchangeable. How many people here saw our talk in Atlanta? Can I get a quick show of hands? Okay, that's cool. A lot of new people, so that's exciting too. Steve Hardy and I did a talk in Atlanta called An Introduction to Heat. We've done a bunch of these talks in a bunch of places, and it seems like there are now a lot of people giving introduction-to-Heat talks. People I've never met are out there teaching people how to use Heat, so that's really exciting for the project and it's really cool. But what's needed now is to go into a little bit more depth. A lot of people have started with Heat and they want to know: okay, how do I deploy really complex applications? So the idea today is that we're going to go through some of the building blocks you can use to deploy those sorts of applications. This is an intermediate-level talk; we're calling it Beyond the Basics. And I know many of you are asking: so, Zane, is there going to be an advanced-level talk in Vancouver with advanced-level Heat templates? No. The people in this room, you have to go out and invent the advanced-level templates.
So what I'm going to talk about today is some building blocks that you can use, and they can combine together in very complex ways, many of which haven't been worked out yet; the full range of things that Heat can do has not been explored. So I'm hoping the people in this room are going to go out there afterwards and explore them, and maybe someone here will be giving an advanced-level talk in Vancouver. I'm going to go through some of these top-level concepts, and then Steve is going to dig down into a bit more detail with some more concrete examples. We don't have a live demo prepared for you, but Steve Hardy has put together some screencasts demonstrating this, so it's not live but it's the next best thing. Steve will be going through those for you. So this is the last slide that I showed in my talk in Atlanta; hopefully you're all familiar with this now. The basic unit of work in Heat is a stack. Stacks are comprised of resources, which can have various different types, you know, servers, volumes, and they're connected together with various relationships. One type of resource that you can have is a nested stack. A resource can be anything in OpenStack that has an ID, a noun, a thing that you can reference, and a stack is one of those things, so that's one example of a resource you can use. And so the first technique I want to talk about is decomposing your application into a tree of nested stacks. You're always going to have a top-level stack, the template that you pass to create-stack, but you don't have to put all of your resources in one template. You can split them out and logically organise them, so that stuff that belongs together, that tends to have connections between it, is all grouped together in one template.
You can, you know, reuse templates. Say you have a template that defines a server with a database installation; you might have a bunch of applications that want to use a database. You can write yourself one template and then include it like this in any number of application stacks or, you know, trees of stacks. So that's the first starting point when you go to deploy a more complex application: don't think you have to put everything in one template. Split it up logically. The python-heatclient can do some clever stuff to help you with uploading all of those at the same time. Quite often it will just pick up all the files for you; you won't even have to do anything special. So there are a couple of different ways to implement a nested stack. I've forgotten to change one of these slides. The first way is with the AWS::CloudFormation::Stack resource. That's just a resource type where you give it a URL to a template, and that creates the nested stack. Pretty soon that's going to be augmented by a native OS::Heat::Stack resource, and we're working on making that cross-region as well, so you'll be able to create a stack in a different region. Bartosz here is hard at work on that, I gather. Hopefully that will land soon. The second way to do it is just to specify, as the type of the resource, instead of one of the native resource types, the URL of your template. And python-heatclient will pick up that URL automatically and include it in the data that it posts when you create the stack. So we're now calling these template resources. When we originally developed this feature we called them provider resources, and that was kind of confusing for everyone; hopefully this will be better. As you can see, I forgot to change the name of the provider stack there, so you may see both terminologies around.
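A minimal sketch of those two approaches side by side (the template URL, file name, and the `db_name` parameter here are placeholders for illustration, not from the demo):

```yaml
heat_template_version: 2013-05-23
resources:
  # Way 1: the explicit nested-stack resource, pointing at a template URL
  db_cfn:
    type: AWS::CloudFormation::Stack
    properties:
      TemplateURL: http://example.com/templates/database.yaml
  # Way 2: a template resource -- the template's path is used as the type,
  # and python-heatclient picks the file up and includes it in the create call
  db_native:
    type: database.yaml
    properties:
      db_name: myapp   # becomes a parameter of database.yaml
```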
And how a template resource works is basically, you know, every resource has properties, data that goes into it, and also attributes that you can get out of it. And by happy coincidence, a stack template also has parameters for data going in and outputs for data coming out. So we just reuse the same template format to implement the provider resource. You implement a template; whatever parameters you define in that template become properties of the resource, and whatever outputs you define in the template become attributes of the resource, which you can retrieve in the main stack using get_attr. So it's a fairly transparent way to extend the types of resources you can have in Heat. And, as Steve's going to talk about later, you can use the environment to map resource types. So you can map a built-in resource type to a provider template resource type. And so when you deploy to different clouds, which may have different built-in resource types, you can say: on this cloud I want a Trove database, because it's a big public cloud and I want them to manage my database. But on my private testing cloud I don't want to run Trove, so I'm going to provide a stack that contains a Nova server running MySQL, and that's going to be my resource template. And so you can set that up in the environment; you have one environment for each cloud that you're running on. So the next thing I want to talk about is software deployments, which Steve actually wrote, and he'll be going into more detail about exactly how you use this. If you were involved with Heat in the early days, you'll remember that, or if you've used CloudFormation, same thing: anything that you wanted to go on a server had to be defined in the user data for the server.
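The Trove-versus-MySQL-server idea can be sketched as an environment file like this (the file names are made up for illustration; `OS::Trove::Instance` is Heat's built-in Trove database type):

```yaml
# env-private-cloud.yaml -- passed with -e when creating the stack.
# On the public cloud you'd simply omit this mapping and get real Trove.
resource_registry:
  # On my private testing cloud, implement the database type with my own
  # template: a Nova server running MySQL.
  "OS::Trove::Instance": mysql_server.yaml
```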
And it was messy. If you wanted to put the same configuration, or similar configuration, on a whole bunch of servers, you had to go copy and paste that whole big chunk of user data. So software deployments are the answer to this. You have a software config resource, which actually has an API in Heat, so you can go and create it through the API, but mostly you're probably going to use it through a template. That's just a configuration which is parameterised, so you can pass parameters into it. And then you have a software deployment, which installs that configuration on a particular server with a given set of parameters. So you can have one configuration that you deploy on multiple servers with different parameters each time, and that makes your software deployments much more modular. The configuration can be in a separate file, which again gets automatically included by python-heatclient. So you might notice a theme emerging here: don't throw everything in one big bucket; break it up into smaller things and keep them organised together. The same stuff you would do with software, right? Because this is treating your infrastructure the same way you would treat software. That's the goal of Heat, and that's what these features are allowing you to do more easily. So the next thing is scaling groups, of which there are a bunch now in Heat. We do now have a native auto scaling group, OS::Heat::AutoScalingGroup I believe; that's probably the one you usually want to use. There's also OS::Heat::ResourceGroup if you're not doing auto scaling but you just have a fixed size, that sort of thing. And unlike AWS, say, where auto scaling is part of EC2 and so it's very much tied to a server, we don't have something equivalent in OpenStack, so we built it in Heat, and so it's tied to Heat.
So a scaling group in Heat is actually a nested stack. Not that you can address it as that exactly, but internally it's implemented as a nested stack, and the individual servers in the scaling group are resources. And one of the cool things with the native OS::Heat::AutoScalingGroup is that you don't have to scale just servers; you can scale any type of resource in an auto scaling group. And hopefully the question has now occurred to you: if you can scale any type of resource, and there are template resources, can you scale template resources? And the answer is yes. So you can have a scaling group of stacks. Now you're not limited to only scaling up one server; you can scale up a whole template stack. I guess I've forgotten to change the name again. So for example, in your template, your scaled unit, you might have a server. You might have a volume, so you can scale up servers with Cinder volumes attached, which is not something you can do with EC2. You might have a load balancer member, so you can attach that server to a load balancer which is defined outside of the scaling group, and in that way load balancers get handled automatically. You might have a software deployment, so you might have a software config, maybe in your main stack, and then deploy it on each server in your template in the scaled group. So hopefully you can see how you can start to combine some of these things in more complex ways. They're fairly basic features, but they're quite orthogonal, and you can start putting them together, combining them and building them up to create much more complex applications, which are hopefully still maintainable because you've split everything out into its individual components and grouped them appropriately. So I think Steve's going to take over and give you some examples and go into a bit more depth. Okay, thanks, Zane. So I'm going to show you a template resource example which demonstrates a few things.
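A sketch of a scaling group whose scaled unit is a whole template rather than a bare server (the file name and property names here are placeholders; `scaled_unit.yaml` would contain the server, the volume, the load balancer member, and so on):

```yaml
heat_template_version: 2013-05-23
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 5
      resource:
        # the scaled unit is a template resource, so each scale-up event
        # creates a server plus everything that hangs off it
        type: scaled_unit.yaml
        properties:
          image: my-image       # placeholder, passed through as a parameter
          volume_size: 10
```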
One, it demonstrates the use of the environment to influence the context that your stack is brought up inside. I'm also going to show you, you know, the template resource itself. As Zane said in the previous slide, we can scale anything; we can scale a stack of things. But really the most common use case is going to be that the thing you want to scale is a server, but a lot of things can hang off a server. So it would be nice to be able to, say, write a template which is your server plus, you know, the Neutron ports plus the Cinder volumes plus the configuration resources required to bring that server up. So it's a server with bits hanging off, and the very simple example we're going to show today is a server with a volume, so that's the bit. Also, this first example, when I do the demo, will show the command line tools you have available to manage the resulting resources and stacks that a single stack can bring up. If you're decomposing everything into lots of little nested stacks, then you're going to need the tools to actually visualise what's happening under the hood. So first, the environment. This is a separate file you pass to Heat when you create a stack. The example here has two sections. One is parameters: when you create a stack you're specifying parameters on the command line, and that can get tedious, or you might want to have different parameters for different scenarios, so you can actually specify the parameters in the environment file. The other section of the environment is the resource registry, where you can specify completely new resource types, and those types can be implemented as stack templates. But that's not all you can do. You can also specify types that are built in to Heat.
You're actually overriding a built-in Heat type with your own implementation, which can be useful in some circumstances where you have environments that are slightly different and you want to customise the behaviour of a particular resource for whatever purpose. So then, in the root-level stack that we launch, you can see the type there is the "my server with volume" type. This is an example that uses an environment, but that type could also be the file name server_with_volume.yaml. So you have a choice there when you're using these template resources: whether to define it inline in your template, reference it as a file, or break it out into a separate environment and define a brand new type. And you can see that we're passing in some properties: key name, image ID, volume size. So let's just go to the demo. Is that looking a bit squished? So we've got the three files there: the environment, the actual stack yaml itself, and then the template resource, the server with volume. We're going to look at each one; we've pretty much seen them already. Sorry? I can try. Is that better? So yeah, we've already seen this in those slides. I'll just skip forward a little bit. So here is our template resource. We're declaring the parameters, which will map back to the properties of the resource when we invoke it. Now, another feature that we're demonstrating here is this custom constraint, nova.keypair, and another custom constraint, glance.image. This adds some validation to your stack. In this case it will check that there is actually a key pair with the name that you pass in, and if it doesn't exist, then it will fail early in the create process with a validation error. Likewise, it'll check that that Glance image actually exists before the stack will launch. So in this way you're getting validation feedback early, rather than finding out later when your stack fails because some resource failed because something was missing.
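The parameters section being described might look roughly like this (a sketch; the parameter names follow the demo's key name and image ID, and `custom_constraint` is the HOT syntax for these checks):

```yaml
parameters:
  key_name:
    type: string
    description: name of an existing Nova key pair
    constraints:
      - custom_constraint: nova.keypair   # validated against Nova at create time
  image_id:
    type: string
    description: name or ID of a Glance image
    constraints:
      - custom_constraint: glance.image   # validated against Glance at create time
```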
So that's a nice thing to add to your parameters to improve your templates. Then further down we've got the resources themselves: a volume, a volume attachment, and an instance. Let's just pause that. And then finally, at the bottom, we've got the outputs. Now, these map back to the resource attributes which can be used elsewhere in your parent template. So now we'll look at the root template, and it's very boring, because there is a single resource with the type "my server with volume", and we're passing down those properties that will get matched to the parameters. And then finally we're going to print the IP address of the resulting server that gets created under the hood. So let's create the stack. We're going to specify the top-level stack with -f and the environment with -e, and we'll see what happens. So now we're going to demonstrate the new command line tools that let us visualise what's actually going on. We'll do a resource-list of the resources in the stack. It's one resource; that's not very interesting. But we can actually expand the resource list to include all of the resources, including the ones that are created by any nested stacks inside it. We can specify a nested depth of how deep we're going to go, and now we can see we have the server and the volume and the attachment. So that's useful, and this is where you can explore just how it is stacks all the way down. This is new in Juno, by the way, so don't get excited if you're still on Icehouse. But it's coming soon. Yeah, upgrade. Also, with stack-list, there's now a show-nested option. Normally with stack-list you see a list of all the stacks that the tenant has created, that you've got permissions to see. But if those stacks themselves create nested stacks, it's good to have some visibility into what's going on there.
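The commands being walked through are roughly these (a sketch against the Juno-era heat CLI; the stack and file names are this demo's):

```shell
heat stack-create example1 -f example1.yaml -e env.yaml
heat resource-list example1            # only the one top-level resource
heat resource-list -n 2 example1       # include resources of nested stacks
heat stack-list --show-nested          # include the hidden nested stacks
```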
So if you include show-nested, then you'll get a bigger list, including all the hidden nested stacks that you didn't explicitly create yourself. The next command we're going to show is a resource-show of our instance resource in that one top-level stack. Here the interesting thing is, in that list of links, there's a URL which is tagged "nested". And if we look at the URLs, you see that top one, which is itself a REST URL to a particular resource inside a stack; the stack's called example1, the resource is called instance. But that nested URL is actually a stack URL in its own right. So we can see some visibility there that this resource is, in fact, itself a stack. And if you see that physical resource ID on the next line down there: every resource has a physical resource ID, and it maps to a different concept depending on the resource. With Nova, it's a Nova server UUID. But if it's a template resource, the physical resource ID is the stack ID. So you can perform stack operations on that nested stack, which is another way of exploring what's been created. And then finally, in the root stack, we can see that IP address from the server that was two layers deep has propagated all the way back up to an output on the root stack. So there's just a summary of those commands that you can have a play with at your leisure. So, I did a really detailed talk in Atlanta about software config and software deployments. This one's going to be a lot more stripped back, just the real basics, to demonstrate the concept. We have two new resource types in Heat since Icehouse. One is a software config resource, which is pretty much just a store of config in whatever format you're using to do configuration on your servers. So first, when you define your server, all you need to do to enable this is specify user_data_format: SOFTWARE_CONFIG.
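In template form, enabling this on a server looks like the following (a sketch; the image, flavor and key names are placeholders, and the image would need the agent tooling described later):

```yaml
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: fedora-software-config   # placeholder: image with the agent tooling
      flavor: m1.small
      key_name: my_key
      # this is the switch that lets deployment resources deliver
      # configuration to the server after boot
      user_data_format: SOFTWARE_CONFIG
```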
And that changes the initial boot package under the hood that gets delivered to the server, and from that point on, the software deployment tooling establishes the link between the server and Heat for any further configuration and lifecycle changes and whatnot. So a config resource is a basic store of configuration. It also lets you define a schema for the inputs and outputs that the configuration expects. In this case, we've got an input called message, and the config is a shell script. If you see that group property, it says script; that says which tool we're using for configuration. I'll describe what other tools are available later on. Then we have a deployment resource. The deployment resource essentially lets you associate a config with a server. It also allows you to override the input values, whose defaults were defined in the config resource. So in this way you can have a single config resource which is deployed to multiple servers, just by creating multiple deployment resources, or a single config which is deployed to one server multiple times with different input values. Another reason to model a software deployment as its own resource is that it has its own lifecycle. It will remain in progress until something has happened for it to go into a complete state, and in this case, the thing that it's waiting for is a signal from the server saying "I have finished my configuration". So that signal transport, HEAT_SIGNAL: all that says is that the form of signal the server is using to contact Heat is the Heat REST API. There are other options, and we'll be adding more options during the next cycle as well. So I'll just... Whoops, that's not a video. So in this example, we have another simple environment, and we have an actual shell script which we're going to deploy to our server. Here's the environment. So the image name there you see is...
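Putting the two resource types together looks roughly like this (a sketch, not the demo's exact files; the inline script and input values are placeholders, and the `server` resource is assumed to be defined elsewhere in the same template with `user_data_format: SOFTWARE_CONFIG`):

```yaml
resources:
  config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script                # run via the shell-script hook
      inputs:
        - name: message
          default: Hello           # default, overridable per deployment
      config: |
        #!/bin/sh
        echo "message: $message"
  deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: config }
      server: { get_resource: server }
      input_values:
        message: Hello from this deployment
      # the server signals completion back via the Heat REST API
      signal_transport: HEAT_SIGNAL
```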
It's CirrOS, but it's actually a modified CirrOS which has some really basic tooling, written in bash, that does the configuration. Sorry. So the default tooling actually pulls in some of the tooling from the TripleO project. There's an agent called os-collect-config, and there's a series of hooks, one hook for each configuration tool that's supported, that end up on the image. So that... Hang on, I'll just pop back. This is probably a reasonable time to describe: at the moment, you do need some tooling on the image of the server that you're deploying. That sets up the agent which can poll for changes throughout the lifecycle of the server. We've been using diskimage-builder to build these custom images. It is actually possible to specify the appropriate boot config to bring up this tooling on a pristine image. Or, in this case, there's just an alternative, really simple agent for a really simple OS like CirrOS. So there's a reasonable amount of flexibility in being able to set up the image to run this. So we'll just carry on here. So the script: it's a really simple script, but see here, it expects there to be a shell variable called message. And now the template itself: we have the config resource, an input named message, specifying script as the group, which implies the configuration tool being used. It specifies the example script in a get_file call, so that gets included in the create package by the heat command line client. In the deployment resource, we're overriding the input value to be something else. So I'll just skip forward here. One thing worth mentioning: the software_config_transport. This data needs to go in two directions when we're using deployment resources. The actual configuration metadata needs to get from Heat to the server, and the server needs to signal back to Heat when configuration is complete. So this property, software_config_transport, specifies the method by which data gets to the server.
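A minimal sketch of the kind of script the "script" hook runs: the deployment's inputs arrive as environment variables, so the script just reads `$message` (the default value and the output file path here are made up for illustration):

```shell
# Deployment inputs are delivered as environment variables by the hook;
# use a fallback so the sketch runs standalone too.
message="${message:-Hello from Heat}"
# Write the result somewhere we can inspect, like the demo does on the server.
echo "deployment message: ${message}" | tee /tmp/deployment_output
```

Running a command on the server afterwards (as in the screencast) would then show the configured message in the output file.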
So let's create the stack. This will fail for reasons I will explain soon. Example two already exists: Heat actually enforces unique stack names within a tenant or project. So we'll give it a different name, and it's starting. We'll do a resource-list. Generally, when you're configuring something, the deployment resource will stay in an in-progress state until the configuration actually happens. Here the configuration has happened, but because this is CirrOS and it's a trivial script, it happened basically immediately; that's why it's already in the create complete state. But if we do a stack-show, we've got the IP address, and if we then run a command, we can see that yes, the configuration has actually been done. Now, one thing which... I'll just go back to the template. One very powerful thing which you can do with deployment resources, which isn't demonstrated here, is that you can specify outputs. When the server signals back to Heat that its configuration is done, the signal can include arbitrary key values which represent the outputs of that configuration. And those get mapped back to attributes on the software deployment resource, which can be used elsewhere in your stack, whether as a dependency into new servers, a dependency into different deployment resources, or as outputs of your stack. So in this way, you can set up whatever dependency tree is required to bring up your stack when configuration is happening on multiple servers. It can ping-pong back and forth, and Heat will manage all the dependencies for you. So yeah, that's all we had to demonstrate today. I think we're doing pretty well for time; we've got time for about ten minutes of questions. Yep, if you can, can you reach the microphone? It's just in the aisle there. Sorry. All these summit presentations are recorded on video, which is great for people who couldn't make it to the summit, so if you can ask your questions on the microphone, then they'll all know what they are. Okay, hello.
I have a question regarding TripleO, which you mentioned. Are these packages already supported by OS providers, or what is the status of development for this package? Because we need to install it on our images if we want to use software deployments. Yeah, so are you talking about packaging at the operating system level? Well, if we are, for example, creating a custom image based on Red Hat, do we need to install this package alone, or is it available as an RPM, for example? So I'll talk about the different scenarios. One is the diskimage-builder scenario. diskimage-builder has this concept of elements; an element can contribute software configuration packages during the image building process. It's not limited to only installing from packages; it can install from source or from releases from other environments. So the diskimage-builder elements will just do the right thing. It's generally source-based installs, because that's sort of what the TripleO project does. Having said that, on Fedora the whole tooling stack is packaged fairly widely. So you can create a new image, or at boot time install those packages. There are a couple of extra files that need to be installed at boot time to make it all work together, but they're fairly small and trivial. And on Ubuntu, the version of Ubuntu which isn't out yet has packaged the TripleO tooling. So if you're not using diskimage-builder, then there will be another fairly easy way of setting that up. Okay, and a second question, maybe not for this presentation, but about Heat update. Yes. If we want to update, for example, an existing stack, right? We need to dump the current stack and add our additional resources, correct? Is it possible to just create a new template and add it to the existing one? So, for example, we don't have to do a dump; we just create, for example, one additional server and we want to include it in our whole configuration.
So yes, you can do a stack update where you just add more resources into your template and then feed the new template to Heat as a stack update, and it should add them all in the right order. Yes, but my question is whether we need the whole template which is already in the stack, or whether we can just create a single new resource which will just be added to the existing stack during the update. So I think what you're asking is: instead of creating a new template with the resource in it, can you just say "add me a new resource"? Yes. No. Okay. And the reason is, what you want to do is have your template in Git or something like that, and when you change something, you add that new resource into your existing template and then send that new template, and then you have a versioned history of your infrastructure in a similar way that you would have for your code. So what you're asking is the infrastructure equivalent of logging into the REPL and just typing stuff, but what you really want to do is edit your source code and store that in Git, right? Having said that, if you're in a scaling group scenario where you have a certain number of the same thing, you could actually expose the number of things that you have as a parameter. So when you start, there might be only five, and say you want ten; then you just do a stack update with a new parameter value of ten, and it will automatically scale up. So it's similar to your scenario, but yeah. Yes, and the last question. Sorry. Regarding nested stacks: if we, for example, create a network which is created in a provider stack, how can we use, for example, the ID of this network in the base template? Can we do this? Yes. For example, using outputs?
Yes, so you can use the output from the provider stack: you do get_attr on the template resource, and that will get the output from the nested stack. So the nested template will do get_resource to get the ID of the network and expose it as an output, and then in the enclosing template you do get_attr to get that output, and then you can feed it into other things or use it wherever. Thank you. But just in terms of the network resources, we're pretty good at being able to specify either the name of the network or the UUID. So if you have a convention where you're going to create a network with a particular name and there's only going to be one at a time, then you can just specify it by name rather than UUID. You can do both. Hi there. So if I have a scaled resource group of servers and I want to specify different input parameters to some of those, how would I do that? You wouldn't; you'd have to split that up into separate scaling groups. A common scenario is that you need to elect a master in a cluster or whatnot; currently you'd have to model that as a separate resource and then have the rest of the cluster as a scaling group. Yeah. Yeah, I'll come and talk to you later. Thank you. Cool. We've got three more minutes if there are any more questions. Regarding your scaling groups, have you thought about a scenario where you can actually scale out a region, not just an application? You mean scale across regions? Not scale across: use TripleO and an auto scaling group to actually instantiate a region. Oh yeah. TripleO uses all of Heat's tools for managing a full OpenStack. So could we use a nested stack to say, when I reach a particular threshold, scale out a region, a full OpenStack deployment? Yes. Okay, yep. Hi. I've got a question about application deployment. In one of my projects, we're currently actively looking at Murano, the catalog, the application deployment thing. You know about that, right?
And I see now in Heat, and I knew about this before, that in user data you can pass scripts to deploy applications. Now, what do you think are the pros of using Heat versus using Murano? I know that Murano has the web interface tied in with that; does Heat have any kind of parameter-grabbing thing on the web? What are the pros and cons of using one versus the other? To be honest, I don't know enough about Murano to make an informed... Okay, no comments about that at all, right? I think Murano's goal is to be able to combine stuff where you don't necessarily know in advance how it's going to connect. Heat is a very concrete template format, right? In the template you specify "I want this", and you get that, ideally. Whereas Murano is like: okay, this thing exposes this interface, and this thing requires that interface, and if we stick them together, then it'll know how to connect them up. So with Murano you can have, say, a database interface, and say this application needs a database, but you can have several implementations of the database, and whichever one happens to be available in the catalog, it will say, okay, we can connect this to this and do that. Yeah, I know about that functionality. Thank you. So yeah, you probably want to talk to the Murano guys to get a better explanation than I just gave, but I think that's kind of the high-level idea. Maybe one last question. Probably a short question. There was another session, with Polina from Mirantis, before this session, and she mentioned that for a Heat deployment there is admin access required for some parts. Could you elaborate on what access permissions users need to deploy which type of resource? Because actually, I think many of us will deploy in public clouds where we don't have full admin access. So it was interesting. I think that's fixed now, right?
Yeah, it's fixed. You don't need to be an admin user to deploy these. When I mentioned the transports and the different transport options, that sort of feeds into that question. Currently, our ideal situation with Heat is: whenever you create a server or deployment resource that needs to do this communication, Heat creates a Keystone user under the hood in its own dedicated Keystone domain, and it uses those credentials for any API operations that need to happen. As an aside to that, we're going to be writing other transports that don't necessarily need those credentials, so that will make deployment situations a little bit easier in some scenarios. For example, we're using Swift temporary URLs, essentially treating it as a webhook with a secret URL. So if you didn't want to have to deal with any of these access situations, then you can just use the Swift transports for signalling and metadata, and it'll just work. Perfect, thanks. So historically you're correct: anything that needed to create a user in Keystone would not work unless you had admin. So I think the number one message is, when you deploy Heat, you should enable Keystone domain users, which I think might be the default in Juno. It's not the default in Icehouse, but I highly advise everyone to deploy with Keystone domain users enabled, and then you won't have that restriction. That assumes the existence of a Keystone v3 API. Yes, you do need v3, that's correct. OK, thanks very much everyone. Thanks everybody.