Good. Welcome, everybody. Good morning, and thank you very much for coming to our talk on workload lifecycle management with Heat. My name is Lance Haig. I'm a solutions architect with Mirantis, and this is Florin Stingaciu, a systems architect with Mirantis. We both work out of Germany. What this talk is about is that the OpenStack community has become really good at deploying clouds, or at least is steadily improving the process of creating them, and that in turn is making it easier for more and more users and customers to start running these clouds. Unfortunately, there's a lack of information on how one could use the default tool set that comes with OpenStack to use the cloud effectively, without having to retool or reskill people on all the different technologies and environments needed to deploy their applications. So the aim of this talk is to show you the value of the advanced ways you can use Heat with software deployments, to make your deployments and the lifecycle management of your applications much easier to handle. The agenda for today: what workloads are, as we see them, and the OpenStack deployment options we have; an overview of software deployments, so what software deployments are and how they work; and the framework we've created around Heat and software deployments to make things easier for people to use. Then we'll do a demo with a reverse proxy and a demo with Kubernetes and Kargo to show you the more complex side of it, and we'll finish with a wrap-up and some questions. So the first part: what is a workload? A workload is made up of a number of parts. Initially there's the infrastructure deployment, and then the software configuration deployment.
On top of that you need software configuration lifecycle management, and also infrastructure lifecycle management. This is a standard pattern we've seen across all technologies and environments; it's standard practice in most businesses that this has to happen, and we normally use a number of tools to make it happen. We think Heat itself can do this type of thing for us, without having to spin up anything extra. There are five different types of workload deployment one can use in OpenStack. As we all know, there's the all-manual deployment. This is where users go in and create all their resources themselves, having to link them: create networks, environments, volumes, instances. It doesn't get very complex, because you can't really spin up a cluster from the UI. There are some disadvantages to this: it's slow and error-prone, it's not repeatable, and there's no lifecycle management. If you want to increase the number of nodes, there's no easy option. There's no easy way to group your infrastructure into a set of things to make clusters; there's just no way of doing it. Although this is quite trivial to do, since you can really go in and configure the whole environment yourself, it's still a very popular way of doing things. We've noticed with the customers we've been dealing with that they still do this: they go through and create this manual infrastructure and think, okay, great, my application is now in the cloud and it's good. And we all know that's probably not the best way to do it. The second option is where you build your infrastructure with Heat, but then still do manual software configuration. A user creates the infrastructure with their Heat template.
They spin up some number of machines, then log in to each machine individually and start manually installing and configuring the software for the product or project they want to run. There are a number of advantages to this. The Heat template now serves as the recipe for the infrastructure deployment, so you get a repeatable process: whenever you run this Heat template, you will always get the same number of machines and the same infrastructure, which is really good; it's something you can rely on being there. Heat can also be leveraged to link all these resources together, so you've got a network, a router, and a disk that all get plugged in together, which makes for a really solid infrastructure. And infrastructure lifecycle management can be used: you can take your cluster and spin up two, three, four, five, six nodes, and Heat will just automatically create the empty machines for you. The unfortunate part, all the disadvantages if you will, is that every time you redeploy or scale, you have to manually log in to the scaled-out node, configure it, manage it, and do everything else by hand. This is time-consuming and not automatable. There's also no software configuration lifecycle management, so you can't automatically upgrade or change things within your software configuration. The next version is infrastructure with Heat and software configuration with cloud-init. We know cloud-init is a deployment method that runs at system boot; as OpenStack users, we know a lot of this can be done in cloud-init.
The advantages share a lot of commonalities with the previous slide: we still have the recipe, we still have the resources linked to each other, and infrastructure lifecycle management can be done. The best part is that we now have one-click redeployment: you can literally run one command or hit the Enter key, and voilà, you have a redeployment, with the software installed and configured by Heat via cloud-init, which is a really good way of doing things. The disadvantage is that there's still no software configuration lifecycle management. If you update anything in your cloud-init section, Heat will destroy and recreate the machine, so anything that was on there is gone. That's probably not something people want in their environment; it's no way to keep transient data or keep your system up and running. It deletes the whole lot and recreates it, and that's not something we want to happen. The next option is software configuration with cloud-init, but where we use cloud-init to install some configuration management tool, tool X: its libraries or at least its agents. Once deployed, these agents spin up, connect to your configuration management master, fetch the configuration, and run it. The extra advantage over the previous slide is that you get software lifecycle management. The disadvantage, unfortunately, is that you now have two places to manage code and configuration, so it becomes a question of: which of the two do we change to make this feature work for us?
And the last one in this group is where we build the infrastructure with Heat, which is really good, and do the software configuration with Heat software deployments. The way this works is that the software deployment part of Heat, which is a resource type in Heat, can now be used to manage the software configurations. So again we get the same commonalities: the recipe for the infrastructure, the ability to link all the resources together, easy infrastructure lifecycle management (you can spin up five, six, seven nodes, scale up and down easily), and easy redeployment, as we said before. Software lifecycle management happens as well. And there's a single point of control; this is the key thing. You've got one place you run everything from: you create, you update, you delete. These are all tasks you run from one pane of glass, if we want to call it that. And with that I'll hand over to Florin, who will take it on with software deployments. Thanks, Lance. So what are software deployments? Basically, a software deployment is a type of resource in Heat that applies a Heat software config resource to a particular instance. In order to work with software deployments, there's a number of agents that need to be available on the instance. These can be made available either by pre-packaging them in the image you're going to boot your instance from, or by using cloud-init to install them at boot. What they do is continuously poll Heat for software deployments that are assigned to that particular instance. So in short, Heat provides us with a configuration management engine that can be used to apply software configurations to these various instances. Let me give you a very simple web server example.
On the right-hand side of the screen, you have a software config resource in Heat, which is basically an encapsulation for a type of script. You can see in its config attribute that this is just a hello-world for installing an Apache web server. There's also a group attribute, which I'll get to in a minute; just keep it in mind. On the bottom, we have the actual software deployment definition. In here you can see there's a config attribute, which points to the HTTPD config we have above, and a server attribute, which is the actual instance this should be installed on. Then we have two more attributes, one of them being the signal transport: this is the method by which the agents running on the instance will notify Heat of the status of the software deployment. And then we have actions, of which there are four: create, update, suspend, and delete. This tells the software deployment at what stage of the stack's life it should actually run. For example, if you have a clustered application with a bunch of slaves, you can have a create script that runs when a slave joins the cluster, and a delete script for when it needs to de-register, or something like that. I also mentioned the group attribute in the software config earlier. That actually indicates what type of hook this should run, and the hook is just the interpreter the agents will use to run the script you put in the software config. These can be things like Ansible, Puppet, Salt, and in our case it was script, which is just bash, a bash script.
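A minimal sketch of the pattern described here, in HOT YAML. The resource types and properties (OS::Heat::SoftwareConfig, OS::Heat::SoftwareDeployment, group, config, server, signal_transport, actions) are standard Heat; the resource names and the reference to a web_server instance are illustrative:

```yaml
resources:
  install_httpd_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script        # run via the bash "script" hook
      config: |
        #!/bin/bash
        # hello-world example: install and start Apache
        yum install -y httpd
        systemctl enable --now httpd

  install_httpd_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: install_httpd_config }
      server: { get_resource: web_server }  # an OS::Nova::Server assumed elsewhere in the template
      signal_transport: HEAT_SIGNAL         # how the agents report status back to Heat
      actions: [CREATE]                     # only run when the stack is first created
```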
And the point I want to make here is that if you start using Heat as a configuration management tool, you don't necessarily have to learn a brand-new language like you would when learning Ansible or Salt or whatever. You can reuse your existing configuration management scripts to do the same things, except now you're also managing infrastructure with Heat. So this is roughly what the software deployment flow looks like; this is how things actually happen under the hood, and I'm going to reference the agents that are doing the work. os-collect-config, which lives on the instance, uses the software_config_transport attribute of the instance to poll Heat for software deployments. Upon detecting a new software deployment, os-refresh-config triggers heat-config, and heat-config determines the appropriate hook to execute the software configuration: the Ansible, Puppet, Chef, Salt, or whatever hook. Lastly, you have heat-config-notify, which uses the signal_transport attribute of the software deployment to deliver the status of the deployment back to Heat: whether it was successful or whether there was a problem; and this also includes outputs. I want to make a couple of distinctions compared to cloud-init. First, with cloud-init, as Lance already mentioned, you can't update; software deployments let you update. Second, if you've ever had to work with cloud-init and create interdependencies between two different instances that are both running cloud-init (for example, one has a package, and the other has to wait until that package on the first one comes up), you would have had to deal with things like wait conditions and wait condition handles, and actually wire them into your scripts. It's just very messy.
This is built very nicely into software deployments, because you can just use the depends_on attribute, and I'll show in one of my demos how easy it is to create interdependencies between different software configurations. And lastly, when you're using cloud-init, there's no way to pass information back into Heat. For example, if I have a yum install of a package as my cloud-init script, and somehow the instance doesn't have internet access and it fails, you would never know: Heat will come back and say, yeah, you know what, the stack was created successfully. With software deployments, you can actually use outputs to see the status, stdout, stderr, and also any custom outputs you might define. For example, if you have something that generates passwords and you want to feed them back into the outputs of Heat, that's very easily done with software deployments. So, putting it all together: how can you actually have infrastructure LCM and software configuration LCM with Heat? We came up with a framework that should encourage collaboration, should leverage version control, and should minimize the number of different systems involved; it should bring everything together into Heat. This framework took the form of a git repository; there's a link to it at the end, so don't worry about that. Let me talk a little bit about the contents of this git repository. We basically created a library: this is the lib folder, and in this folder we have a couple of different things. First of all, the basic building blocks for OpenStack infrastructure.
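Both distinctions can be sketched in one fragment: ordering via depends_on instead of wait conditions, and surfacing script results via stack outputs. The deployment and server names here are assumptions; deploy_stdout and deploy_status_code are standard attributes of OS::Heat::SoftwareDeployment:

```yaml
resources:
  app_deployment:
    type: OS::Heat::SoftwareDeployment
    depends_on: db_deployment   # run only after the database deployment has signalled success
    properties:
      config: { get_resource: app_config }
      server: { get_resource: app_server }

outputs:
  app_deploy_stdout:
    description: stdout captured from the app configuration script
    value: { get_attr: [app_deployment, deploy_stdout] }
  app_deploy_status:
    description: exit code of the script, reported back to Heat
    value: { get_attr: [app_deployment, deploy_status_code] }
```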
So we define a whole bunch of generic templates for things like networks, instances, volumes, clusters, load balancers, and so on. Then we also have basic building blocks for various software configurations, like installing a web server, adding a user, disabling SELinux; very basic, generic scripts that I find I constantly have to use. These are split by the actual operating system you're going to use: for Ubuntu it's different scripts than for RHEL, and so on. Lastly, we have the boot config scripts, which I mentioned earlier, that install the agents that are required. So you don't have to figure all that stuff out yourself: you can just reference this cloud-init software config, which will install the agents at boot time, and from there it just takes over and manages the software configurations using software deployments. Then we have the env directory, which holds Heat environment files for the different operating systems. Then there's env-ext, which is the same as env except that the environment files link to the actual GitHub repo rather than local paths, meaning that if you ever want to use this library, you only have to reference the one environment file; you don't have to download the whole library. Then we have tests, which, as the name implies, is a bunch of tests. However, these also serve as really good examples: if you want to get started with the library and see how things work, there's a test for every OpenStack infrastructure component and for every software configuration component we've written. And lastly, there's a README, which is a pretty good overall repository README. Especially if you don't know Heat very well, it's a great place to start.
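An illustrative sketch of what such an environment file looks like: a resource_registry maps friendly library type names onto template files. The aliases and paths below are invented for illustration; the actual names in heat-lib may differ:

```yaml
# env/ubuntu.yaml (aliases and paths illustrative)
resource_registry:
  # infrastructure building blocks
  "heat-lib::network-full-stack": ../lib/openstack/network_full_stack.yaml
  "heat-lib::instance-basic": ../lib/openstack/instance_basic.yaml
  # OS-specific software configs; an env-ext variant would point these
  # at raw GitHub URLs instead of local paths
  "heat-lib::webserver-install": ../lib/software/ubuntu/webserver_install.yaml
```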
So now I'm going to go through an example that uses things from this library: a reverse proxy example. The format is basically: on the top left-hand side I have the definitions, on the bottom I have the parameters, and on the right-hand side is a visual representation of what's happening. In this case, the only thing we have originally is a public network, which is a provider network. On the left-hand side, we're defining a heat-lib network-full-stack; that's the name we gave to the network template. What it does is provision a network and a router, link that router to the network, and also link the router to an external network. The next thing is the security group, and you can see it has three different inputs: name, ports, and protocol. We're just opening up ports 22, 80, and 443 on the TCP protocol. Then we have the actual instance definition: again, heat-lib instance-basic. The basic instance takes a name, a key, an image, and a flavor; it links to the network we created earlier, and it also attaches the security group to the instance port. Next is a floating IP, very basic, nothing special. We also have a data volume, again volume-basic, nothing special about it. And this is where things get interesting; this is where we start using software deployments. The first software deployment takes four different parameters: first the volume and then the instance, so where it should be attached. What it actually does is not just the Cinder volume attach.
It's a script that goes in, creates a file system on that volume, and then mounts it on a particular path that you specify. The next software configuration is basically just the Apache web server; this is the install-httpd one. And lastly, we have the actual reverse proxy. This takes two parameters: the instance, and the actual proxy pass that you want to set up. So we have /google, which will redirect us to www.google.com. And this is where what I mentioned earlier comes in: in order for us to have a reverse proxy, we have to have Apache installed first. To make sure things happen in the right order, all we have to do is use the depends_on attribute that's native to Heat. So I actually have a little video of how this works. Hold on. We're not going to do live demos; we don't have time, and these deployments take a while, so we've recorded and sped up the videos to compress the deployment process as much as possible. This is the actual template I used, so you can see the parameters, and as I scroll down you'll see all the resources I just described. There's nothing really special about any of it; it's just a Heat template that uses the library we built. At the end here we can see the reverse proxy for Google, and we also see all the outputs; I basically have an output for every software deployment, with a status showing whether it was successful or not. This is the environment file I used to deploy it, and this is the stack create command. And this is where I start speeding it up, so you'll see the cursor start blinking very fast all of a sudden. It's basically building everything.
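The ordering trick just described can be sketched like this, assuming a webserver_deployment, a reverse_proxy_config, and a proxy_instance defined elsewhere in the same template (all names here are illustrative, as are the input names):

```yaml
  reverse_proxy_google:
    type: OS::Heat::SoftwareDeployment
    depends_on: webserver_deployment  # Apache must be installed before the proxy is configured
    properties:
      config: { get_resource: reverse_proxy_config }
      server: { get_resource: proxy_instance }
      input_values:
        proxy_path: /google                    # local URL path on our server
        proxy_target: https://www.google.com/  # where requests get proxied to
```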
Here we see the outputs of the stack, and you'll see that all the software deployments were successful; we have one output for each of them: for the data volume, for the web server configuration, for the reverse proxy install, and so on. We'll grab that floating IP, and I'll show you that Apache was installed successfully: we hit the Apache welcome page. Now we'll test the Google reverse proxy; it goes to Google, and we can see it works. Now let's say I want to add a new reverse proxy; how could you do this? I'm going to add one for Mirantis, and we'll see that at first it doesn't work. So I go back into my template and add an actual entry for it; this happens while the instance is running, without destroying the instance. It's very similar to what we just had; you just add another one. We also add an output for it, so we can test it afterwards, and then I run the update. At this point, while it's updating, I went back to the web server to show you that in the middle of the update the instance is still alive: it's not destroyed, it's not being recreated from scratch, as it would be if you were using cloud-init and updated even a small component of the cloud-init section, where the whole instance would be destroyed and recreated. That's not the case for software deployments; they support lifecycle management at any part of the instance's lifecycle. Here you'll see that the first output at the top is the reverse proxy status for Mirantis that we just added, and it's successful. I go back, and I'll show you that it actually works: it leads us to Mirantis's homepage.
Cool, so I'll hand it off to Lance here, who will now talk about a more complex example. Thanks, Florin. As we've had trouble with the internet, I'm going to do the same and use a recording. Cool. Can you guys see that? So this is a more complex usage of the library. As you can see, I've created a submodule in the environment here that lets us use the library files directly in the deployment. We've got the different env files, the tests, and the library itself, and you can see the library file here. I'm going to pause here to show you the library file itself. You see we've got all the different named resources: heat-lib instance-basic, instance with volume, boot with volumes, load balancer, security groups, networking, load-balancer-v1 for LBaaS v1, basic volumes, and so on and so forth. So you create a library of resources within this file so that you can reuse them later. And you can see that we now create a separate source volume for all our software for our Kargo Kubernetes example. As you can see here, I've created another library file. The reason this is quite significant is that you can create your own library file to augment the main library, adding extra stuff that's custom to your application: things you don't want to push back into the shared library but that are used just for your application. So here we've created SSH key resources for public key deployment and private key deployment, and some storage to hold the packages that Kargo needs in order to install.
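The augmentation pattern can be sketched as an application-specific environment file layered on top of the shared one. The aliases, file names, and paths below are invented for illustration; what matters is that the custom registry is passed to Heat alongside the library's own environment file:

```yaml
# my-app-env.yaml: passed in addition to the heat-lib environment file, e.g.
#   openstack stack create -e env/ubuntu.yaml -e my-app-env.yaml -t kargo.yaml mystack
resource_registry:
  # app-specific resources that don't belong in the shared library
  "my-app::ssh-keypair": lib_extra/ssh_keypair.yaml
  "my-app::kargo-package-volume": lib_extra/kargo_package_volume.yaml
```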
As you can see, we've got different scripts in different folders; it's a logical layout where you keep your application's scripts all in one folder structure, so we've got a configure script, a deploy script, and an install script. As we go down here, you'll see that because we're referencing the library, we just have to supply the deploy host image and flavor, or the master image and flavor, and that gives us what we need to deploy our cluster and environments. We don't have to worry about all the other bits and pieces within our cluster, because they're already taken care of by the library. As you see here, we've got a server policy where we're adding anti-affinity, which we can pass through to our clusters. Here you'll see we make use of the heat-lib network-router to create a router, because we want a multi-network environment: you've got the deploy network, the master network, the etcd network, and the node network. We pass in all the parameters that our library expects for those different networks, and it will then deploy them; and as you can see, I'm pointing out that for the networks we're using those library names within the resource type. Here we've got the master cluster, which uses heat-lib cluster-basic, which is basically a resource group of nodes. We haven't done the load-balanced cluster version yet, but this is the example of what you'd need. For parameters, we just pass through the count, the name, the key, the flavor, the security groups, the subnets; we can pass in the server policy, and whether we need an HTTP or HTTPS proxy, because a lot of our customers are behind corporate firewalls and could use that.
Here you see my node cluster has a similar format, and then what's very significant again, with depends_on, is that we've made this deploy host depend on all the other clusters being built first. So they'll all go and be built together, and only once they're all built will this deployment run. There's a reason for this, which you'll see a bit later in the software deployment code. Here we've got the different pieces: installing packages on the masters and distributing the SSH keys, so injecting the SSH keys into the nodes, because as we know, Ansible needs SSH key access to everything. We generate the SSH key pair here within the environment, copy the public key to the nodes that need to be managed, and copy the private key to the node that does the managing, and it all just happens, without any interaction from a person at all. This keeps going, and now we're getting to the software deployment parts. You can see here we've got the private key install config and the private key install deployment; again, the same as what Florin showed us earlier, we reference the config, its group is script, so it's a bash script that runs, and this is the private key that gets pushed onto the deployment server that's going to run the Ansible commands. Then we've got the Kargo install software config: there's a bunch of stuff that has to happen for Kargo to install, so it's a separate phase, and here we've used the create action so that this is done on cluster creation only, and you don't continually redo it.
We're getting to an interesting part now: why we had to use depends_on for the deployment against the different clusters. If you look in the Kargo config, we have a bunch of inputs: the masters' names, the master IPs, the etcd names and IPs, and so on, plus the proxy details. We need these because we need to create the inventory file for Ansible; Ansible, as we know, runs off an inventory file, so we have a bash script that creates this inventory file in the format that Kargo needs, and we pass in all this information. If you look at the bottom here, where it says get attribute: master cluster, cluster instances names, this is what we provided as part of the library: you can get the full list of names of all the instances in the cluster by referencing the standard Heat outputs, and the same with the IP addresses. We feed that in and it generates the inventory, and you've seen that there's a create and an update path to this, so those actions will run on both create and update, meaning you always keep your inventory up to date. Then we've got the deployment itself, where we run through the actual deployment, and as you can see we have a bunch of outputs as well, where we can get the status of each individual deployment step, which makes it quite easy to debug, even in Horizon if you like; or, as Florin showed you, in the console you can get the outputs and see what happened with your deployment. Here again we've run a very fast, sped-up deployment: you'll see we've spun up all the networking, we've generated the SSH key, we're installing the key; going further, the public key gets pushed into some of the nodes, and the config step is where we generate the inventory file; and then it's finished, and we should see the creation happen where it says deploy, and you can see the deploy creating in progress
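Putting the pieces just described together, a hedged sketch of the inventory-generating deployment. The input names, script path, and the cluster output attribute follow the description in the talk; the exact identifiers in the repository may differ:

```yaml
  inventory_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
        - name: master_names
        - name: master_ips
        - name: node_names
        - name: node_ips
      # bash script that writes the Ansible inventory in the format Kargo expects
      config: { get_file: scripts/generate_inventory.sh }

  inventory_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: inventory_config }
      server: { get_resource: deploy_host }
      actions: [CREATE, UPDATE]   # regenerate the inventory on every stack update too
      input_values:
        # cluster_instances_names / _ips are outputs exposed by the library's cluster template
        master_names: { get_attr: [master_cluster, cluster_instances_names] }
        master_ips: { get_attr: [master_cluster, cluster_instances_ips] }
        node_names: { get_attr: [node_cluster, cluster_instances_names] }
        node_ips: { get_attr: [node_cluster, cluster_instances_ips] }
```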
and the create is done. It took 38 minutes, which is why I had to cut the video down; that was the whole Kargo deployment, because Kargo takes a long time to build the Kubernetes cluster. Now that the deployment is complete, we're going to SSH into the deployment host, because I want to show you the Ansible inventory file that we created. I'll change to the folder I've put Kargo in, which is /opt/kargo for now; it could be anywhere, really. In the inventory file there you can see we've created the four nodes, node zero through three, split out in the format Kargo expects for its inventory file. What we're going to do now is increase the number of nodes to five, and then split the screen in half, because I want to show the availability of the system: it stays up while you do your update. So I SSH again into the deploy host. This is just a contrived example; I don't think you always have to have a deploy host and do things this way, but it helps to illustrate the point. What I'm doing now is SSHing into one of the masters, master zero in this case, and we run a watch command so we can see all the pods: we're doing a get pods across all namespaces. It'll come up soon. Right now you can see we've got all the nodes here at the bottom, zero, one, two, and three, with nginx proxies on zero through three; so this is a working Kubernetes cluster installed via Kargo, and I'm highlighting that in the video. The next part is to run the update, because we've now asked for an extra node. I want you to keep an eye on the right-hand side: we're just running a standard OpenStack stack update, and you must watch the nodes on the right. We've got node three, and boom, there's node four, and nothing went down; the master that was serving the queries didn't go
down. What I'm doing now is sshing into the deploy host again, and I'm going to run the standard Sock Shop demo that's provided for Kubernetes as a test to make sure that your cluster is working. But first I also want to show you that the inventory file was updated while we were doing the update: we just added that extra line, as you can see. Once it comes up, you'll see that node4 is at the bottom there, ready to be used by the deployment we've just done. So now I'm deploying the Sock Shop demo and, boom, we start getting a whole working Kubernetes installation: as you can see, the containers are busy spinning up and getting themselves ready to run.

So, back to the slides. In summary, Heat can be leveraged to run infrastructure and software configuration lifecycle management for a project or an application. By using tool sets that we can create, use, and reuse, it becomes easy to do, and you get a single pane of glass, which is really a single command, or a one-click operation. These key capabilities, the software config and the lifecycle management, should be combined in a library or framework: a repository that serves as a base, with the ability for people to extend and grow it, so it encourages collaboration within organizations. We've seen customers starting to use their own versions of this type of solution to create better, quicker, faster deployments: somebody works on somebody else's stuff and improves their code, typically in an inner-source scenario where they have things they can't share outside, so they do open source on the inside, which is really good. You can also work with version control, so you can
have a Heat library that you pull in with a git submodule command, pinned to a specific pull request or a specific version, and use that consistently; then, when you're ready, you update the pin and move your libraries to the new version. There's a good deployment workflow you can now use, because it keeps everything simple and easy: you have a single template that you can use to spin up a whole cluster of things, or you can use nested templates to create multiples of these; the world's your oyster, I want to say. The update path is also quite good, because the update workflow will upgrade your application as and when needed: if you run an update, you can have a command in there, from Ansible or Puppet, that says "on update, do this", and it runs and updates your application without having to destroy the whole infrastructure underneath it. This enables the cloud user to minimize the time they spend creating and maintaining their infrastructure and to spend more time developing the application and making it better, and I think that's a really big win.

These are the links to the repositories. We've created an organization called heat-extras, because we think a lot more is going to get added here. In the heat-extras organization we've got heat-lib; the current version has actually been updated since these videos were taken, so some things are different, and we're constantly updating it. There's also the Heat tutorial, which walks you through Heat basics, cloud-init, software deployments, and vertical and horizontal scaling. This is what Florence wrote, and he's kindly allowed us to use it as part of the heat-extras work, and there's more to come. And I'd really appreciate it if you guys want to look at this: take
a look at it, see what you can do, and see if you can add anything extra. If you think there's a better way of doing something, create a pull request; we're open to having this thing updated and grown as needed, to make it easier for people to use the cloud. My version of the Kargo template is not quite done, because I couldn't get the downscaling working, so I couldn't finish it and it's not ready to publish yet. Downscaling needs a specific command in Kargo to drain the Kubernetes node before shutting it down, so I'll do that once I've got time to work on it. Thank you. Any questions? There are mics on the other side. We're hiring, by the way, so if anybody's interested...

So, a question: I'm relatively new to Heat. When you send the update via a software deployment, how does it actually get transferred from your launching platform to the actual instance?

The agents on the actual instance will continuously poll Heat. Well, you have different ways of doing it; it's not necessarily just Heat, there's a whole bunch of options, and you can even use Swift storage. But the most typical way is to contact Heat directly through its API endpoint: as long as the instance has access to the Heat API endpoint, it will be able to identify that there's a software deployment, pull it, and then use the same, or actually a different, signal transport mechanism to tell Heat "okay, this was created successfully", or that there's a problem, or something like that.

And if I can add: for the software you want to deploy, it depends on how you normally store it. If you store it in git and you want to build it on the build host, or if you have a pre-built artifact coming out of Gerrit or one of those other environments, you can just use that. For this deployment I use git: I do a git pull of the Kargo
install; it comes down and then I work on it inside the deployment host. But you have any number of ways of pulling it down; it's really up to you what you want and what you already have in your asset store. If you already have stuff in Ansible or Puppet, or bash scripts, you can use those. Any other questions? Fantastic. Thanks, guys. Thank you very much.
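To ground the Q&A answer about transports: in Heat terms, the polling and signalling choices are template properties. A hedged sketch, not taken from the talk's templates; the resource names, image, and flavor are invented, and the referenced some_config resource is elided:

```yaml
resources:
  node:
    type: OS::Nova::Server
    properties:
      image: ubuntu-16.04          # illustrative values
      flavor: m1.small
      user_data_format: SOFTWARE_CONFIG
      # How the in-instance agent fetches deployment data:
      # POLL_SERVER_HEAT (native Heat API), POLL_SERVER_CFN,
      # POLL_TEMP_URL (Swift temp URLs), or ZAQAR_MESSAGE.
      software_config_transport: POLL_SERVER_HEAT

  deploy:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: some_config }   # config resource elided here
      server: { get_resource: node }
      actions: [CREATE, UPDATE]
      # How the agent reports success or failure back to Heat:
      # HEAT_SIGNAL, CFN_SIGNAL, TEMP_URL_SIGNAL, ZAQAR_SIGNAL, or NO_SIGNAL.
      signal_transport: HEAT_SIGNAL
```

As the answer notes, the fetch path and the signal path are configured independently, which is why an instance can poll the Heat API for deployments yet signal back through Swift, or vice versa.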