Hello everyone, thanks for coming this afternoon. This is the talk on Heat and its alternatives, and it's about application deployment in OpenStack. Before I start, please feel free to snap this QR code. What it will do is lead you to the slides, and if you want to follow along — if we have anyone who is visually impaired, or who is sitting in the back and finds some of the fonts on the screen a little small — you can just scan this on a phone or a tablet and you will be following along with my slides as I go. The slides will simply advance on your phone or tablet as I go. So I'll give you another few seconds before I start talking, for those who do want to snap the code and follow along. Five, four, three, two, one, going once, going twice, sold. Okay, we'll talk about Heat and its alternatives for application deployment in OpenStack — so I advanced a little fast there. I want to talk specifically about automation of virtual systems. We're going to be talking about how we can automate virtual systems in OpenStack. This is distinct from the question of how we can use automation facilities to deploy OpenStack itself. Now, some of the tools that I'm talking about do actually have the capability to also be used for deploying OpenStack, for rolling out an OpenStack infrastructure, but that's not the focus of this talk. Here, we're going to talk about how we can deploy arbitrarily complex virtual systems using OpenStack resources in an automated fashion. The talk description said I was going to cover three tools: Heat, Juju, and Cloudify. Because of recent events and things that have happened in the community, I've chosen to also include Ansible in that overview. So we're going to talk about four tools in total: Heat, Juju, Ansible, and Cloudify. And the first one I'm going to talk about is obviously the incumbent. Now, can I have a show of hands, please: who in here has worked with Heat before? Who has fired up a Heat stack at some point? Okay, so I'm assuming that most of you are already at least superficially familiar with Heat. There is one issue that I think we're all having with Heat, right? And that's this logo. I mean, come on, right? But things are changing there. As you know, we're having this push toward not only a new OpenStack logo, but the individual OpenStack projects are also getting new logos. So this is apparently the currently discussed draft for the new Heat logo. It still maintains that flame metaphor, and in the future, if the developer community agrees, this is going to be the new logo for Heat — that will certainly count as an improvement. So what does Heat allow us to do? Heat enables us to deploy complete virtual environments in an OpenStack platform. In doing so, it employs a template language called HOT, the Heat Orchestration Template language, that is 100% YAML. Now, as with many things in OpenStack, Heat was originally inspired by a service that's also available in Amazon Web Services; that service is called AWS CloudFormation. The templates used there are written in JSON; for Heat, the developers decided to go with YAML. I think that's a fairly sound decision — I generally prefer the legibility of YAML over the, shall we say, Byzantine-ness of JSON. What can we do with Heat? What are the things that Heat actually enables us to do?
Well, the first thing that most people will probably be thinking about when they're orchestrating virtual environments is VMs. In Heat, we do so by defining a resource type called OS::Nova::Server. We can use that to configure guests in Nova, and an example resource definition in YAML might look like this. For those of you who are unfamiliar with Heat, resource definitions simply live in a YAML dictionary called resources. Every resource has a name and a type. There is generally a type available for just about any kind of OpenStack resource — any OpenStack object that you can create manually, on the command line, or through the API should also have a corresponding Heat resource type — and then a resource has properties, which of course differ depending on the resource type. So for a Nova VM, what we would do is name the resource, or rather name that server, give it an image to boot from, define a flavor, maybe inject an SSH key; there's a bunch of other things we can do there as well. And then with a Heat template like this, we could simply create the stack. We can do so either from Horizon, where we simply upload the template file, the template file gets parsed, and we can then fire up the stack. We can also do so, of course, from the OpenStack CLI tools — either the legacy heat tool, where you do heat stack-create -f followed by the template file and the stack name, or, for reasons I cannot fathom, with the unified OpenStack client, where it has become openstack stack create -t for the template file. So for no reason whatsoever, -f changed to -t, which kind of violates the principle of least astonishment, but that's the way it is. Now, if you were using Heat only like this, you would effectively have a hard-coded template that you then fire up, and that wouldn't be very flexible. So Heat has the ability to tweak certain things about a template by using parameters. Parameters are just another YAML dictionary, and they could look something like this. Again, the parameters have an identifier and a type — it can be a string, it can be a number, it can be a JSON object, and so on. Parameters have an optional description and an optional default, and using these parameters from within Heat templates exposes another Heat feature: intrinsic functions. Within the Heat template language, we have certain functions that we can use to cross-reference bits and pieces of the template with one another. A simple way of doing that is with the get_param function, which simply references a parameter. And so rather than having a hard-coded image and a hard-coded flavor and a hard-coded key name, we can simply have these parameters and these parameter references in the template, and then we set the parameters. Again, parameters can be set from Horizon: as we upload a template, the template gets parsed, we get another dialog box that enumerates our parameters, it gives us the descriptions of these parameters as tooltips, we can then inject the values that we want, and then we say launch and we fire up the stack. On the command line, there are of course also equivalent tools and commands to do that.
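To make that concrete, here is a minimal, illustrative sketch of such a template — the image, flavor, and key names below are placeholders of my own, not anything from the slides:

heat_template_version: 2015-10-15

parameters:
  image:
    type: string
    description: Glance image to boot the server from
  flavor:
    type: string
    description: Nova flavor to use
    default: m1.small
  key_name:
    type: string
    description: SSH key pair to inject

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      name: my-server
      # get_param pulls in the values supplied at stack-create time
      image: { get_param: image }
      flavor: { get_param: flavor }
      key_name: { get_param: key_name }

Saved as, say, server.yaml, you could launch it with the legacy client as heat stack-create -f server.yaml mystack, or with the unified client as openstack stack create -t server.yaml mystack.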
On the command line, with the legacy heat stack-create command you could set the parameters with the -P option; for the openstack stack create syntax, that has — again, for reasons I can't really fathom — changed, so that you are now forced to write out the full --parameter option. There's also another way of doing this, which is putting all your parameters in yet another YAML file, an environment file, so you don't specify them directly on the command line. It's essentially up to you how you want to use that and what makes better sense for you. Now, that of course only deals with virtual machines, and that's not a whole lot, so what you would typically also want — what's a typical part of a Heat template — is some kind of network connectivity. When you add network connectivity, you first do so using two resource types called OS::Neutron::Net and OS::Neutron::Subnet, so from within your Heat template you can create networks, or network objects, as you like. You then have to define references between those two, and that, as you can perhaps spot in this template example, is done with another intrinsic function, and that function is called get_resource — you might see it there on the network line. And get_resource is pretty spiffy, because it doesn't only build these cross-references between resources; using get_resource, Heat also builds an automatic dependency tree. So Heat resources need not be defined in the template in the order they need to be created in. They can be defined in any order, and the Heat engine, when it parses the template, creates a dependency tree and fires up these resources accordingly: whenever we have a resource that is dependent on another, the independent resource gets fired up first, then the dependent resource follows, and so on. And then there's a bunch of other things that we can do on the Neutron side. For example, we can fire up virtual routers from Neutron, and we can plug our subnets into them — for that we have the OS::Neutron::RouterInterface resource type. We can define gateways so we can enable our stacks to provide floating IP addresses, and so on. And here's another template for that. I'm not going to go over the templates here in detail; I have another QR code for you at the very end of the talk, and you are certainly welcome to review all these slides, or even reuse them for your own presentations if you like. There are a few other things that we can do on the Neutron side. For example, we can specifically tweak Neutron ports — we can make certain configuration changes to Neutron ports and then plug these ports into our VMs as we see fit. Another thing that we typically want to orchestrate within a Heat template is the way we want our services to be available from the outside. In Neutron, this is usually done with security groups, and we have a corresponding resource type in Heat called OS::Neutron::SecurityGroup. With that, for example, you can create a security group that allows your VMs to be pinged, or allows your VMs to be SSHed into, and so on.
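To illustrate how these pieces reference each other, here is a small, hypothetical fragment in the same spirit as the slide examples — the resource names are placeholders, public_net is assumed to be a parameter defined elsewhere in the template, and the exact property names (for instance router versus router_id) vary a little between Heat releases:

resources:
  app_net:
    type: OS::Neutron::Net

  app_subnet:
    type: OS::Neutron::Subnet
    properties:
      # get_resource both references app_net and tells Heat to create it first
      network: { get_resource: app_net }
      cidr: 192.168.10.0/24

  app_router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: { get_param: public_net }

  app_router_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: app_router }
      subnet: { get_resource: app_subnet }

  app_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        # allow ping and SSH toward members of this group
        - protocol: icmp
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22

Because app_subnet uses get_resource to point at app_net, Heat knows it has to create the network before the subnet, no matter in which order they appear in the file.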
Speaking of that, you normally also want to make your stack available from the outside in some fashion or another. That could either be directly to a VM, or it could be to a Neutron load balancer, an LBaaS resource, and so on. Regardless, at least provided that your application uses IPv4 as opposed to IPv6 connectivity, you're also going to want to deploy floating IP addresses. There is of course a Heat resource type for that as well: it's called OS::Neutron::FloatingIP, and here is how you use it. Typically, you would define what network your floating IPs are supposed to come from — an external provider network — and that is something you usually inject as a parameter. Then you reference that parameter in the floating IP resource, and you allocate the floating IP to a port. Now, as with everything in OpenStack, Heat calls are of course asynchronous. If I fire up a stack defined by a certain template, the first thing I see is that the stack moves into a status called CREATE_IN_PROGRESS, and it stays in that status for however long it takes to actually fire up the stack. Then, perhaps a minute or two later, I can come back and see that the stack is now in the CREATE_COMPLETE state. When you look at floating IPs, that raises the interesting question of: okay, what is the floating IP that I actually got? How do I get this information out? For that, we have outputs, and with outputs we can return values from within the stack, or attributes of resources. For floating IPs, this is very frequently used. So for example, I could have an output named public_ip that returns the floating IP address, on my public network, of whatever application stack I am deploying, and I can return that using yet another intrinsic function called get_attr. And these are the calls for how we retrieve that: there's heat output-show, or openstack stack output show, depending on which client you want to use. Now, this enables us to build application stacks — it basically enables us to build an application framework with Heat. There is, however, another resource type and another piece of functionality in OpenStack that we can use to drill into the very details of our application, and actually do application configuration and modification from within a Heat template as well, and that is by integrating Heat with cloud-init. If you're not using cloud-init, and specifically cloud-config, for deploying your virtual machines, you're likely doing something wrong — you definitely want to look into that. What's really neat is that this is very tightly integrated with Heat through the OS::Heat::CloudConfig resource. That means you can manage a cloud-config dictionary that you pass into a VM directly from Heat. So, something like this: you have a resource, in this case an OS::Heat::CloudConfig resource, and what it tells the VM to do, first thing right after it boots, is to update its package cache and install all the security fixes that are available from the vendor. And such a thing can be very helpful, particularly when you combine it with the ability to set cloud-config parameters directly from Heat. For example, you could do something like this: you want to use cloud-config to roll out a specific user, and what the name of that user should be is something that you set as a stack parameter. Obviously you can go arbitrarily complex with this.
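Sticking with the same hypothetical fragment as before, the floating IP, the output, and a simple cloud-config resource might look roughly like this — server_port is an assumed OS::Neutron::Port plugged into app_net, and public_net is again the external-network parameter:

parameters:
  public_net:
    type: string
    description: Name or ID of the external provider network

resources:
  server_port:
    type: OS::Neutron::Port
    properties:
      network: { get_resource: app_net }

  server_fip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: public_net }
      port_id: { get_resource: server_port }

  boot_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        # plain cloud-config keys: refresh the package cache and apply updates on first boot
        package_update: true
        package_upgrade: true

outputs:
  public_ip:
    description: Floating IP address assigned to the stack
    value: { get_attr: [ server_fip, floating_ip_address ] }

The cloud-config resource then gets attached to a server, typically via its user_data property, and the public_ip output is what heat output-show or openstack stack output show would print once the stack reaches CREATE_COMPLETE.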
There are many, many more features that Heat has to offer, but for now I'll leave it at that for Heat as the incumbent for arbitrarily complex application deployments, and talk about a few others and see how they compare. The next one that is very frequently discussed in the OpenStack space, and is heavily driven by one OpenStack vendor, is of course Juju. Now, most of you will have heard of Juju in the OpenStack context as a means of deploying OpenStack, and in doing so you typically use MAAS, Metal as a Service. What I'm going to talk about here is Juju without MAAS — Juju with no use of the MAAS provider — because, like I said, I'm not talking about deploying OpenStack with Juju; I'm talking about deploying applications within OpenStack with Juju. With all the hoopla around MAAS, it's frequently overlooked that there is actually a fairly large number of individual providers that Juju supports. You can have Juju talk directly to EC2, to Microsoft Azure, to Google Compute Engine, to vSphere environments, to Joyent, to LXD hypervisors, and there are also several providers that are related to OpenStack: there is the generic OpenStack provider, there is a Rackspace provider, and I believe there is also an OVH provider, if I have that down correctly. What this enables you to do is things like juju deploy of a certain service — say you want three instances of RabbitMQ deployed as a cluster — and what Juju will do for you is connect to the appropriate provider, grab a few virtual machines for you, basically spin up new virtual machines, deploy the service on them, have all these services talk to each other, and also fulfill all the Juju relations, the relationships with other services that you may define on this service. You also have the ability to add units to existing services, so you have a scale-out component in there as well, which is quite spiffy. So you might ask: well, if there is already an OpenStack provider, why is there also something specific to Rackspace, and why is there something specific to other OpenStack-based providers? Well, it turns out that in order to actually make all this magic work, you need some additional metadata that Juju can grok, and if you want to run Juju in your own private cloud, you can create this yourself. If you want to run the Juju OpenStack provider against a public cloud, you have to lean on your public cloud provider to provide this metadata for you — and that's called Simple Streams. You can create this Simple Streams metadata with a command called juju metadata generate-image for an OpenStack provider, and what you basically do is create a mapping of Ubuntu releases to OpenStack images — to OpenStack Glance images — in a specific region. If you're an astute observer, you will notice that this example here points to a Keystone v2 URL where endpoint metadata and so on is fetched from, which of course raises the question: what about Keystone v3 support? If you are working with Juju 1.x, you're plumb out of luck — there is no Keystone v3 support there. Juju 2.0 just recently gained Keystone v3 support in the 2.0 RC3 release candidate, which happens to be the Juju version that ships in Ubuntu Yakkety. So in Ubuntu Xenial, no Keystone v3 support for you, unless of course you want to use the official Juju PPA. So that's something that you, if you're running a private cloud, have to do — or that your public cloud provider, if you're a public cloud customer, has to do for you. Then this Simple Streams metadata goes into a Swift object, and then you create a separate OpenStack endpoint type called product-streams. And — wait, there we go; that flipped back and forth, sorry about that. So you create that service and then you create an endpoint for it, and it's actually kind of cleverly done: that endpoint is simply the public URL of a Swift object. I always thought that was kind of cute — it's an endpoint that doesn't actually point to a web service API, it's just a Swift object that you point to.
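For completeness, and purely as a hypothetical sketch rather than anything from the slides: this is roughly what telling Juju about your own private OpenStack looks like in the Juju 2.0 world, as a clouds.yaml definition that you feed to juju add-cloud — the cloud name and Keystone URL are placeholders:

clouds:
  my-private-stack:
    type: openstack
    auth-types: [ userpass ]
    regions:
      RegionOne:
        # Keystone endpoint of the private cloud; credentials live in a separate file
        endpoint: https://keystone.example.com:5000/v3

The Simple Streams image metadata described above is then what lets Juju map an Ubuntu series onto an actual Glance image in that region.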
And then, once you have all of that in place, things like deploying, say, a RabbitMQ server application that your application stack might need become very simple. You can do something like "give me three units of rabbitmq-server", and that's that. So that's actually pretty nicely done. And again, you have all sorts of other features in Juju: you can deploy with bundles, you have all your relations, and so forth. So that's all pretty interesting stuff. Now, one thing that I've always said about Juju 1.x is that it's infuriatingly close to awesome. What I mean by that is that with Juju, you frequently find yourself in a situation of: oh great, this makes it really, really easy to spin things up and you can get going really, really fast — but then if you actually want to customize certain things, if you want to make modifications where the Juju charm authors haven't really thought of them, things start getting tricky. There are a few other things that make Juju deserve this infuriatingly-close-to-awesome moniker. There are certain features in Juju that are only available with certain providers; that's fine in and of itself, but historically the documentation on that has been a little lacking, and you were frequently reduced to finding out for yourself, through trial and error, whether something that was documented somewhere for one provider was also true for another. Fear not, though, because Juju 2.0 is just around the corner, and in Juju 2.0 everything has changed — and not only has everything changed, but there is, at least as of now, no direct upgrade path from a Juju 1.x environment to a Juju 2.0 environment, and we'll see what that's going to do for adoption. There's a whole lot of things that have changed, including terminology: a service is now called an application, and so forth. We'll see how that goes for Juju users. Like I said, this is currently in the RC state; maybe they'll manage to have a clean upgrade path churned out by the 2.0 final release — we'll see about that. But the fact of the matter is that there is a fairly significant number of changes to the core of Juju in the 2.x releases. The next facility that I'm going to talk about — and that's the one that I parachuted in on short notice — is Ansible. And here again, I want to remind you that I'm not talking about deploying OpenStack with Ansible. For that, there's OSA, OpenStack-Ansible, which is a beautiful deployment option that you should totally check out. I'm talking about using Ansible to deploy applications in a virtual OpenStack environment, within an OpenStack tenant, or project. Here too, we had big changes between Ansible 1.x — and I apologize, there is a typo on the slide: it shouldn't say 1.0, it should say 1.x, because this was very much true up until 1.9 — and Ansible 2.0. In Ansible 1.x, there were only a handful of built-in Ansible modules that you could use to manage Glance images, or users in Keystone, or compute instances in Nova, and so on, and a handful of things with Neutron. With Ansible 2.0, that has changed dramatically.
In fact, what happened is that a lot of the code base that was originally written for the Ansible built-in modules went into a separate Python SDK called shade, and Ansible 2.0 now just makes use of shade — it basically acts as a shade consumer. And now you have a big array of things that you can do with Ansible: you can create users, you can create volumes — that's new — you can manage Ironic through Ansible, there's a whole lot more that you can do with Neutron, you can create objects in Swift, and so on. This is very much the same as with Heat and with Juju: Ansible uses YAML as its standard description language. You can point your Ansible environment at one or multiple clouds; there's a clouds.yaml file for that. And then Ansible playbooks dealing with OpenStack clouds look like you are invoking a bunch of these os_* module tasks on localhost to fire up your instances. For example, here's one for os_image — this is something you run locally in order to upload an image — and then you can also create a VM from within your Ansible playbook, like this. If you're familiar with Ansible, you will not be surprised that a lot of these resources make clever use of registered variables, where you execute one task and in doing so retrieve some information back that other tasks in the playbook can then use — for example, a cross-reference to an image. And then there is something that comes in very handy, also shade-based, and that is a dynamic inventory driver for Ansible that you can point at your OpenStack. So what you can do is run an Ansible playbook with -i pointing to an executable, and that executable will use shade to enumerate your resources in the cloud and also enrich them with metadata and so on, so you can use those as inventory variables in your playbook, which makes this very, very flexible. So those are the changes that recently happened in Ansible, and this combination of being able to fire up OpenStack resources directly from Ansible, and then using a dynamic inventory to enumerate those resources and deploy your services onto them, makes this a very, very powerful option for deploying cloud applications in OpenStack.
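As a rough, hypothetical sketch of what that looks like with the Ansible 2.0 os_* modules — the cloud name, image file, flavor, and key below are placeholders, "mycloud" refers to an entry in clouds.yaml, and I'm assuming the registered result exposes the server details the way shade reports them:

---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Upload an image to Glance
      os_image:
        cloud: mycloud            # entry in clouds.yaml
        name: ubuntu-16.04
        filename: /tmp/xenial-server-cloudimg-amd64-disk1.img
        container_format: bare
        disk_format: qcow2

    - name: Boot a VM from that image
      os_server:
        cloud: mycloud
        name: web01
        image: ubuntu-16.04
        flavor: m1.small
        key_name: mykey
        auto_ip: yes
      register: web_vm            # registered result is reusable by later tasks

    - name: Show where the new VM ended up
      debug:
        msg: "web01 is reachable at {{ web_vm.server.public_v4 }}"

For the dynamic inventory, you would then run something along the lines of ansible-playbook -i openstack.py site.yml, with openstack.py being the shade-based inventory script from Ansible's contrib/inventory directory.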
The fourth tool is Cloudify. Now, Cloudify has something very specific going for it that the others do not: Cloudify is actually based on an industry standard. In Cloudify's case, that standard is TOSCA, the Topology and Orchestration Specification for Cloud Applications. That's a template language developed by OASIS, which is a standards organization, for describing the topology of a cloud and the applications that are running on it. Now, TOSCA originally mandates XML as its description language, but there is something called the TOSCA Simple Profile in YAML that eliminates the need to deal with all the hairy XML; instead, you can write your blueprints in YAML, and that makes it just a little more human-friendly. And so that means we're actually talking about four different tools, all of which have settled on at least a common language syntax, namely YAML, for describing their application profiles — which is a nice example of convergent evolution, if you like. And this is an example of a Cloudify blueprint — a Cloudify TOSCA blueprint, really. One thing that you will see if you look at blueprints like that is that they are a little more formalized. They have more of the hallmarks of actual systems-design considerations — and, to a certain extent, design by committee — going into them. In TOSCA, you deal with abstract concepts like inputs and nodes and properties and relationships and outputs, so it's all a little more formalized, whereas, for example, Ansible is completely procedural and Juju has a purely application-and-relationship model. Cloudify basically enables you to run pretty much any application in a cloud environment. Interacting with a Cloudify environment, you use the Cloudify GUI or a CLI, which talks to a RESTful web service called the Cloudify Manager, and that in turn, via individual agents, talks to VMs and configures applications on them. Cloudify, of course — like all the others — is an open source project; you can find it on GitHub, and it is primarily driven by one company, GigaSpaces. So, to use the most overused metaphor, or mantra if you will, in OpenStack overall: let's see how they compare, let's see how they stack up. I'm going to look at this from a few different perspectives, and I'm not going to say that any one of these perspectives is more important than the others — that's something for you to decide; it clearly depends on what your application needs are, what your deployment needs are, and what your management needs are. So the first thing, of course, that we can ask is: is this thing OpenStack native? Because you can generally assume that if you're dealing with something that is part of OpenStack proper, you're most likely never going to lose support for OpenStack when you're using this tool. And of course, Heat is the incumbent, OpenStack-native utility there; the others are not. Neither Juju nor Ansible nor Cloudify are projects that are actually under the OpenStack umbrella. Now, Heat has had a very good track record in the past of onboarding new features from the other OpenStack projects, but you can imagine that when a new feature or a new resource type lands in, say, Neutron or Sahara or Trove or wherever, the way the OpenStack development cycle works there may be a little bit of a delay until the corresponding resource type pops up in Heat. And it may well happen that between those two, a release occurs — so you have a specific OpenStack release that has certain new features in Neutron that are, however, not yet manageable by Heat. Just as an example, sticking with Neutron: over the past few releases, Neutron went from LBaaS v1 — Load Balancing as a Service version 1 — to LBaaS v2. LBaaS v1 was subsequently deprecated: it was already deprecated in Mitaka and marked for removal in Neutron. Heat support for LBaaS v2 only landed in Mitaka. So there is only one release, namely OpenStack Mitaka, in which you can manage both LBaaS v1 and LBaaS v2 with Heat. And if you're a public cloud provider, for example, or if you're a private cloud that doesn't catch every single OpenStack release but maybe only upgrades every other release, that is something that can create trouble for you. Because if you made the push from Liberty, skipping Mitaka, directly to Newton, and your users used LBaaS v1 in Heat — because that's the only thing they could use in Heat in Liberty — they're now out of luck.
So you have to provide them with an upgrade path there, and that's just one example where, even with an OpenStack-native tool, there can be a little more friction than you would conventionally expect. The next thing I want to look at is: does the facility have the capacity for handling full-stack actions? By full-stack actions I mean things like not only spinning up a stack but also deleting it as a whole, or enabling actions like suspend and resume. Think about that: suspend and resume is something that can be a huge cost saver when you're dealing with public clouds, right? Suppose you have a certain facility that you only need for maybe a few hours per month; one of the things you can do is just spin a stack up, then suspend the whole stack, and bring it back up later. That is also a capability that is currently unique to Heat — none of the others have it. Let's talk about features and capabilities that are shared by several of these tools. The first that I want to look at is: does this thing have native scale-out capability? So are we able to say, in a simple, easy, and built-in fashion, add more of X? I have this many of some subset of my application stack, and now I want more of them. In Heat, that is very simple to do. Heat has a resource type called OS::Heat::ResourceGroup, and that has a count property that is simply how many of these instances you want. Combined with the full-stack action of stack update, I can change that count — I can simply say, I now want more of this resource group. So that's pretty slick. Juju has that as well: Juju has juju add-unit, and it enables us to quickly and easily add more of something. So if I, for example, want another RabbitMQ node, then not only can I spin that up with Juju, but that RabbitMQ node will be automatically configured to talk to all the other RabbitMQ instances that are there; and for all the communication needs that in Juju are embodied by relations to other services, the new unit will get those as well, and that's pretty spiffy. We have that in Cloudify as well, but it is a feature that is lacking from Ansible. Of course, in Ansible you can always run another playbook adding more of these resources, but that's another playbook — that's not a native scale-out capability. Next thing: is this thing standards-based? Again, there's only one here where that is true, and that is Cloudify. All the others are not based on any kind of industry standard, which is something that may be completely irrelevant to you, or not. TOSCA specifically is a very hot topic in the NFV space, so if you are in the telco sector, if you are headed for an NFV deployment, being able to base your orchestration needs on an industry standard might be quite relevant for you. I said at the top of the talk that I was not going to talk about deploying OpenStack itself with the facilities I'm covering, but I do want to look at that now, because arguably, if you can use a certain orchestration facility both for deploying your applications and for deploying OpenStack itself, that is something that may greatly facilitate your internal training — which can be a big cost factor — because you need to train people up on only one platform rather than two.
You can deploy OpenStack with Juju, as I guess everyone knows, and with Ansible, as I guess everyone knows. You can also deploy OpenStack with Heat: Heat enables you to manage bare-metal systems with Ironic, and you can build OpenStack environments that are composed of an undercloud, where you deploy to bare metal, and then an overcloud, where you're actually deploying virtual machines. Next topic, one that's a little tricky: if I use a specific tool, how easy is it to customize my applications to my liking? With Heat, that's very simple — all you do is modify a cloud-config or software-config resource. With Ansible, it's very simple: if you're unhappy with a specific playbook, you can simply add additional tasks and customize machines to your heart's content. Cloudify also has application customization facilities. With Juju, well, Juju aims to codify best practices. Juju aims to be DevOps distilled, so in an ideal all-Juju world you won't ever need to change anything, and it always tries to make the best decision for the user. Unfortunately, some Juju users find that this kind of bites them, because — surprise — the best practice for one specific scenario or use case is not necessarily the best practice for another. So although customization is, of course, supported in the sense that Juju charms have a ton of configuration options where you can tweak certain things, this is something that leaves some users unhappy. And again, you may or may not find that off-putting; it may be relevant to your application or not. It's simply something that you'll have to evaluate for your application stack and decide. Does it have a GUI? Heat doesn't have a GUI of its own, but of course it has full integration into OpenStack Horizon. Juju has a GUI that, after a few facelifts, is actually now really well-organized and nice and easy to use. Cloudify comes with a built-in GUI. Ansible, well, Ansible does come with Ansible Tower. I do believe — and I'm happy to stand corrected on this one if Red Hat folks are in the room — that there is a commitment from Red Hat to open-source anything they buy. As far as I know, that hasn't yet happened for Ansible Tower, although, granted, it has recently happened for Ansible Galaxy. So although there is a GUI available, it's currently not available to everyone, and it would be nice to see some improvement there. Is the orchestration tool genuinely community-driven? Well, that's easy for three and a little tricky for one. Heat, obviously, is completely community-driven, because it's driven by the open-source developer community. Cloudify, although it's an open-source project, is primarily GigaSpaces' open-source project, although outside contributions are certainly welcome. Ansible is also a very community-driven project — I mean, of course, you could joke that Ansible was driven by HP and IBM and Red Hat, or wherever Monty worked at the time, right? But still, it does have a very healthy developer community. With Juju — and Juju folks will correct me if they're in the room — I would like to see a little more community involvement; as it stands, most of the work on core Juju, and that is to say not only the Juju OpenStack charms but also the Juju OpenStack provider, rests squarely on the shoulders of Canonical at this time. And finally: can I run this automation facility, this orchestration facility, on any public OpenStack provider?
Well, with Heat, you would kind of expect that you could, but in reality, unfortunately, that's not so much the case. If you have a certain Heat template and you're talking to a specific provider, you should actually ask them: do you support this feature, do you support that feature — or ask them for a test account to deploy your resources there — because you sometimes run into interesting surprises where certain features are not supported even though they should be. Juju does not run on just any public OpenStack; as I said, Juju does require the Simple Streams metadata that I mentioned at the top of the talk. So if your public cloud provider doesn't do that and you want to use Juju, then you need to talk to them, lean on them, and perhaps convince them to actually deploy this product-streams metadata. Both Ansible and Cloudify, with their respective management facilities, make use of the low-level OpenStack APIs — they invoke Neutron and Nova and so forth directly — so you can reasonably expect them to run on any public OpenStack environment. So I've given you several dimensions along which to look at this. And, I don't know, is there anyone in here who kind of identifies with sales or product management or product development or something like that? No? Oh, great. Well, I put this in a matrix for them, so they're happy, and here's their matrix — actually, this is their matrix. So there we go; that's the full overview. Like I said, you don't have to take pictures of this; I have a QR code at the very end. Which of these should I use? Which of these is the right one for me? Well, that's classic Boolean logic right there for you, right? Yes, you should use one. Which one, I unfortunately can't tell you — that's a decision you have to make yourself. I am, I regret to point out, out of time, so I don't really have time for questions right now, although the next speaker is not in the room yet, so maybe I can squeeze in a few. If not, I'll be happy to take questions outside. One thing that I will say is that these slides are available under a Creative Commons Attribution-ShareAlike license, so if you would like to use them — if you would like to run this presentation in your own company, in your organization — by all means, feel free to do so. They're rendered right here, the sources are up on GitHub, and I guess I can maybe take two or three questions. Yes, sir, in the red shirt, please. Okay, so the question is: which of these has the richest library of resources that it supports? Well, that's clearly Heat. The general expectation in Heat is that whatever pops up in the OpenStack API, whatever pops up in OpenStack as a project, should also be orchestratable in Heat. Okay, so the question is about usable, ready-made templates. I would say it's probably a toss-up, really, between Heat, Ansible, and Juju. For Juju, there are Juju bundles that you can simply try out; for Ansible, there are people publishing their playbooks; and for Heat, there are people publishing their templates. So it's probably those three, then a little bit of a gap, and then Cloudify. Okay, I want to give one more person a chance to ask a question. Or if we don't have questions, or you would prefer to ask them outside, not a problem at all. Thank you very much for coming. I do realize it's late in the conference, so thanks doubly. Enjoy the rest of the day, and safe travels back home. Thank you.