Hello, everyone. Thanks for joining. My name is Animesh Singh, and these are my colleagues, Daniel Krook and Paul Czarkowski. Today, we are going to talk about Chef, Ansible, Puppet, and Salt, in the context of OpenStack. This is the agenda we are going to go through today. The first topic we definitely want to cover is: why is configuration management critical for running OpenStack? Then, what are the four most popular configuration management projects — which, from the title of this talk, you already know — and to what degree does each of them support OpenStack? And not only that: your role and organization further influence the decision to adopt a particular tool. And once you have adopted something, where do you go next to find the right details and the right set of information? With that, a bit about us. My name is Animesh. I have been with IBM for around 10 years, in the IBM Cloud division. I have been working with a lot of products and technologies in the IBM Cloud division and taking them to customers. For the last three to four years, we have been focused on OpenStack, Cloud Foundry, and more recently Docker, and how these technologies come together in cohesive solutions. Dan? Yes, I work with Animesh as well. We work with customers to get the most out of Cloud Foundry, Docker, and OpenStack, as well as the tools to manage them — the tools we're going to talk about today. And I'm Paul Czarkowski. I'm a cloud engineer at Blue Box, recently an IBM company as of a few months ago. I work on our OpenStack product and our open source tools, Ursula and giftwrap. Great. Thanks. So I think with one look at this chart, you will realize why configuration management is critical for running OpenStack. For those of you who have been deploying OpenStack and handing OpenStack deployments to end users, you know the pain involved in creating consistent OpenStack deployments involving Cinder, Neutron, compute and controller nodes, Glance — and then add to that a set of additional services like Heat, Ceilometer, Ironic, Trove, or, moving forward, Magnum. You're looking at a system that is as big and complex as anything you might want to deploy. The other big reason we need deployment and software configuration management tools around OpenStack is that it's an open source project with a large velocity of change. Millions of lines of code are contributed to each release of OpenStack, and you need tools that can absorb that velocity of change and produce consistent deployments again and again, environment after environment. Compliance — that's very key for IBM, and a lot of organizations are very sensitive to security compliance like HIPAA, et cetera. You want to make sure the environments you are producing again and again adhere to those compliance standards. And definitely quality: there are so many parameters to fine-tune, and you want to make sure you are doing that right every time. The other factor that will affect your decision is your role and organizational culture. If you're an OpenStack operator, you want to deploy OpenStack, hand it to the end users, and the only time you want to be troubled is when there is a problem. So you want tools that can deploy a stable OpenStack, then maintain it and make it available in a highly available configuration. Then there is the OpenStack innovator.
Their role is to evaluate the latest technologies coming into the OpenStack upstream projects — for example, container support, et cetera. Their goal is a tool that can absorb that velocity of change very quickly: if a new project is introduced in the OpenStack community, they want to absorb it and be able to deploy it two weeks after it has been introduced. And last but not least, the contributors — I believe a lot of you are contributing to the OpenStack community. More often than not, if you're a Horizon contributor or a Neutron contributor, what you're concerned about is that particular module. You want to create environments, iterate over your changes, and rebuild those environments again and again. So with that, I'll go a bit into the tools overview. Before that, I want to know: how many people here have either heard of or used Salt? Roughly five to 10% of the room. How about Ansible? That makes it, I think, 50 to 60% — that's great. And Chef and Puppet, I presume all of you have either used or heard of in some context or fashion. So this was a survey done by RightScale this year to find out which DevOps tools respondents are using. As you can see, Chef and Puppet stand at the very top, and Ansible and Salt come towards the bottom of that list. An interesting inclusion there is Docker, because a lot of people now are doing deployments based on container images, et cetera. Now, this data correlates with the timelines around which these projects were introduced. If you look at 2005 and 2008, when Puppet and Chef came along — they are the most mature and oldest of the software configuration management projects we have, and Salt and Ansible are more recent entrants to this area. With that, we definitely want to state that each of these tools has a very strong community and a clear mission, and they're all scaling really well. The primary motivation for Chef and Puppet, for example, when these tools were initially created, was to handle end-to-end server deployments within their own organizations. Opscode, when they created Chef, wanted to use it internally, and as they realized its potential, they took it out to the open source community and to the market. Ansible had a different mission. When it started, the main observation was that all the configuration management tools that came before Ansible used an agent/server-based architecture. Ansible wanted to stay away from that because they have a more operations-based focus, so they wanted just SSH-based access to all the nodes they would be controlling. Salt agrees with Ansible on that goal and wanted to take it further, with the main goal of Salt being a highly scalable software configuration management tool. They have done a lot of work to make sure the message payloads are small and there is no central point of failure, and they have architected the solution in exactly that way. All of them have enterprise offerings, with Ansible doing a lot more work on the hosting, consulting, and training side. In terms of licenses, most of them are Apache-licensed, except for Ansible, which is GPL-licensed. This is a snapshot of a month of GitHub activity on these projects.
If you look at the number of releases — for example, 231 and 291 for Chef and Puppet — they are very, very mature, and that is reflected in the release counts, which are much lower in the case of Salt and Ansible. The other notable thing is the number of commits: there are a lot more on the Salt side, which reflects that the project is still rapidly evolving and growing. As a project matures and stabilizes, the commits start becoming fewer; when something is evolving rapidly, you see a lot of activity there. One thing that also stands out for Chef is the number of branches — we have seen a lot of people in the Chef community creating their own customized branches and custom recipes. So with that, since most of you have heard the least about Salt, let's start with a Salt overview and what it does. It is definitely a configuration management system, and as I mentioned, the main goal of Salt was to enable high-speed communication with large numbers of systems, distributed globally. It's a state-based system: there is a YAML file where you define your end state, and Salt is then responsible for making sure your deployment reaches that end state. It's written in Python, so it's very easy to pick up for sysadmins who are familiar with Python, and one of the goals was to orient it towards sysadmins rather than developers. It allows parallel execution using encrypted protocols, so you are not bound by sequential execution of commands, and they have done a lot of work on the networking layer, using the ZeroMQ networking library as well as MessagePack for data serialization, to enable fast and very light network traffic. As I mentioned, that was one of their primary core goals. It's both vertically and horizontally scalable. They also have a feature called the syndic master, which allows one master to control multiple master nodes, and the minions themselves — the agent machines — have a peer interface that allows minions to control other minions. That lends itself well to a decentralized, scalable architecture. They also have a reactor system that listens on the message bus to react to any events happening downstream within a deployment. This is a high-level architectural overview of Salt. As I mentioned, there is a centralized salt master which, through the syndic interface, controls the two salt masters you see on this chart, and the goal is that this lends itself to a geographic deployment: if you have two data centers, you can have a salt master residing in each, with a centralized salt master acting as the syndic above them. Some of the primary components: the salt master's main job is to control the minions, and a master daemon runs on the salt master that is responsible for authenticating the minions, communicating with the connected minions, et cetera. There is a salt CLI that resides on the same machine as the salt master, and its job is to take end-user commands and pass them to the master. Minions, as the architecture diagram suggests, are responsible for receiving commands from the master, running the jobs, and communicating the results back to the master. And then there are salt modules: they have execution modules and state modules, which define the end state you want your systems to be in.
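To make that concrete, here is a minimal sketch of what one of those state files might look like. This is not from the talk or from any of the salt formula repositories — the package, service, and file names are illustrative assumptions.

```yaml
# /srv/salt/keystone.sls -- minimal sketch of a Salt state file (illustrative
# only; the package/service/file names are assumptions, not from the talk)
keystone:
  pkg.installed: []        # make sure the keystone package is present
  service.running:
    - enable: True
    - require:
      - pkg: keystone      # don't start the service before the package exists
    - watch:
      - file: /etc/keystone/keystone.conf   # restart if the config changes

/etc/keystone/keystone.conf:
  file.managed:
    - source: salt://keystone/files/keystone.conf
    - user: keystone
    - group: keystone
    - mode: 640
```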
They have an optional web UI, which is a bit immature at this time; it's being built out and evolving as the community grows. Now, within the context of OpenStack, Salt is picking up for OpenStack deployments. What we realized as we went through this investigation is that there is less maturity in the context of OpenStack, though the tool itself has a lot of other recipes — for example, for LAMP deployments, et cetera — and a strong community on that side; in the context of OpenStack, that support is smaller. I've listed some of the links to the GitHub repositories where the community has sprung up and created salt formulas; you can visit them. The ones that seem most active recently are listed here. There was also a talk given by Nitin Madhok — if he's in the room here — the day before yesterday, which went into the details of using Salt to deploy OpenStack, and he has a very good recorded demo you can go through. Now, the typical installation steps: you install the salt master and the minions. If you are using agentless communication, you don't need the minions, but for high scalability it's strongly suggested to use them. Then you edit the salt master configuration file, and the next set of steps is to configure your salt grains, which define the roles for the different OpenStack components — for example, neutron, controller, or keystone — so you can give nodes different roles. Then there is the concept of salt pillars, which are essentially metadata files containing a lot of metadata about your deployment — more specifically, your encrypted passwords, environment, networking, et cetera. And finally you have the state modules, or salt states, where you define the end state you want your system to be in. Once you have established connectivity between the master and the minions, you are ready to go. You give each minion a different ID, and once their keys are accepted by the master, they are connected; then you sync all the grains and pillar data across the nodes, and once that data is synced, you run the installation using the state.highstate command. I have referenced the link to Nitin's talk from the day before yesterday here, which has a very detailed demo of that installation, so if you're interested, definitely go through it. So with that, some of the summary: as I was mentioning, it has very deep technical depth and a highly scalable architecture, and it's easy to pick up and get started with, since it is Python-based and very oriented towards sysadmins. The weaknesses we found: the OpenStack support is not mature. Also, our personal opinion while going through the documentation — it's very sparse and spread out, so you don't have one single starting point where you can pick it up and get started. The web UI is still being built and taken to the next level, and for non-Linux systems, the support is pretty limited.
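Before the handoff, here is a rough sketch tying those steps together: a top file mapping roles (set as grains) to states, with the key-acceptance and highstate commands in the comments. The role and state names are hypothetical, not taken from any of the formula repositories mentioned above.

```yaml
# /srv/salt/top.sls -- hypothetical top file mapping minion roles (grains)
# to states; the role/state names here are illustrative assumptions.
#
# From the master, the flow described above looks roughly like:
#   salt-key -A                       # accept the pending minion keys
#   salt '*' saltutil.refresh_pillar  # sync pillar data to all minions
#   salt '*' state.highstate          # converge every node to its end state
base:
  'roles:controller':
    - match: grain
    - keystone
    - glance
  'roles:compute':
    - match: grain
    - nova.compute
```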
So with that, I will pass it on to Paul, a friend from Blue Box, who's going to talk about Ansible and Puppet. Thanks, Animesh. So, Ansible was founded a while back by Michael DeHaan. He's pretty well known for making tools for operations — he did Cobbler and some other tools, and I think just about everyone has run, or runs, Cobbler somewhere in their org. It is a remote execution system focused on solving the problems of orchestration and config management. It's written in Python, and it takes instructions in the form of playbooks, which are basically YAML files. One thing it does differently from the rest is that instead of having an agent on the systems, it uses SSH to communicate, which means that for those of us with an operations background, it's a very familiar way of communicating with our servers, and we know how to secure those servers with SSH keys, sudo rules, and things like that. I think that, combined with the YAML file-based manifests, makes it very easy for people to get on board and start being productive with Ansible. It does have some other communication methods, but SSH is the primary one. It is highly scalable, but as you scale up, running SSH that many times takes a while, and that's when you start looking at the other communication methods. Some folks will run it on groups of servers at a time; other folks will deploy the playbooks to each of the servers and then run it in local mode out of crontab every hour or whatever. So there are a few ways of scaling it. It can share facts between the servers: you tell it to gather facts, and it will SSH to every single server in the run and gather a bunch of facts — IP addresses, what's installed, things like that — so that they can then query each other. So I can say, in my playbook, what is the IP address of the first database server I'm deploying, and have my web servers connect to that. At its heart, it's a very powerful orchestration engine. The way you put tasks into the playbook in an ordered fashion means it's very easy to orchestrate complex things like upgrading OpenStack, say from Juno to Kilo: you can say stop the API, update the packages, migrate the database, and then start the API again, all in a very ordered fashion, and wait and do some health checks along the way. So it lends itself very well to doing upgrades, and also blue-green deployments of applications and things like that. Because it uses SSH, the way it communicates with all your systems is pretty standard, and you can do things like tunnel it through bastion servers so you can access machines that are in a private network — say, an OpenStack private network — and you do your security the same way you do security on SSH. It's a very familiar pattern, and it's very common to have a centralized Ansible server or Ansible workstation that you log onto and do your work from, so you don't have everyone running Ansible from their own laptops. Components of Ansible: Ansible itself is a fairly monolithic Python CLI. In Ansible 2.0, they're rewriting it to be more of a client library with a thin wrapper for the CLI, and that's going to be super useful. You have your playbooks, which I've already described, and then you have roles, which are collections of tasks, variables, and templates that you can call from another playbook. So you might have a functional role for installing Percona with Galera replication; you'd apply that to an inventory of five nodes, run Ansible, and it would go and install Percona and set up the Galera replication between those nodes. And then you have your inventory, which is an INI file with a list of groups and the servers that are members of those groups; Ansible uses those to work out which tasks to run on which servers as it runs through.
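To illustrate that ordered-upgrade pattern, here is a minimal sketch of a playbook and matching inventory. The package, service, and migration command names are illustrative assumptions — this is not taken from Ursula or any real upgrade playbook.

```yaml
# upgrade-keystone.yml -- sketch of the ordered-upgrade pattern described
# above (assumed names throughout; not from Ursula or any real playbook).
#
# A matching INI inventory might look like:
#   [controllers]
#   controller1.example.com
#   controller2.example.com
- hosts: controllers
  become: yes
  serial: 1                        # converge one node at a time
  tasks:
    - name: stop the keystone API
      service: name=keystone state=stopped

    - name: upgrade the keystone package
      apt: name=keystone state=latest

    - name: migrate the database schema
      command: keystone-manage db_sync
      run_once: true               # the migration only needs to run once

    - name: start the keystone API again
      service: name=keystone state=started
```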
You also have Ansible Tower, which is an enterprise offering from Ansible, and it's used for compliance and auditing. Basically, if you're in a heavily compliance-driven organization — you've got HIPAA compliance, you need role-based authentication, things like that — it puts those controls around Ansible, so it's a little bit less Wild West, which it can be if you're not using Tower. In the OpenStack operators community, we're kind of spoiled for choice in tools to install OpenStack with Ansible. At Blue Box we use Ursula, which is a tool we open-sourced. Rackspace open-sourced the OpenStack-Ansible project. And then Kolla is another interesting example, where Docker-based containers are used to install OpenStack, orchestrated by Ansible. And then out on Ansible Galaxy, which is the place for the community to put shareable roles and playbooks, there is a bunch of roles for installing, say, Keystone, Neutron, et cetera, and you can join those together with your own playbook to set up an install that looks the way you want if the ones listed above aren't suitable for you. So, as I mentioned, at Blue Box we use Ursula. It's an in-house tool, but it is open-sourced and anyone's free to use it. It has about 1,000 — probably closer to 1,500 — Ansible tasks to deploy and manage OpenStack; OpenStack is quite a lot of work to install, so you do want a tool to do it for you. It installs the OpenStack projects either from source or from packages. We have another project called giftwrap, which builds packages from source. We have some reasons the distro packages aren't suitable for us: we like to apply security patches quickly and don't want to wait on upstream, and the way we do packaging also helps with upgrades. Ursula is very opinionated and is curated by Blue Box with a strong focus on stability and operability. We have a really good track record for upgrades: we always do upgrades in place, we never do a rip-and-replace, and that's something I think we do really well that a lot of the OpenStack deployment tools are still trying to figure out. We also have some experimental support for the newer stuff like Magnum, nova-docker, and running Heat inside of Docker containers. Installing OpenStack with Ursula is super easy: you clone it down, you do a pip install to get the Ursula CLI, and then you just run ursula. I'm running it with --vagrant here, so it's actually going to spin up some Vagrant VMs and converge those; if I didn't have that flag and I had an inventory with bare metal nodes, it would deploy to those instead. We use that exact same tool to deploy a dev environment, a staging environment, and to manage all of our customer clouds. So it's really useful to have that single tool, that single command, that can do all of that for us. And then on the other hand, using OpenStack with Ansible is a pretty good story. There are a lot of modules in the Ansible community for, say, managing compute resources with Nova, or creating and removing networks with Neutron. A lot of them are using the Shade library now, which is super strong, and we use those modules inside of Ursula for a lot of the commands that work with Neutron to add networks and things like that.
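As a sketch of that consuming-OpenStack side: the Shade-based os_* modules that were landing in Ansible 2.0 around this time let a playbook drive the cloud itself. The cloud name, network, image, and flavor below are all hypothetical.

```yaml
# consume-openstack.yml -- sketch of driving OpenStack from Ansible with the
# Shade-based os_* modules (Ansible 2.0 era); all names are hypothetical.
- hosts: localhost
  connection: local
  tasks:
    - name: create a tenant network
      os_network:
        cloud: mycloud             # an entry in clouds.yaml
        name: app-net
        state: present

    - name: boot an instance on that network
      os_server:
        cloud: mycloud
        name: web01
        image: ubuntu-14.04
        flavor: m1.small
        network: app-net
        state: present
```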
I've got the strengths and weaknesses here — it's more for reference if people want to come back to it later; it covers what I've talked about. We're going to post these slides afterwards — just look at Twitter and the OpenStack hashtag to find them later. All right. So now we can talk about Puppet, from Puppet Labs, and how Puppet differentiates itself. It's the oldest of the four, so it has a long history: a long history with OpenStack, a long history in large enterprises, and it's used heavily by the OpenStack Infra team. Unlike Ansible, it has an agent, and it forms a client-server relationship with the Puppet master. It is based in Ruby, but a lot of the server components have been rewritten in other languages for performance and scalability reasons. It has a custom DSL, and it uses ERB for templating, which is great because ERB is a really strong templating language. It is fairly easy to add and remove nodes from the master, and it's fairly easy to scale the master: usually you add more masters and use a shared file system to share the client keys, so that they can all authenticate the clients. The tasks in the modules and manifests are almost always idempotent, so you can rerun them over and over again and they will only change things that have fallen out of spec with what you've asked for. Almost everyone that runs Puppet will run it every half hour or every hour, so if someone has logged in and started messing around with the system trying to fix something, it will revert that back to the known state. That's super useful, because you always know what the state is and what it should be. And resources are really well abstracted: say you want to install Apache — your manifest looks essentially the same whether it's on Ubuntu or on Red Hat, and that's super useful because most of us have a fairly heterogeneous environment. It helps equalize that, and you spend less time caring about the differences between the systems. You can run Puppet in a masterless mode, but the common way is to run a Puppet master, and then you run your Puppet agents on all the servers you want to maintain. There's just a single TCP port that needs to be open from the agent to the server to establish the communication between them; it's HTTPS, and it uses certificates for authentication. The Puppet master — it's pretty obvious what that does — and the Puppet agent talks to the master. It has pretty strong reporting and analytics, which is really good for enterprises, and a web UI that can show you the logs of what's happened, what's changed, and when it was changed. So you can give access to that to people who care about auditing, and they can consume a lot of that information themselves, and you have to spend less time writing reports for them.
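Here is a minimal sketch of that resource abstraction, assuming nothing beyond stock Puppet: the package name still differs per distro, so a fact selects it, but the package and service resources themselves are identical across platforms. The names are illustrative, not from the talk.

```puppet
# site.pp -- minimal sketch of Puppet's resource abstraction (illustrative
# only). A fact picks the distro-specific package name; the resources
# themselves are the same on Ubuntu and Red Hat.
$apache = $::osfamily ? {
  'RedHat' => 'httpd',
  default  => 'apache2',
}

package { $apache:
  ensure => installed,
}

service { $apache:
  ensure  => running,
  enable  => true,
  require => Package[$apache],
}
```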
Puppet Forge is the community place where shareable Puppet modules live. If you need to install a LAMP stack, you can generally go to the Puppet Forge and find that someone else has already done the work for you; then you just need to make a few adjustments for your use case, so you're not constantly reinventing the wheel to install commonly used tools and applications. There's also PuppetDB, which is a way of holding information about all the nodes in the infrastructure so that you, or another machine, can query it and get details about the servers — IP addresses or whatever facts you want to store about those servers. As I said, Puppet is the oldest of the four, so there's a lot of OpenStack usage. The OpenStack Infra team uses it, and in the current OpenStack user survey I think it's still the most commonly used, though tools like Ansible are very quickly catching up. There's a bunch of modules out on the Puppet Forge, the OpenStack modules are in the Big Tent, in the OpenStack GitHub namespace, and there's really good documentation up on the OpenStack wiki for how to set up and deploy, say, a single-node dev install of OpenStack. And, using the same modules and manifests, you can install a single node or a very large scaled-out production cluster. Installing it is pretty straightforward. It's got a few more steps than, say, Ansible, but it's not too bad. First of all, you set up your Puppet master; Puppet has a bunch of apt and yum repos, so it's very easy to get at the packages, and on the Puppet master you have to set up a few certs, passwords, the usual kind of stuff. Then you install the agent on all the clients and register each agent with the master: that creates a certificate request, the master signs it, and the signed certificate goes back to the agent. That way the master and the client can trust each other about who they are and start performing tasks — because you're basically giving remote root to any of these tools, you need some amount of accountability, making sure things are what they say they are when they tell your servers to run commands. Once you've got that relationship established, you're just taking your modules from the Puppet Forge or wherever, putting them up on the Puppet server, and telling it which manifests to run on which servers. Then the Puppet agent will start running every half hour or whatever and start converging your nodes into OpenStack, or whatever it is you're trying to install. Again, strengths and weaknesses — I'll give you a second to take photos, but I'm not going to read through them and list them out. I'll hand over to Daniel now to talk about Chef. Okay, thanks, Paul. So, like Puppet, Chef has been in the community for quite a while, and it has some of the same community uptake as well as support within OpenStack. Here, as opposed to some of the other tools, the configuration is done through a Ruby DSL, so it's very powerful. It's named Chef, and a lot of the tooling takes on that metaphor of cooking. Your configurations are divided into organizations and environments, like dev, staging, production. Your components are generally grouped into cookbooks, which themselves contain recipes, and those contain individual commands called resources. All of that is driven by attributes, which can be specified, overridden, or derived from the system.
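To make those terms concrete, here is a minimal sketch of what a recipe might look like — resources driven by node attributes. The attribute keys, package, and template names are illustrative assumptions, not taken from the OpenStack cookbooks.

```ruby
# recipes/api.rb -- minimal sketch of a Chef recipe (illustrative names only;
# not from the OpenStack cookbooks). Each block is a resource; attributes
# under node[...] drive the configuration and can be overridden per node.
package 'keystone' do
  action :install
end

template '/etc/keystone/keystone.conf' do
  source 'keystone.conf.erb'
  owner  'keystone'
  mode   '0640'
  variables(bind_host: node['openstack']['identity']['bind_host'])
  notifies :restart, 'service[keystone]'   # restart only if the file changed
end

service 'keystone' do
  action [:enable, :start]
end
```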
Unique to this — though maybe Ansible is a little closer, with its workstation concept — there are three main components. There's a workstation that drives the configuration of what's installed; a Chef server, centralized to store configuration as well as a catalog of all the nodes in the system; and the nodes themselves, each with a client that takes on much of the work of self-configuration. And there's a very strong company behind this that provides a bunch of other services around it. So, like the other tools, it's designed to scale to tens of thousands of servers — that's the common number we've seen in just about everything we've looked at. The node itself, once it's been bootstrapped by the workstation, checks in by default every 30 minutes, and what makes it different from Puppet here is that a lot of the pulling of binaries happens directly on the clients. Things aren't pushed to them; they don't rely on the central server for a lot of binary data, and that helps the scaling. And because of the Ruby DSL, you're able to do a lot of logic, detect some of the existing system configuration, and use that to behave differently to bring the system up to where you want it to be. It's very much built around the idea of infrastructure as code, so everything is versioned by default: with the Chef workstation, the first thing you do is basically set up a Git repository to manage all of the resources you have. And with a well-written cookbook, like the ones for OpenStack, if a configuration step fails, the system can usually recover pretty well and keep bringing the system back into state. With your nodes, you never really know how many times a run may go, so the cookbooks are written in such a way that they can detect if something's already been done and, if not, fix it or resume from where they left off. Again, the Ruby DSL here, as opposed to YAML, gives it quite a bit of power to do logic and branch based on the system being configured — for example, how much memory is available; basically, a huge amount of information is known about the system that it can act on. The three main components, again: there's the workstation, there's the server, and then there are the actual clients. There's a whole bunch of value-added tools that go around Chef — analytics, a user interface, things like that — and an integrated community of cookbooks; for example, the cookbooks for the OpenStack projects depend on things like the MySQL and RabbitMQ cookbooks. So there's a huge ecosystem of things that can be built upon. Again, the architecture here is similar to some of the other tools, but the key thing is that it has that agent on each of the managed nodes. Those are directly bootstrapped by the workstation over SSH, and they pull their configuration from the Chef server on a timed basis. As part of the Big Tent projects, OpenStack has a wiki page that provides all the information on the cookbooks, and there are lots of branches: each of the cookbooks — one per OpenStack module — is a top-level Git repository, and they each have branches for releases going back the last couple of cycles. So there's a history of the cookbooks, and they're maintained very well, covering the new features and new components in the Big Tent as well as the existing ones. The documentation generally focuses on getting started quickly.
That used to be a pain point with Chef — getting started — but the documentation for OpenStack is pretty good at helping you start with Vagrant, just like Ansible. Going from there to a highly available configuration or something more complex, they give you some of the starting points, but you need to figure out some of the HA stuff yourself — though there's a community around that to help you get there. For installation, you install the Chef server first on a centralized node. You install the Chef workstation, with the Chef Development Kit, which can be co-located with the Chef server if you'd like, or on an individual laptop or a shared machine between your administrators. You can download and install the cookbooks directly, configure them with the attributes you want to drive them with and the passwords you want to set up in the data bags, and then you bootstrap the nodes. The documentation also simplifies this with some of the newer Chef tooling: Chef Provisioning can actually help you stand up a whole cluster, bootstrapping things in parallel and provisioning the machines for you. Okay — much like Puppet, a very strong community and very well-managed, high-quality cookbooks in the OpenStack community. There are lots of support structures around it: a big business, partners of the company, as well as a software-as-a-service offering. To get that power of Ruby, there comes with it the complexity of installing a development kit and understanding some basic Ruby definitions; if you want to look at existing cookbooks, you'll have to understand what exactly is going on with those resources that are installing packages. And it does have a very heavyweight agent relative to the other tools, but that's there to offload a lot of the processing — so if that's important to you, that's something to consider. So, to wrap up: essentially, Salt is probably the least mature, but it is gaining market share, and it's very popular. Whether you're an operator, an innovator, or a contributor, if you already have systems you're managing with Salt, keep an eye on those projects Animesh pointed out. They are rapidly evolving to bring OpenStack support, but again, it's quite fragmented — everybody's approaching it slightly differently. Puppet is the oldest, and it's managed through the Big Tent. It's not quite cloud-native; there are still some concepts in there that come from its legacy, it being basically 10 years old right now. Chef is also well-managed: high-quality cookbooks, a big ecosystem. It can also be used to manage virtual machines, bare metal, and containers, so it's got some more flexibility in terms of what you can deploy to. But we found Ansible is really the strongest across the board. Again, this is just from our research — there are strengths in each of these projects, and whatever you choose, you can't go wrong with anything here. It's better than writing your own shell scripts, installing packages yourself, setting up configurations, updating database connection strings in nova.conf, stuff like that. If your role leans more towards operator, there are three core projects you want to look at; as a contributor, again, the same three projects. But for general flexibility, if you don't know what you want to work with, start with Ansible — look there first. Okay, well, we hope that was helpful.
We want you to understand: OpenStack is powerful, but that means it's complex. It's a highly distributed system, and you're going to want to run some sort of configuration management. There are four tools we talked about here, and they're all very good for OpenStack, for the most part. There are other options out there that we didn't cover, such as Juju and BOSH; there were some other talks at the summit around those tools, so you can look there as well. But if you're brand new, these four are probably where you want to start, and depending on what you want to do with OpenStack — what you want to get out of it — we hope we've pointed you, at least, to a default choice. There were plenty of sessions at the summit, and the recordings are already up, so look for these on the summit agenda: there were a couple of sponsor sessions, for example the one on Salt, and there was the one from Chef with the PTLs from the Chef cookbook communities. Some of them have slides, but all the recordings are going to be there. There's also an e-book out there that takes these tools and looks at them in general — not so much in the OpenStack context — but it's a very good taste test of running the same simple application through the steps with each tool. It draws some conclusions that we disagree with, or that are different for OpenStack, but it's a pretty good starting point; you can follow the short links. And are there any people from the four projects in the room? Okay, someone from Chef is here — he's the PTL of the OpenStack Chef cookbooks. And back there, the Puppet representative, okay. Anybody from Ansible besides Paul? Okay, great. So if you're new to these tools, talk to these folks and look at those other presentations. We hope we gave you the right starting point for starting to use OpenStack effectively. Any questions? So, coming from an organization that does OpenStack operations as well as using OpenStack — running stuff on OpenStack and AWS and other things — we've got a huge Puppet background, so we use Puppet a lot. We use it to run our OpenStack, our own public cloud. But we're also building applications using Heat and other tools. Now, I'll say something mildly controversial, perhaps: is Puppet really as suitable for dynamic stacks as Ansible is? From what some of us are seeing, it doesn't necessarily play as well in this whole cattle-versus-pets world of deploying architectures, and I haven't had anyone who can tell me otherwise. What are your thoughts on that? There are certainly some challenges with Puppet. When you're adding and removing nodes, you have to do the certificate management yourself, or you tell it to auto-sign and then you lose some of the security. And that's, I think, one of the hardest things in a more cloud-native setting: getting new machines out and then getting them removed when you're done. Aside from that, if you've got a very mature Puppet organization, you've probably got a lot of the chops to do some of that stuff in-house. But starting from scratch, it's quite a steep learning curve, so getting to that point is quite hard, whereas I think Salt and Ansible, with a shorter, easier learning curve, can help a person get to that point a lot quicker.
So I think it's not that it's not capable of it; it's just that it takes you a lot longer to learn how to do it. Any other questions? Okay, as I said, we're going to post the slides, so if you follow us on Twitter — either Animesh or myself — we'll probably push them up there pretty soon. And again, thanks for coming. Enjoy the rest of the summit. Thanks, guys. Thank you.