Good afternoon, everyone. My name's David Lapsley. I'm from Cisco OpenStack Private Cloud, formerly MetaCloud. I'm an engineering manager there. This is Jürgen Brindel, my co-presenter. He's an OpenStack engineer, also at Cisco. Today we're going to be talking to you about Ansible and how it makes it easy to unify your test, development, and deployment workflows. Before we start, can I just get a show of hands? How many people here have used Vagrant? OK, great. How many people have used Ansible? Oh, fantastic. That's great. So I've been using Ansible for about a year, and that's one of the reasons I'm here. Since I've been using it, I've found it to be a huge boon to my productivity. It's really changed the way I do my development, and that's why I'm here talking to you about it. When I first started, I came across some work that Jürgen had done, which is really cool. We'll be talking about that towards the end, and we'll also have some references there to the slides and also to the source code. All right. So a number of you haven't used Ansible yet, so we're going to give a little bit of an overview of Ansible first. But first, a word about unified development, test, and deployment environments, because that's something that both David and I feel quite passionate about. If you don't have those unified, you end up with all of the problems we have known in our industry for so many years: you don't catch a bug until it is in production, or you can't reproduce issues in the development environment. You don't have access to certain test equipment. New developers have a hard time even getting started, because setting up a development environment might be difficult, and so everyone ends up with a slightly different environment, and in the end you just can't reproduce stuff. How often have we heard that? We all have. And it's the bane of our existence.
I don't know. I don't control my volume. Have you heard anything of what I said? Almost all? OK, good. So yeah, this is not what we want. There should be a better way, right? It would be good if we could set up a new development environment with a single command: a new developer comes into your team, you just tell them, run this, and boom, they have a development environment just like every other developer. And similarly with test environments and deployment environments, and all of these should look exactly the same. And that's what we're going to talk about today. So to summarize, we will give a bit of background on configuration management, because that's what we'll use for this. Give an introduction to Ansible. Talk about how it can also manage infrastructure, how we unify our test and development environments, and also have a demonstration. All right, so just to summarize a little bit the world of configuration management tools. Well, first of all, how do you configure a server? We have all seen those servers which have been sitting around in an enterprise for a long time. Some system administrator knew how to set them up, and they keep these closely guarded secrets. And you really hope nothing happens to that administrator, because if something does, you really don't want that server to fail. You have no idea what went into the care and feeding of the server over the years. That's not good. So maybe it helps if you start writing down the instructions: what does it take to set up the server? That's a big improvement, but still, it's obviously manual and error-prone. You can forget some steps as you write them down or as you try to go through them. Stuff might not be absolutely clear, or might be ambiguous. So an improvement, but still not good. Scripts are a big, big step forward, because now you can just run them, and it's pretty unambiguous. And automatically, these scripts will set up a brand new server.
But if something has changed on the server and you just want to bring it back to the state that it should be in, then if you want to script this, you end up writing all sorts of if conditions and error handling and exception handling and corner cases in your scripts. And they get really convoluted really fast. That is where configuration management tools come in, because they take care of all of this, right? And we'll see in a moment how that is done. But at any rate, those two areas there, the scripting and the CM tools, certainly allow you to automate your setup. You can bring up 100 servers identically, very quickly and reliably. Now, with CM tools, you don't say, install this. You say, I would like to make sure that this is installed at this version. So in a CM tool, you are describing the desired state of the server. You say things like: ensure that Apache is installed. Ensure that this user exists, that the sources are there, or that you have a database of a particular version installed. So a configuration management tool takes a description of the desired state. And that's all you have to provide. It then goes through these desired states, looks at the machine, compares it to the desired state, and says, well, OK, Postgres isn't installed at all. Let me install version 9.1. Or let's say you accidentally did an upgrade and all the packages got upgraded on your machine. If you run the CM tool over this again, it'll say, OK, you have version 9.3 installed, but it says it should be version 9.1, and it will automatically downgrade it for you back to the desired state. So you deliver a description of the state, very succinct. You don't have to write any if statements or whatnot. You don't have to deal with all the corner cases. You just describe what the state of the machine should be, and the CM tool puts the machine into that state. There are a number of these CM tools out there. We have all heard of Puppet and Chef, obviously.
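To make that concrete, a desired-state description in a CM tool like Ansible might look something like the following sketch (the package names and version are illustrative, not taken from the slides):

```yaml
# Hypothetical desired-state tasks: each one describes what should be
# true on the machine, not the commands needed to get there.
- name: Ensure Apache is installed
  apt:
    name: apache2
    state: present

- name: Ensure PostgreSQL is at the desired version
  apt:
    name: postgresql-9.1
    state: present
```

Running this repeatedly is safe: if the machine already matches the description, the tool does nothing.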
They've been around for a while. And they're very powerful, but there's a learning curve, and they're pretty involved to set up. Sometimes, to me, it appears a bit like the world of Java enterprise software. There's too much stuff going on. It feels strangely complicated, to me at least, and to other people too. So a few years later, people thought it's not necessary for this to be so complex. There should be a simpler way to do this. And so Salt and Ansible came around. And certainly Ansible takes the simple approach to a whole new level. Just for completeness' sake, I put in Fabric, which is more of a Python-based, script-style deployment tool, and also plain scripts. Those aren't really CM tools; I just thought I'd put them in there. So Ansible, that's what we're talking about today. It calls itself an orchestration engine for configuration management and deployment. And it says orchestration engine because it doesn't just deal with one server and configure software on that one server. It thinks about groups of machines and roles that you can have across these different machines. It can take information that it gathers as it sets up these groups and use it to configure something else. Like, you bring up 10 machines, and now you suddenly have IP addresses, and you take those and configure a load balancer with them. Ansible does this quite neatly and easily, and that's why they call themselves an orchestration engine. It's written in Python. That's good. And it uses YAML as the format for its playbooks. Playbooks are these descriptions of the desired states. You can also put in explicit commands. So you can describe state, but it also allows you, if you want to take a shortcut or do something directly, to run a specific shell script or a specific command line. So you can mix it a bit. Simplicity.
There are several points that come together to make Ansible a tool that is simple and easy to use. First of all, there is no central server that you need to maintain. With Puppet and other tools, you have to have a central server. With Ansible, you do not. So you don't have to manage keys and who has access to what. That all falls away. You don't need to install anything on the target machine. In fact, you can bring up a brand new EC2 image, or an Ubuntu image on OpenStack, or whatever you have, just point Ansible at it, and off it goes. There's nothing you have to do ahead of time to prepare this image for Ansible. Another interesting little thing is the explicit order that I mentioned there. Tools like Puppet, for example, try to be very smart and run as much in parallel as they can. And then people end up with these strange surprises about race conditions. And so, okay, then you need to define dependencies: this depends on that, so don't run this yet. With Ansible, that whole problem falls away because it just does things in order. So that's one less problem to worry about. All it needs is SSH and Python on the target machine, and most Linux machines these days have those by default. And there is now an Ansible version which can deal with Windows. It uses PowerShell, I believe. I don't know anything else about it, I've just read about it, I don't do Windows, but I thought I'd mention it. So you can take Ansible into the Windows world as well now. That's the architecture: from your laptop, or wherever you want to run your Ansible playbooks from, Ansible reaches straight out to the servers, does its work, and leaves them alone. There's nothing else you have to set up. Modules are where probably the real power of Ansible comes from. There are hundreds of them. They have been written by all sorts of people. They're just small, very specialized pieces of code which do just one thing.
For example, they can execute a shell script on a target machine. Or they can copy a file, do a template-style replacement of strings, or operate on a line in a config file. Or they can install a package. There is an Ansible module for apt, which does nothing but apt installs, for example. Another one for setting up users and groups. But there are also Ansible modules to configure RabbitMQ or databases. There are Ansible modules for Postgres, others for Apache, which know how to configure those. And even, as you will see a bit later, modules to deal with cloud infrastructure from different providers. So how does it work? It's really straightforward. You have your laptop and you tell Ansible that you would like to go and configure something out there. For example, install Apache, so we use the apt module. What happens? This little module is copied over SSH to the machine, where it is run and then deleted, and the results are returned. So Ansible takes a little bit of code, puts it onto the remote machine, runs it, and deletes it. By the time Ansible is done, there's no trace of it left on the machine. So not only do you not need to install anything, you also don't need to set aside any space for all the cruft that accumulates. There's nothing. Right, so that's a small overview of how Ansible works in principle. And David's going to take you through some examples now and get you up close with some of the playbooks. Thank you, Jürgen. So I'd like to dive into some of the details around Ansible. Ansible has this concept of an inventory file and groups. In the inventory file, you store all of the hosts that are currently under management, and you're also able to associate them with specific groups. You can group them by function, by location, by hosting provider. So here's an example of an inventory file. And you can see that we have four groups here.
We have two groups for geographical location, Europe and North America, and we have two hosts in each of those groups. And we have two groups based on function or role, front end and back end. The names of the groups are totally arbitrary. You can make up any groups you want, and the association of hosts to groups is arbitrary as well. It's totally up to you. As Jürgen mentioned, Ansible provides both CM-type functionality, where you're specifying the desired state that you would like to see on your servers, and it also acts as an orchestration engine. So you can get Ansible to perform specific tasks. So here, the first line is telling Ansible to perform a specific action. In this case, it's doing a uname on all of the hosts that are in the Europe group. The -i there is to specify the hosts file. You may have a number of these hosts files, so you can pick a specific hosts file and load those group associations. The second command there is running a different action: it's rebooting all of the servers that are in the front-end group. So you can see that this is pretty powerful, and this is the whole reason for having groups. Actions are executed against either a single host or against groups. So the fundamental unit of description in Ansible is called the playbook. Playbooks are written in YAML and they basically tell Ansible what to do. So here's an example playbook. This playbook applies to all of the hosts that are in the front-end group. It's going to execute with sudo permissions, and it has three separate tasks. The first task is to update the system, and this is a CM-type operation, because you can see it's using the apt module to install an apt package. The package is the nginx package, but it's saying that it wants the state of the system to have the latest version of nginx.
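As a hedged reconstruction of what that inventory might look like (the hostnames are placeholders, not the ones on the slide):

```
# hosts – hypothetical inventory with location and role groups
[europe]
server1.example.com
server2.example.com

[northamerica]
server3.example.com
server4.example.com

[frontend]
server1.example.com
server3.example.com

[backend]
server2.example.com
server4.example.com
```

The two ad-hoc commands described would then be along the lines of `ansible europe -i hosts -a "uname -a"` and `ansible frontend -i hosts -a "/sbin/reboot" --sudo` (using the Ansible 1.x `--sudo` flag current at the time of this talk).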
So if nginx is not installed, it'll install it. If it's installed, it'll make sure that it's upgraded to the latest version. The second command there is creating a user account. So again, it's CM-style, because it's saying that it wants to make sure that the app user is present. And then the last command there is more of an imperative, action-based command: it's copying files from the local host, the host that the Ansible process is running on, to the remote user's home. Ansible uses the Jinja2 templating language. This is so you can template the configuration files that you're going to manage with your CM tool. And it also provides a mechanism for providing variables that are going to be rendered by those templates. So here's an example. Again, we have an Ansible playbook. It's going to apply to the group all, with sudo permissions. And we're defining the variables here within the playbook. You can also define the variables separately in other files, and I'll talk in more detail about that in a little while. And you can also pass them on the Ansible command line. So here we're providing the username as a variable. And you can see in the task list below, where it's creating the user account, the name of the user is parameterized. The username variable will be inserted there based on the value of that variable. So Ansible is very flexible. You can lay out your project in many different ways. Here's a very simple way to lay out a project. We basically have the inventory file up the top there, my hosts. You have a set of group variables. These are YAML files that contain structured variables that you can then use to render templates, for example, as we just saw. The nice thing about group variables is they're automatically associated with the name of a group.
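Putting those pieces together, a playbook along the lines of the ones on the slides might look like this sketch (using the Ansible 1.x syntax current at the time of the talk; the user name and paths are placeholders):

```yaml
- hosts: all
  sudo: yes
  vars:
    username: app
  tasks:
    - name: Ensure nginx is at the latest version
      apt: name=nginx state=latest

    - name: Ensure the user account is present
      user: name={{ username }} state=present

    - name: Copy application files to the user's home
      copy: src=files/app.tar.gz dest=/home/{{ username }}/
```

The `{{ username }}` expressions are Jinja2 substitutions, filled in from the `vars` block, from separate variable files, or from the command line.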
So when, let's say, we use our current example and we run an Ansible command against all hosts, say, create users. When it runs against a particular server, the variables will be populated from the group that matches that server. So for example, in this case, the variables in the front-end file will be used to populate templates when you're running against a server that's in the front-end group, and so on. Then down the bottom there, we have site.yaml, which is the top-level Ansible file. It's the entry point for Ansible execution. So there's also a best-practice layout. It's a little bit more complicated, there's a lot more to it, but it's very extensible. And once you start using it and getting familiar with it, it becomes second nature and you're able to navigate around very quickly. One of the nice things about using this best-practice layout is that if everybody in your organization or in your team uses it, it's really easy to look into somebody's Ansible repo and understand what roles there are (and I'll talk about roles in a second), understand what functionality there is and how you can reuse it. So here's an example of the best-practice layout. On the left-hand side there, you can see ansible.cfg. These are default options that you might want to set for your Ansible application. We have two inventory files in this case, deploy hosts and staging hosts. And I'll talk about this more in a little while, but what's really cool is that the reason we have that is because all of the playbooks in the rest of the tree can be executed against two different sets of hosts: your staging hosts if you want to test, or your deploy hosts if you actually want to deploy. And that's really, really powerful. So we've got the group files over there as well.
We have host files, which are like group files, but they only apply to a single server instead of a group of servers. At the bottom there we have site.yaml, the entry point. And then on the right here we have roles. Roles are basically collections of tasks. They're reusable units of functionality, and they can be applied to either a group or a server. So we've got three roles here. At the top there we've got common, and then down the bottom you can see web and DB. And you can imagine that typically a common role will be used by all of the nodes that you have, to install basic packages that everything needs, maybe monitoring packages or something similar. We have web, obviously, for web servers, or DB for DB servers. So if we drill down into the common role at the top there, you can see we have tasks, handlers, templates, files and vars. Tasks are where you actually put your top-level playbooks. So typically, depending on how complex a task is, you may have quite a few playbooks in there. The entry point for those is main.yaml. Handlers are used for implementing functionality that needs to be triggered. At some point in a playbook, if you want to trigger an action, you can send a notification. The notification will be received by the appropriate handler in the handlers playbook, and then some action will be executed. So reboot is a typical example. Templates is where you would store all of the templates for the configuration files that you want to manage using Ansible. In this case we're managing SSHD, so we have a config file there that would typically have a bunch of variables to be substituted with the values that you pull in from group vars or host vars. And then files are just generic files. It's a catch-all for any other files you might want to copy over: scripts or data files, whatever that happens to be. And then we also have vars. These vars are just for this specific role.
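A sketch of how that best-practice tree might look on disk (the file and role names are illustrative; the structure follows what was just described):

```
ansible.cfg          # default Ansible options
deploy_hosts         # inventory for deployment
staging_hosts        # inventory for staging
site.yaml            # top-level entry point
group_vars/
    all
    frontend
host_vars/
    server1.example.com
roles/
    common/
        tasks/main.yaml
        handlers/main.yaml
        templates/sshd_config.j2
        files/
        vars/main.yaml
    web/
    db/
```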
And so they do a similar thing to group vars and host vars. They're imported when the role gets executed, but they're associated with this specific role. So this is how we use a role. We have a playbook. This is for all the front-end hosts, and it's going to use the common and the web roles. And so all the functionality that you have in those two trees, in the common and the web trees that we just saw, is now going to get pulled in and executed against all of those hosts. So one of the trends in our industry is APIs. Working in OpenStack, we've seen a huge proliferation of APIs, and these can be very useful. One of the really useful things that APIs provide us with is the ability to manage infrastructure, not just the application that we're deploying. In the past we had local APIs on Unix. It might be POSIX, or you could consider the command line to be an API. Now we've got infrastructure APIs, where we can use APIs on a public cloud to spin up servers and configure network interfaces and so on. And then of course there are the APIs for the services or applications that we want to configure and run on top of that infrastructure. This is all really cool and very, very powerful, as we'll see in just a minute. So Ansible also has a bunch of cloud modules for both public and private cloud. There are modules for OpenStack, for AWS, for Google Compute, for Rackspace, and so on. And from within your Ansible playbooks, you can use modules that will go and spin up VMs in public and also private clouds. On the private cloud side there are Ansible modules for OpenStack, Eucalyptus and so on. As an example, the AWS modules will allow you to create instances, images, virtual private networks and load balancers, and you can interact with services like S3, DNS, databases and cache. So let's have a look at a couple of examples.
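The role-using playbook described here is short; it might look roughly like this (the group and role names follow the talk's example):

```yaml
- hosts: frontend
  sudo: yes
  roles:
    - common
    - web
```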
So here are some examples for creating instances via AWS or via OpenStack. Here's an AWS playbook that'll allow you to boot an EC2 guest. It's pretty simple: we use the EC2 module and then basically just populate it with all of the information there. So your key name, the security group that you want to use, the type of the instance, which image you want to use, which region you want to run it in, the number of servers you want to spin up. And then at the bottom there you'll see there's this register construct, which allows you to capture the output of that operation from the API. That'll include information about the hosts that you've just configured: IP addresses, names, all of that sort of stuff. So it's pretty powerful. Just with this and a little bit of boilerplate code, you can start spinning up instances in the public cloud. And you can do the same thing for OpenStack. It's also pretty straightforward. You just provide a few more variables. In this case, we're substituting variables that we could pull in from another place, but it's all pretty much the same thing. And then down the bottom we have the register, where we pull information about the guests that we've just launched. So one of the powerful features that Ansible provides you with is the ability to take the output from the operation. We just, for example, spun up all of these instances in a public cloud, and now we have a bunch of output in JSON format. And we can take that output and do operations on it. One of the useful things to do is to add those hosts to our inventory. And here's how you would do it. We have our EC2 action at the top there. It's running locally on the Ansible machine and going over the API to spin up those guests. We've got the register. One thing that's different is that there's a count there.
So here we're actually spinning up three instances, not just a single instance. We're capturing the results in ec2_results. And then we're doing another local action below, and we're using the output of ec2_results. The with_items is Ansible's looping construct. Basically what it's saying is: I want you to iterate through all of the items in the ec2_results.instances list, and I want you to feed me that information, which I can access within the loop as the keyword item. So you can see that what we're doing here is going through three times and calling the add_host module to add the public IP of each of those instances to our inventory. So basically, we would have started with a simple Ansible script with no inventory file. We've executed it, it's gone out to EC2, it's spun up all of these guests, and now it's captured the results and stored them in our inventory. So if we want to do additional provisioning, like installing packages or our application, we can now do that. So as Jürgen mentioned before, we both feel extremely passionate about this. Working with an engineering team, if you don't have this, it's extremely painful. So this is super important. And the nice thing is, Ansible makes this relatively easy. So Ansible works very nicely with Vagrant as well. I know a lot of people here are familiar with Vagrant. Basically, you can run Vagrant from the command line of your laptop or server to spin up VMs. So you can spin them up locally. It also has plugins, so you can even spin up instances in the cloud, in EC2 or Rackspace, for example. If you want, you can use Ansible as your provisioner. So you use Vagrant to manage the virtual machines that you're spinning up, your infrastructure, and then you can specify that Ansible is your provisioner.
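A hedged reconstruction of that launch-and-register pattern, using the EC2 module parameters mentioned earlier (all values are placeholders, and the syntax is the Ansible 1.x style current at the time):

```yaml
- hosts: localhost
  connection: local
  tasks:
    - name: Launch three EC2 instances
      ec2:
        key_name: mykey
        group: default
        instance_type: m1.small
        image: ami-123456
        region: us-east-1
        count: 3
      register: ec2_results

    # Feed the API output back into the in-memory inventory so that
    # later plays can target the new guests.
    - name: Add each new instance to the launched group
      add_host:
        name: "{{ item.public_ip }}"
        groups: launched
      with_items: ec2_results.instances
```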
And then once those VMs are spun up, you can create an inventory file and automatically kick off the provisioning process using Ansible, which is pretty cool. So what I'd like to do now is go through and show you an example of that in just a second. So Vagrant uses a Vagrantfile, and that's your configuration. And it's actually written in Ruby, so it's pretty powerful. You can tell Vagrant within the Vagrantfile to construct instances with a certain amount of RAM and a certain number of CPUs. You can specify network interfaces, public, private and additional interfaces, and so on. Here's an example of a really simple Vagrantfile. Basically, at the top there, we're creating a virtual machine, and it's an Ubuntu saucy64 VM. We provide it with a URL for where it can download the machine image. We give it a host name. We specify a private network IP address. And then we go into the next block, which is basically where we kick off Ansible. In that block, we specify where the playbook is, how verbose we want it to be, a path to the inventory file, and also whether or not we want to do host key checking. So if we were to execute that, then we'd get an inventory file that would look something like this. And this is a little bit more complex than the one we had before. Here we have a number of different groups: Vagrant, front-end hosts, back-end hosts, DB access and so on. And you can see that in addition to specifying the names, we can also specify IP addresses and so on. We talked before about group variables. So here's an example of a group variable that you might want to use. It's basically where we're specifying, for the Vagrant group, the name of the SSH user. And the reason that we want to specify that is because by default Vagrant creates a user, vagrant, that has sudo access.
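A minimal Vagrantfile along those lines might look like this sketch (the box name, URL, IP address and file paths are placeholders):

```ruby
Vagrant.configure("2") do |config|
  # The VM itself: image, host name, private network address
  config.vm.box      = "saucy64"
  config.vm.box_url  = "http://example.com/boxes/saucy64.box"
  config.vm.hostname = "frontend1"
  config.vm.network "private_network", ip: "192.168.50.10"

  # Hand provisioning off to Ansible once the VM is up
  config.vm.provision "ansible" do |ansible|
    ansible.playbook          = "site.yaml"
    ansible.verbose           = "v"
    ansible.inventory_path    = "vagrant_hosts"
    ansible.host_key_checking = false
  end
end
```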
And so here we're just telling Vagrant and Ansible what the name of that user is. So once you've gone through and done that configuration, all you need to do to spin up your virtual machine, or your network of virtual machines, is type vagrant up. The first time it runs, it will go through and create all of the instances. And then after it's done that, it'll run Ansible against those instances and provision your application on top of them. After you've run it for the first time, you can also call the provisioning directly: vagrant provision will call Ansible directly without trying to create the instances. So one of the nice things about Ansible is that, if you capture your system configuration correctly, you can have reproducible test environments. As Jürgen was saying before, a single command to set up a test environment, and a single command to use exactly the same playbooks to deploy that to the staging environment for testing, for example. The cattle-versus-pets saying is, I think, well known: we want our servers to be cattle, not pets, and this is the thing that enables that. So this is the workflow that we want to move towards. Basically, we want to be able to do local development and testing: use Vagrant and Ansible to stand up your environment from scratch on your laptop, for example, so that you can do your development, your unit testing and your integration tests. Once you've done that and you're happy to move on to the next step, Ansible will let you go and deploy that into the cloud. Basically, all you need to do is point Ansible at a different set of nodes that are provisioned in the cloud. Ansible can even create or update those servers in your staging environment. It can provision the servers for you using the EC2 or Rackspace or OpenStack APIs.
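So the day-to-day commands are just these two:

```
vagrant up         # first run: create the VMs, then provision with Ansible
vagrant provision  # later runs: re-run only the Ansible provisioning step
```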
And you can also deploy your application with Ansible. And then finally, once you're ready to take the next step, you can deploy this out into production. And it's again using exactly the same workflow, exactly the same playbooks. And that's extremely powerful. So now I'd like to show you a short demo. I cheated a little bit here, and I'll explain why in just a minute. So this is the work, this is how I actually came to know Jürgen, because he's written some really nice Ansible scripts that will set up an environment like this for you. Each one of those boxes represents a virtual machine. At the top there, we have a cacher virtual machine, and this is going to cache both apt Debian packages and also pip packages. We've got MCP, which is basically a controller. This is a node that's going to run OpenStack services but not Nova Compute. And then we have two MHVs, which are hypervisors, and those are actually running Nova Compute. That's where your instances are going to be running. And then down the bottom there, we have a Git cache, where you can check out all of the repos that you're going to be working on and store them so that they're completely fixed. Instead of pulling over the network from the remote Git repos, you pull from your local one, which is totally fixed and not changing. And this Git cache is NFS-mounted on all of these virtual machines. And this is super powerful. If you've ever run DevStack, you know how much network bandwidth it consumes, and you also know that it changes very frequently, and often those changes can break your development environment. So one of the really nice things about this is that you're caching all of your apt packages and all of your Python dependencies, and you're fixing the software that you're actually testing, because it's all stored in that Git cache. And that's really powerful. So here's the demo, and this is where I cheated.
I was trying to think of the best way to demonstrate this, and doing a vagrant up would take a very long time. So what I did instead was take a video and cut out all of the wait time. So hopefully this will give you an idea. So this is showing me doing a vagrant up. I'm provisioning two MHVs, the hypervisors. The controller and the caches were already up. So here you can see we're off and running. It's identified that there's no hypervisor box, so it's downloading and installing the trusty image. It's waiting for the machine to boot. This takes a little bit of time. Now it's adding an SSH address, adding a public key. Now we're onto the second hypervisor. And now you can see it's also written the IP addresses into /etc/hosts, so you can reference these by name. So we're almost finished. Now we're mounting the NFS folders for the Git cache on the second MHV, sorry, the second hypervisor, and checking to make sure all the devices are healthy. Starting the second virtual machine now, the second hypervisor. We're going through all the same steps we went through before. By the way, you can parallelize this as well; in this case, it's all serial. So, setting the host name, mounting NFS. Now we're into Ansible. So now we're actually starting to execute the Ansible playbooks. The cacher needs nothing done, it's all set, so we skipped through that. Sorry, that went a bit quickly. So now it's going through the hypervisors, and it ran through a bunch of steps. What it did was install a bunch of apt packages, check out DevStack from the Git repo, copy local configuration for both of the hypervisors from the Ansible repo, and then it actually started DevStack, and you get a nice summary there at the end. And the thing that I really like about this, and this is the thing that's really changed my workflow, is that it's reproducible. I can do it every time.
I can lock my configuration down. And it's also significantly faster, because instead of having to go over the network to pull down a lot of packages, you can do that locally as well. So I have a reference to Jürgen's repo at the end of this, if you would like to have a look at that. So the key takeaways here. You just saw me do this locally using Vagrant. Using the same Ansible playbooks, I can do this against any set of machines, either virtual or real, as long as I have network access and SSH access. That's super powerful. The other takeaway is that with these relatively new cloud APIs, so OpenStack, AWS, Rackspace and so on, you can also use Ansible to provision your infrastructure. And that's pretty cool as well. So here are some references for you. If you have any questions, please feel free to contact Jürgen or me; we'd love to talk to people about this. The slides you can download from the URL there; the Ansible playbooks, that's a reference to Jürgen's repo; Ansible docs, source, Vagrant, and there's an example project there as well. So thank you very much. Any questions? Oh, sorry. Oh, yeah, sure. If you have any questions, please use the microphone over here. Hi, thanks for the talk. When you're running Ansible to provision infrastructure, how does it know on the next run that these EC2 instances have already been set up? So the easiest way is, if we use the workflow that we showed there, you'll already have an inventory file and it'll be populated, and you can use that inventory file when you run it. So the first time we run it, we have no servers, right? Then we run it, it'll go and spin up all of the guests, and it'll pull back information, which is the IP address and name of each guest, and it'll put it in a file.
And then the next time when we want to run Ansible, we can just use that exact file, and Ansible will know all of the hosts and their IPs in the groups, and then it will just execute against those. Okay, and so the EC2 module will know, if you said spin up three hosts, if they're already in the inventory file, not to do that again? Yes, it'll already, yes. And you would actually use, well, maybe Jürgen, you can jump in. Yeah, there are different ways to go about this, but basically once you have an inventory file that you dynamically created with these dynamic EC2 instances, you can run Ansible not in the "provision infrastructure for me" mode, but the "here's some infrastructure I'd like you to use" mode, right? So I usually have some little control scripts around it where I can either have a static deployment or a provisionless deployment. But basically, you create the inventory file dynamically and use that, right? Yes. Thank you for a great presentation. My question is, do you have any recommended Ansible playbook editor? Oh, not really. I use either vi or PyCharm, depending. I think Jürgen... Yeah, same here. It's pretty straightforward. And for many editors you can actually get plugins which check that your YAML syntax is correct. So I usually just work with that. Thank you. So I thought the comparison of Chef to the kind of older, more enterprisey apps was kind of useful and explained why I might want to use Ansible. But how would you compare it against Salt? Is there any kind of very obvious reason why you might want to use Ansible, or what's better? So I don't know much about Salt. I know that we have a team of DevOps engineers, and all they focus on is configuration management for our clouds. And they're very knowledgeable. And I know that they've evaluated Chef and Puppet, and we also tried Salt as well.
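The provisioning workflow discussed in this exchange (spin up EC2 guests, then record their names and addresses in inventory) might look something like the sketch below. This is a hedged illustration, not the speakers' actual scripts: the AMI, region, and tag names are placeholders, and where the talk writes results to an inventory file, this sketch shows the in-memory equivalent using `add_host`.

```yaml
# Hypothetical "provision infrastructure for me" play.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch guests idempotently (re-runs add nothing if 3 already exist)
      ec2:
        image: ami-00000000          # placeholder AMI
        instance_type: t2.micro
        region: us-east-1
        exact_count: 3               # the count the module reconciles against
        count_tag: { role: hypervisor }
        instance_tags: { role: hypervisor }
      register: launched

    - name: Record each guest in an inventory group for the following plays
      add_host:
        name: "{{ item.public_ip }}"
        groups: hypervisors
      with_items: "{{ launched.tagged_instances }}"
```

The `exact_count`/`count_tag` pair is what answers the audience question: on a second run, instances carrying the tag are counted first, so three existing hosts mean nothing new is launched.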
And we didn't have a, this was about a year and a half ago, and we didn't have a good experience with it in production. So now we're full steam ahead with Ansible. Yeah, I looked at Salt quite a while back, and I thought there was more setup required; it was a bit more complicated. And I said, oh, let me just go with Ansible, and I stayed with that. And here we are now at this OpenStack Summit, and we have all sorts of talks which mention Ansible, and I don't think anyone mentioned Salt. Ansible has the momentum, right? Yes. Thanks. Is there any built-in way in Ansible to deal with encryption and decryption of secrets? Yes, there is actually. Ansible has the concept of a vault, so you can encrypt sensitive data and store it if you want to, and then decrypt it when it comes out. So definitely. Two questions. One nice thing about Puppet is you can set it up in cron or whatever so it checks every half hour and resets the state. Is that something that Ansible is good at doing? Can you set it up to run that way? There's nothing left on the machine, so I don't know how you would. So there is something called Ansible Pull. And Ansible Pull is basically something you would install, maybe with Ansible at first, on the target machine, and you then run it in a cron job, for example. And all it does is check a repository: you point it at a repo, and it pulls down whatever's in the repo. And if something has changed, it looks for a playbook file with a well-known name in that repo and runs it, and you basically just do local actions. So let's say your configuration for your cluster has changed; then all of your instances get this, and Ansible Pull will then update locally. Yes. So it is a different way of running Ansible? Yeah, yeah. That's for when you want to keep a running cluster updated.
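The ansible-pull setup Jürgen describes, installed once and then driven by cron, could be bootstrapped with a task like this. A minimal sketch, assuming a placeholder repo URL and a half-hourly schedule; the real deployment would point at your own configuration repo.

```yaml
# Hypothetical bootstrap: schedule ansible-pull on each target machine.
- hosts: all
  become: true
  tasks:
    - name: Run ansible-pull every 30 minutes against the config repo
      cron:
        name: ansible-pull
        minute: "*/30"
        # -o (--only-if-changed) runs the playbook only when the repo changed,
        # matching the "if something has changed" behavior described above.
        job: "ansible-pull -o -U https://example.com/site-config.git local.yml"
```

By default ansible-pull looks for a playbook named `local.yml` (or one named after the host) in the checked-out repo and applies it with local actions, which is what gives you the Puppet-style converge-on-a-timer behavior without a resident agent.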
And this might be a related question, but with this sort of push mechanism, it's great for a few hosts, but if you have 100 or 1,000 hosts, it can just take forever to do an update. You mentioned something about running in parallel, and I was curious what support Ansible has if you're going to do 1,000 updates and you don't want to wait for each one to execute serially. Yeah, well, if you have, for example, a group of 100 hosts and you want to run a playbook against it, all those 100 hosts would run it in parallel. Yes. We've been told time is up, so I guess the last question. Just a clarification for when you're connecting to APIs instead of SSH-ing to the host and copying Python over. So for example, connecting to a piece of network gear that wouldn't have Python loaded on it, and you're trying to use the API. Are you then always using the local Python on your box to connect to those and do that kind of thing? Yes, yes. Okay, yeah, thank you. Great. Okay, thank you everyone. Thank you.