OK, looks like we're ready to go. I hope everyone's awake this morning. I was at the party last night, so I'm pretty impressed that you all rocked up, and a little bit more impressed that my colleague Kevin showed up. That's good for me, because the second half of the slides are all his, and no one wants to listen to me for 40 minutes. Anyway, my name is Andy McCrae. I'm a software developer working in the private cloud team at Rackspace, and I work primarily on the OpenStack-Ansible project within OpenStack, so that's an upstream project that we've got. A little bit of history on the project: it was a POC in the Havana timeframe that was actually started by Kevin, and there were a couple of us based out of the UK and in America as well who then took that project further and turned it from a Rackspace-only project into a community, upstream project that we hope everyone can use, and that we'd love to see loads of people get involved in. Today the aim of the talk is really to give you an idea of how the various components in OpenStack-Ansible hang together. It can be a little bit difficult and daunting when you first look at it; there are so many different things, it just seems too much, and it's hard to get going. We'd like to give you a bit of a kickstart on how to get going with OpenStack-Ansible and the various components that are involved. There is documentation, but again, you start looking at these reams of documentation and you kind of want to give up. We think if we give you a head start and some tips, you'll be in a good place to start looking at it and get deploying with it, because once you start, it's actually really easy, and once you've got your first deploy going, it gets a lot, lot easier.

So for those of you who aren't 100% familiar with OpenStack-Ansible, I'll give you a quick rundown of what makes it different. It's a deployment project: deploying OpenStack using Ansible. That sounds quite intuitive, but that's what it is. There are a few differences from a couple of the other deployment projects. For one, we use LXC containers for the various infrastructure services for OpenStack. We don't exclusively use containers, but we do put all the API services and various other infrastructure services in containers when it makes sense. We also build from source, so we don't use packages from any vendors. We build a pip package, and it'll be based on a SHA from the upstream repositories. That's pretty cool, because you get the code that was written by the developers: you don't get anything on the side, you don't get any opinions of how it should be, it's just how it is in the repository, and we can fully customize it using Ansible after that in the configuration files. We also use virtual environments to split up the Python package installs within the various containers and on the hosts themselves. This is just to segregate the packages, so you keep the OS and the other various services separate. Our aim for the project is to support production deployments, so this isn't a science project for us. We're not here to show that OpenStack can be deployed in containers. We did that at the start: we tried to put all of the services in containers, and you can get it to work with a couple of tricks and various things that are, I would say, not good for a production install.
But we already have production installs running OpenStack-Ansible, and not just at Rackspace. Various other people have started to use it very recently, and we've had really good feedback on it. We want to keep it so that they can keep using it in production and that upgrades can happen. Of course, that depends on the OpenStack services themselves, but we do our best to make sure the upgrade path is quite smooth, and we utilize containers to make the upgrade path smooth. What's exciting recently is that we've had quite a few new contributors from outside Rackspace. Like I said, we started at Rackspace during the Havana timeframe, and at first it was just an internal POC. Then we got it pushed up into the OpenStack namespace, which I thought was in the Kilo timeframe. Do you know? Ah, so it went up on Stackforge in Icehouse. Since then, we've tried to take out all the Rackspace elements of it. We wanted it to be engaging for the rest of the community, and not just "this is what Rackspace does, and you can use it if you want, but we're not going to help you if you don't." We want other people involved; we want to make this project better with the community. So we put a lot of effort into decoupling the project from Rackspace and removing all the Rackspace-isms and all the opinions we had, and that seems to have paid off. In the last six months we've appointed our first two core reviewers who are not Rackspace employees, which sounds like a minor thing, but it's pretty exciting when you've got a new project and you start getting contributors from all over joining in.

OK, so let's get down to how we deploy it. This is the quick and dirty guide, so I'm going to start with the quick and dirty way to deploy it. There are really only five steps. Step one, you get yourself a server with Ubuntu 14.04; it can be a cloud server, a VM, or a physical server, you just need at least 80 gigs of disk space, I think. Step two, you do an apt-get update and install git; I like to install screen or tmux as well, just because it keeps things going for you. Step three, you clone the OpenStack-Ansible repository. Step four, you run the gate-check-commit script (the whole thing is sketched below). And step five, you go and do something with your time until the deploy finishes. I was told by my US colleagues that when you finish a deploy, the American thing is to go "boom", and it needs to be crowd participation. But you're all right, because I'm not from the US. In the UK we go, "that was reasonably OK", and then we don't do any crowd participation; everyone sort of just has a cup of tea and walks away. Cool. So that's pretty much it: that's a simple gate check deploy. We run this on all our gates, so whenever you make a commit to OpenStack-Ansible, this will run against it. And it runs Tempest tests, so it's not just "did the deploy work successfully"; it actually tests the OpenStack components as well, like a full end-to-end on one host. We've done some cool things with that. For example, because there's only one host, you'd think you only get one database server or one RabbitMQ server. But we want to test the clustering, like the Galera cluster and the RabbitMQ cluster.
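Roughly, that five-step quick start looks like the following. The script path and repository URL are the usual ones for the project, but check the repository you've cloned for the exact names.

```bash
# Step 1: start from a fresh Ubuntu 14.04 host (cloud server, VM or bare metal)
# with at least ~80 GB of disk.
# Step 2: update the package index and install git (plus screen or tmux).
apt-get update
apt-get install -y git-core tmux
# Step 3: clone the OpenStack-Ansible repository.
git clone https://github.com/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
# Step 4: run the gate check script; it bootstraps Ansible, lays down the AIO
# configuration, runs all the playbooks, then runs Tempest against the result.
./scripts/gate-check-commit.sh
# Step 5: go and do something else with your time until it finishes.
```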
To test that clustering, we use a setting called affinity to create more than one container of a given type, so you've actually got a full three-node Galera cluster running on your single host for testing, which is kind of cool. But at the end of the day, it's not useful for doing an actual deployment. So the first thing you want to do when you're doing a manual install on your own hardware, and not just an AIO, is to start setting up the containers. In general I'm kind of against putting random config files in slides, because no one ever actually reads them, but in this case I wanted to show one example. We have a user config file called openstack_user_config, and it's just a definition of the hosts that you have, an IP to connect to them on, and what kind of group you want them in. In the example on the slide it's os-infra_hosts, the OpenStack infrastructure hosts, which will be the various API servers. As another example, if you just wanted Keystone you could use identity_hosts; if you want your infrastructure services, so Galera, RabbitMQ, Memcached and so on, you would use infra_hosts. And it's just a specification of the IP for the physical host. Then, using the dynamic inventory we have, it will munge that together with an environment file and create the container hosts within the Ansible inventory, so it'll have hosts it can target that get created by the plays for the various services.

So the openstack_user_config file is for your hardware definitions. You also have to define your network infrastructure: what are my interfaces called, what are my bridges called. One of the things that catches people out a little bit with the project is that we made a conscious decision right at the start not to handle your hardware setup. We think that would make a deployment project a lot more difficult, because you could pick whatever hardware you want; we can't account for every different scenario within a hardware configuration, and we can't tell you, and don't want to tell you, how to set up your networking, how to set up your hardware, and what hardware to use. So we decided that we'll tell you what you need to have set up. You need to have these networks set up; if you want to use LVM volumes for your containers, you need to have those configured; and various other things. But we're not going to do that setup for you. So when you're setting up the network configuration, it's mostly about looking at your infrastructure, making sure that the correct bridges and interfaces are there, and then setting the values in the file to say what they are, including IP ranges and various other things. This is actually the hardest part of deploying. Setting up this file correctly is pretty much the crux of what makes it slightly difficult or challenging for people, because it is important to get it right. If you get it wrong, you're going to run into issues where you have containers in the wrong places. Or if you've put the same host in twice with different IPs, you'll get random failures; or a different name with the same IP, and it'll try to do an apt-get update at the same time and you'll get failures from that. So this is actually the hardest part, and I think it's probably the biggest barrier to entry for people, because you need to get it right, and if you don't, it fails. And it's quite daunting; there are a lot of settings for the networking stuff. But fortunately, we've created a sample file within the repository, and a cut-down sketch of that kind of file follows.
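As a starting point, a trimmed-down sketch of the host definitions might look something like this. The group names and network keys here follow the commented sample file in the repository, and the addresses are only placeholders; the full provider network layout is left to the sample file.

```bash
# Copy the sample configuration into place, then describe your hosts in
# /etc/openstack_deploy/openstack_user_config.yml.
cp -r /opt/openstack-ansible/etc/openstack_deploy /etc/openstack_deploy
cat > /etc/openstack_deploy/openstack_user_config.yml <<'EOF'
cidr_networks:
  container: 172.29.236.0/22      # management network the containers live on
  tunnel: 172.29.240.0/22         # VXLAN tunnel network
used_ips:
  - "172.29.236.1,172.29.236.50"  # addresses already in use on those ranges
global_overrides:
  internal_lb_vip_address: 172.29.236.10
  external_lb_vip_address: 203.0.113.10
  management_bridge: "br-mgmt"
  tunnel_bridge: "br-vxlan"
  # provider_networks go here too; see the sample file for the full layout
# Infrastructure hosts: Galera, RabbitMQ, Memcached and the API containers.
shared-infra_hosts:
  infra1:
    ip: 172.29.236.11
os-infra_hosts:
  infra1:
    ip: 172.29.236.11
# If you only wanted Keystone somewhere, you could list it under identity_hosts.
identity_hosts:
  infra1:
    ip: 172.29.236.11
compute_hosts:
  compute1:
    ip: 172.29.236.21
EOF
```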
So we do have normal documentation, which is on the OpenStack docs site. But within the repo we've got sample files as well, commented with, we hope, really helpful notes on what each setting should be, and that's a really good resource to start looking at. So the next thing is: I've set up my hosts and I've specified my hardware; how do I then specify how I want OpenStack to deploy, the various settings within OpenStack that I'd like to see, the changes to defaults and so on? We have a file called user_variables, which is pretty similar; it's just a definition of variables. You don't actually have to set very many of them. We've adopted a principle where we want the default settings to be the OpenStack defaults as much as possible, so the only time you have to change them is when you deviate from the defaults within OpenStack, which, of course, as a production deploy, you're going to want to do. But it does mean you should only have to change the ones you care about and not worry about the others. Here's an example: in the file it shows you want to set Swift as your Glance default store, you want to use KVM as your Nova virt type, and you want to set some allocation ratios for Nova. There are variables for each of the projects, and that's how you figure out what you can set: there's a defaults file within each role, and within the role there's a list of variables, and those are the ones you can override within user_variables. This is on a deployment basis, so it applies to the whole deployment. You can also set container-specific variables; if you wanted one container to have a variable that others don't, you would do that in the user config file, because that's for your hardware specifically, so it's specific to a particular host, whereas this is specific to your deployment. We use the host-specific vars for things like Cinder, where you want to set the Cinder back-end type or your Cinder driver on a specific container.

So there's one other file you'll need to set for customizing the deploy, which is your user_secrets. This is just the file containing all the passwords and keys and various other settings that you should keep private. The idea is that you could then encrypt this file using Ansible Vault or another tool, and it'll still run through successfully. It's important to note that we actually don't set defaults for passwords. The reason goes back to what I said earlier: we want this for production deploys, and what often happens when you set default passwords is a deployer goes, "oh hey, I'll just run this thing", forgets to set the passwords, and then you've got the super-secret password of "super-secret". We don't want that, and we've seen it in the past, so we have no defaults for passwords. If you don't set the user_secrets file, the deploy will fail when it tries to find one of the password variables. But to help with that, we've created a nifty script that will create random passwords within the file. It sets all the required variables with a random value based on the type of value they are; for example, keys get one kind of value, and passwords get a certain length and bit value (the overrides and the password script are sketched below). Alrighty, so those are pretty much the files you need to play around with, and once you've done that, you're ready to start deploying. There are a couple of ways you can deploy.
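For example, the overrides and the password generation step might look roughly like this. The variable names are illustrative of the ones in each role's defaults, and the script path is the one in the repository's scripts directory.

```bash
# Only override the defaults you care about in user_variables.yml.
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
glance_default_store: swift        # use Swift as the Glance default store
nova_virt_type: kvm                # KVM as the Nova virt type
nova_cpu_allocation_ratio: 4.0     # example Nova allocation ratio overrides
nova_ram_allocation_ratio: 1.0
EOF

# Passwords and keys have no defaults on purpose; generate random values for
# everything in user_secrets.yml with the helper script.
cd /opt/openstack-ansible
python scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
```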
In my opinion, the easiest way is probably still a script called run-playbooks. This actually gets called by the gate-check-commit script. There are three steps in the gate-check-commit script that I mentioned in the first slide about deploying. The first bit is bootstrap-ansible, which is essentially a script we created to install the right version of Ansible for the project and to create a wrapper we've made called openstack-ansible. That wrapper is pretty simple: it just chains in the correct variable files for you, so you don't have to keep doing "-e variable file, variable file". You can just do "openstack-ansible" plus the playbook name, and it'll include the variable files that OpenStack-Ansible uses. So that's the first thing the gate-check-commit script runs, and you'd need to run it. The second thing it does is the AIO bootstrap, bootstrap-aio, which essentially lays down those files we just talked about. And the third part is run-playbooks. The run-playbooks script is quite useful because it runs the appropriate service playbooks for you, in the right order, and it also uses something we created called the successorator. We created the successorator to help with situations where there are upstream issues. For example, we've had issues getting keys for upstream repositories for various services, or other dependencies on things we can't control, and they'll fail intermittently because there's a connection issue or they're having problems on their side. The failure would then mean I have to see that it failed and go back and do it again. So we created the successorator, which will just try the playbook again and run through, and if it really fails, up to three times I think it is, then it fails out. But it also starts putting the output in verbose mode once it fails, so you don't have to go back and rerun it in verbose; it does that for you, and you can then see the output of where it failed from Ansible. So I like using that script. You can also edit the script itself; there are some environment variables you can set to choose which services you want deployed and which ones you don't. That's pretty useful as well, because there are some services, like Ceilometer, or even Swift, or a couple of the others (Tempest is a good example), that you may not want to deploy; it's a user preference thing.

Cool. The other way, if you don't use the script, is to use the playbooks directly. There are essentially three groups of playbooks. There's the host setup, which is essentially "install the packages that the host requires in order to run" and also create the actual containers; that's part of the host setup. The second group installs the infrastructure services, which is Memcached, Galera, RabbitMQ. And the third group installs the OpenStack services, so it installs the OpenStack services within the containers. This is useful because if you had a configuration issue with, let's say, Keystone, I don't want to have to run through all the host setup plays and all the infrastructure setup plays just to run Keystone again and have it succeed, because all the other sets of plays have already succeeded. I can just run the OpenStack setup play, and it will do Keystone and the services after that (see the sketch below).
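If you skip run-playbooks and drive the three groups yourself, it looks roughly like this, using the openstack-ansible wrapper so the user_* files are picked up automatically. The playbook names follow the repository's playbooks directory.

```bash
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml            # host prep and LXC container creation
openstack-ansible setup-infrastructure.yml   # Memcached, Galera, RabbitMQ and friends
openstack-ansible setup-openstack.yml        # the OpenStack services themselves

# If only one service needs another pass (say Keystone after a config fix),
# rerun just its play rather than the whole chain:
openstack-ansible os-keystone-install.yml
```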
Or if it happened further down the line, for example in Swift, which I think is the last one to get set up, you could just rerun the Swift play itself. So there's quite a lot of granularity there, and that's a pretty good way to deploy efficiently if you're fixing failures as you go. So yeah, there's that option to just use the individual playbooks. All right, so you've got a deploy running, it's customized, and everything's great. But now you want to drop some servers that are dead, or that you don't need any more, or that you want to repurpose, or something like that. Maybe you want to add new servers, which is probably more likely, or you want to change some settings. That's actually quite easy when it comes to adding or changing servers, because we have what's called a dynamic inventory script, which essentially takes the existing inventory and compares it to your openstack_user_config and the environment file, which is a definition of which containers go on what hosts and what groups they belong to. And it will say, these are new servers, let's add them to the inventory, or these settings changed, let's update the inventory. So that's really straightforward: all you do is add them to the user config, and off you go. Removing servers is a little bit more difficult. The problem is that the inventory file already exists, and the entry for that host is in it. If you remove the host from the user config file, the script will still compare it to the inventory and say, it's in the inventory, I'm going to leave it there; I'm not going to remove it. We did this on purpose, because we don't want people accidentally running it with no user config file and removing all their servers, and that's pretty much what would happen. So removing servers has to be a much more manual effort. But to help, we've created a script called inventory-manage, which is inside the scripts directory in the repository. It's got a help function you can look through, but essentially you can delete nodes and remove containers using the inventory-manage script, and then you can run the deploy again and it will work fine.

I'll hand over to Kevin now. Hey, how's it going? So yeah, my name's Kevin Carter. I'm also a developer with Andy at Rackspace Private Cloud, working on OpenStack-Ansible. So again, welcome. As Andy has been saying, we have the ability to deploy OpenStack and then rethink how that deployment was done from the very beginning. Andy showed a picture of the allocation ratios for Nova early on; if I wanted to change that six months after my deployment, I can certainly do that. I can do that with any service, with anything that we support, in the way that we support it. Essentially, by modifying or editing the user variables, we're able to customize the deployment for the deployment's needs. That actually becomes important later on in life, because, again with the Nova allocation ratios, maybe I want a specific host to have a certain allocation ratio, or I want to be able to attach to a different storage back end. We have that baked into the system. And to help with that, you can run openstack-ansible, the name of the playbook, and a tag. An example of a tag would be for Keystone: if I wanted to reconfigure Keystone and then restart all the services across the entire environment, it would be the keystone-config tag (roughly as sketched below).
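Roughly, that reconfigure-and-restart flow is a one-liner; the playbook and tag names here are the Keystone ones, assuming the usual per-service config tag convention.

```bash
# Change the relevant variable in /etc/openstack_deploy/user_variables.yml,
# then rerun only the config tasks for the service across the environment.
cd /opt/openstack-ansible/playbooks
openstack-ansible os-keystone-install.yml --tags keystone-config
```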
So you run that, a couple of tasks go by, then all the services restart, and you're up and running with the new config. So we have the option to customize everything from the top down. Now, with OpenStack-Ansible you can actually modify, add things, remove things, and change your deployment up, and that can be done from the core. The environment file, as Andy alluded to, allows you to add a service that we may not already support. For example, one of the services we're working on now is Zaqar. Let's say I wanted to deploy Zaqar: you could go into your environment file, add the Zaqar services, and then carry on with our role that is being developed upstream. Now, the environment file is a little bit hard to understand, because it's a YAML file and it brings in a whole bunch of different little primitives. But again, we have an example of how it works and why it works, and once you begin to understand how the rest of the deployment is put together, it becomes really simple to extend. Now, the rate of change in OpenStack is immense; from where we were a few years ago to where we are today, the number of options or things you can change is never-ending. So we have a tool, an Ansible module, called config_template, and it allows you to reach into any conf file, INI file, YAML file, or JSON file and change bits. You can add keys, add values, add entirely new sections. That gives you an option when we don't have a templated variable for a specific setting, or we're not generating a sane default for it: the config_template module lets you go in and add it (there's a small sketch of that at the end of this part), and then you can push it out to production today. That module is being PRed into Ansible core, and I'm hopeful it can get in for 2.1, but we'll see; it's been there for a little while. Other projects have adopted it too, like ceph-ansible; they have a very similar capability to inject configuration options into their environment, and that uses the config_template module. So it's exciting to have that kind of dynamic environment and be able to push it out at a very large scale without having a templated option for every single config setting that could potentially be changed. I think in Nova it's something like 1,800 potential combinations. So we will template and produce the thing that gives you a production-ready deployment, and if you want to customize it again to do anything you want, you can do that.

So like Andy was mentioning, we're in the OpenStack namespace and we're a community project, and we want to help you. If you need assistance with your deployment, we're in #openstack-ansible on Freenode, and our community is growing every day. Myself and Andy are in IRC, and we're building a very awesome community that is friendly and super helpful to one another. One of the really cool things, as somebody who helped start this project and get it going, is seeing people I don't work with and don't know helping other people I don't work with and don't know. So the community is growing. We use the dev mailing list and the operators mailing list for some of the community discussions that we want to float out there and let people ponder on, but most of our conversations happen on IRC.
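As a sketch of the override workflow Kevin describes above, injecting an option we haven't templated might look like this. The nova_nova_conf_overrides variable name and nova-config tag are examples of the per-service pattern, and the option itself is only an illustration.

```bash
# Inject an arbitrary nova.conf option through the config_template overrides,
# then push only the config change out to the environment.
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
nova_nova_conf_overrides:
  DEFAULT:
    reserved_host_memory_mb: 2048   # an option with no dedicated variable
EOF

cd /opt/openstack-ansible/playbooks
openstack-ansible os-nova-install.yml --tags nova-config
```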
We've also, painstakingly at times, created documentation. Andy alluded to the rafts of documentation that we have, and at docs.openstack.org we have pages and pages on how all of this gets put together, why it works, and some helpful things like restarting a Galera cluster or recovering from a failure. Because we want production-ready OpenStack, but the reality is things break, and you're going to have to deal with that. So we've created documentation on how you can recover from those kinds of situations, and on how to extend things. We're hopeful those docs are helpful. So, looking forward. OpenStack-Ansible has come a long way, especially if you go and look at the tragedy I created in the Icehouse timeframe. Actually, it's quite nice now. But we're looking at trying to do more with the project, do it faster, and make things more stable. We're already putting this in production; we have very large clouds running this code today. But it can always be better, and we're always going to extend it. One of the things we did in this last cycle was something we called the independent role repositories, where we took every single one of the roles and made it its own independent repository, all in the OpenStack namespace. The idea was that you'd be able to work on something like Keystone as just Keystone, and not have to incorporate all of the changes of OpenStack-Ansible in this giant deployment. If I only want to work on Keystone, only want to see how that code is deployed, or work on that role, you can do it from the Keystone role. Now, one of the really cool things we did with that is that we started gate testing on every single one of those roles, so running a role creates a little micro cloud, and that's something we call developer mode. Actually, another good example would be Swift. Swift is a complex service, but we'll set all of that up for you on a single node in developer mode, which pulls down the source, builds a little venv and a couple of containers, and then runs the Swift functional tests on it (there's a rough sketch of that workflow at the end of this bit). It's actually a really nice developer workflow, especially if you're thinking, "I want to work on upstream Swift." So the independent role repositories have enabled a lot of that for us, and we're going to continue with that and add more services, whether that's Designate, Zaqar, or some of the other ones we're working on; there's Ceilometer, Ironic. Ironic is currently under development right now; it's mostly there. If you follow me on Twitter, you would have seen a rant I had a couple of weeks ago about my success with Ironic. But anyway, yeah, we're going to continue that trend as we roll forward. We're also looking to start working with Ansible 2. Right now we rely on Ansible 1.9.4. I think 1.9.6 was broken for us, so if they do a 1.9.7, we'll rely on that, but we have 1.9.4 at this time. 2.x is something we're looking at moving to in the next cycle or two, while also maintaining support for some of our 1.9 installations. Ansible 2 is actually kind of exciting for us, because some of the new modules they're going to produce may help with some of the pain points we had earlier on, especially in terms of the network setup. If we can help with that by using some of those core modules, I think that'll go a long way toward easing adoption. But Ansible 2 is going to be a good thing for us.
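For a feel of that developer-mode workflow on a single role, it's roughly this, assuming the role repositories keep the usual tox-based functional test target; the repository name below is just the Keystone role as an example.

```bash
# Work on one role in isolation instead of the whole deployment.
git clone https://github.com/openstack/openstack-ansible-os_keystone
cd openstack-ansible-os_keystone
# The functional target builds a small "micro cloud" for the role (containers,
# a venv built from source) and runs the role's functional tests against it.
tox -e functional
```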
Another thing we're working on is Ubuntu 16.04, which just dropped. We're hopefully going to have 16.04 support within this cycle, but we'll also still be able to deploy against 14.04. Another thing we have done, and will continue to do, is upgrade and support environments from the point in time when they were deployed. So we had Icehouse customers, and we've upgraded them to Juno, we've upgraded them to Kilo, we're going to upgrade them to Liberty, and we'll upgrade them to Mitaka. By forcing ourselves to upgrade these environments, we're going to make mistakes, things are going to happen, but we're always going to be able to help our deployers come forward. So looking ahead, we're going to continue that trend and try to make things even easier. I have no more slides, but that is the quick and dirty talk on OpenStack-Ansible, and I would love to have questions and engage the audience. Maybe we can get a "boom". Anybody? All right, you first.

So you mentioned that you're expecting the base operating system and, I guess, network configuration to be all set up. What's the minimum baseline that you would be looking for in order to use your deployment software?

So the minimum base, like the amount of RAM, disk space, and number of hosts?

More in terms of what these machines have to be configured with. I mean, you mentioned the network, for example. Does it take care of all the Galera replication, all that kind of stuff?

Yes. So yes, OpenStack-Ansible will set up a Galera cluster for you. If you have Galera cluster XYZ over here, we'll go and talk to it, but if you don't have that, we'll produce it for you, and we think we've done a really good job at making that scalable and fast. We'll also do that with RabbitMQ, and, if you're using Ceilometer, MongoDB. But all of those things are just a connection string, so if you have an amazing MongoDB cluster over here, we'll go talk to that. We're not going to enforce our infrastructure choices on you. In terms of networking, the reality is that we've simplified what we require from a network perspective to a collection of bridges, and we've named them in very uncreative ways, like br-mgmt, br-vlan, br-flat, and they all have a very specific purpose. Now, you don't need those exact names; they're arbitrary. But yeah, to your point, we'll set up what you need; otherwise, networking is kind of on you.

Do you also set up HAProxy or something else to handle the load balancing and HA?

Yeah, actually. OpenStack-Ansible will set up HAProxy for you if you don't have a load balancer already. At Rackspace Private Cloud we're using F5s, so all of our deployments have an F5 in front of them, but again, we'll integrate with that, because we don't feel that's a piece of technology that must be forced upon you. If you have an amazing router over here or a load balancer over there, we'll use it; we're not going to set it up for you, but we'll set up the environment to consume it. Now, in the HAProxy kind of setup, we'll set up the ACLs in HAProxy, and we also have, what is it, keepalived set up so that you can have multiple HAProxy nodes. In this cycle I have a patch set currently up for review to introduce customizable ACLs and SSL termination for all of the public services using HAProxy. But yeah, we do have an HAProxy setup. And thank you.

And how do you find the troubleshooting of the services when they're in containers?
Not specifically Ansible, but it seems like it could be a little bit more difficult to get in there.

So in the deployment, one aspect is a logging node. All of our logs get published from the various services, containers, and hosts out to the logging node, and you can point your analytics system at that and start pulling data out of it, or you can tail -f and watch a stream of consciousness fly by if you really want to. In terms of troubleshooting, a lot of the time the issue with a service is that it was misconfigured, or you're on the bleeding edge and it stack-traced and died. To troubleshoot that, you log into the container. We're using LXC, so that's a machine container, a lightweight VM: you can SSH into it, all of your GNU tools are there, and you can carry on troubleshooting. I find it simple. I do know that it is building a fairly large cloud; Andy mentioned our all-in-one earlier on, and that builds a 32-node cloud with a single hypervisor, which is the host. So yes, you're interacting with more services, but a lot of that brings us stability and scale.

Hi. Could you talk a little bit about the differences and the relationship with OpenStack Kolla?

We haven't, I mean, early on we talked with the guys from Kolla: Steven Dake, Mihail, Sam Yaple, Daniel Hansen. We talked to them a lot. They decided to go with Docker technologies as their fundamental core; we're using Ansible with LXC containers. We haven't talked to them a lot recently, but we did work together early on. We're both solving a very similar problem; we've just chosen different fundamental technologies.

Would you say you're closer to upstream, or the same?

I don't know that we're the same. I haven't looked at them in a long time. I do know that we've done very large deployments with OpenStack-Ansible today, and we've done it across multiple regions. I don't know what their system is capable of at this time; the last time I looked, they were still using Docker Compose, in the Kilo/Liberty timeframe, and I don't know how far they've come. So I couldn't answer that honestly. Thank you. Yeah, of course.

You said that you can point your deployment at any running MongoDB or Galera cluster. Is it possible to point it at an already running cluster and take it over? Like a cluster deployed with something else, and just swap the database and the control plane underneath?

I'm going to say yes-ish. The reason is we can turn the LXC containers on and off, and if you had a running Galera cluster over here on bare metal, you could turn off the LXC containers, point our system at that, and it would take it over. The "ish" part comes in because who knows how that was set up? What's the config like? Are you dropping extra conf files that get sourced into the main my.cnf? There are various things that come into play there that we can probably integrate with but that may cause a problem.

I'm not thinking about taking over the Galera cluster, but actually swapping the control plane: all the APIs, the Nova scheduler, Nova conductor, everything that was deployed with our old tooling, and now using OpenStack-Ansible for everything except the database.

That should work. The hostname stuff, though: when you do a Nova service list, you're going to see a bunch of hostnames, and all of that is probably going to need to be munged about.
But that's all OpenStack API services; we're not going to take care of that for you. So I want to say the answer still stands: yes-ish. Thanks. Yeah.

Could you give some details on the log node setup that you just mentioned? Thanks.

The log node setup; yeah, you can take that one. So the log node setup is actually quite simple. It's an rsyslog server, and all of the services use rsyslog to publish logs to it. We did use, I think it's called the IDT plugin, which allows us to create a directory for every hostname attaching back to the rsyslog server, so it's not just one massive file; it's lots of massive files, but it's all very separated. So again, if I want to tail -f the logs, I can do that for all the services, or for that one specific host that's causing me a problem. And that's all being done by a very, very simple rsyslog setup: the plugins plus rsyslog. Yeah, and actually that's a good point. In our rsyslog client role, we can, and we've tested with, Splunk and Loggly, and we're just using rsyslog to shove logs over there. There's an example in the defaults YAML file that shows how to do that and why it works. And that can all go over SSL to log service XYZ, as long as they support rsyslog.

Well, I'm a Puppet guy, so I'm used to things running constantly and being idempotent and so on. Do you run Ansible periodically to make sure everything is in a known state, or is there another way to handle that?

I'm going to say we don't run it periodically. I think the idea is that you want to keep up to date with bug fixes and various other things, so it would be a good idea to run it, but not so much to keep the state the same, more because you want to actually be picking up fixes in OpenStack itself, and for various other reasons. With the container approach, Kevin talked about troubleshooting a little bit, but if you're not super interested in why something failed, we actually recommend just deleting the container and building it again. That works for containers like Galera, where it's a cluster, just like the API services. So we're of the mindset that you shouldn't need to keep re-asserting the config on those things; no one should be logging in there and changing stuff. But I mean, we used to do the Chef thing, and that was pretty much the same, so it does work. Our opinion is that you shouldn't be changing those, and that can be a little bit naive if people are going to go in and change them. But then when services stop working, you can either just delete the container and build a new one, or troubleshoot it if you're really interested in why it's failing; if it's failing all the time and you keep rebuilding it, then it's probably worth troubleshooting. So I hope that answers it. Yeah, for sure.

So my question is, how do you handle service upgrades? For example, a new version of OpenStack comes out. Is it a new version of your playbooks that you have to check out from GitHub and rerun, or is there some version parameter you can plug in that will pull down the correct source files?

So actually, both, in a way. The SHAs for OpenStack, the git SHAs for the upstream repositories, are variables. So you could technically deploy using, say, Kilo, and set the SHAs to Liberty or Mitaka. That would probably not be advised, but you could do it.
So you could not download anything extra from OpenStack-Ansible and just change those SHA variables (there's a short reference sketch at the very end). But we update the SHAs regularly as we go to keep them tracking, so when you download the latest version of OpenStack-Ansible you'll get new SHAs, and when you deploy you'll also get any bug fixes we've made within OpenStack-Ansible. So I think it would be a bad idea to just change the SHAs and not update everything else, because it might not work; we're not going to say it does.

I'll expand on that a little bit in terms of OpenStack service upgrades. Since Liberty, all of our OpenStack services are not only deployed in a container, they're also inside a virtual environment. So when we drop down new code, really the only thing that changes to get that new code running is the init script, and we rewire it to point at the new venv. We've done that to isolate OpenStack's Python bits from the host machine, because Linux operating systems depend on Python, and we don't want the OpenStack versions of things conflicting with the vendor versions from Ubuntu or Red Hat or whoever. A lot of the upgrade logic, the business logic for upgrades, we take care of in the roles for you, because we're going to change the version of MariaDB eventually, and we're going to change the RabbitMQ version. So if there's a new RabbitMQ version to be installed today, it will go in, install it, and bring the cluster up to date with the new code while maintaining uptime; that's all part of the role, from an infrastructure perspective.

So we're getting kicked off the stage now. If you've still got questions, feel free to come and talk to us afterwards; we'll probably be outside, because there's another talk in eight minutes.
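For reference, the SHA variables mentioned in that last answer are ordinary overrides. The names below are illustrative of the per-service git install branch pattern and can point at a branch or a specific commit.

```bash
# Pin or bump the source each service is built from, then redeploy.
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
keystone_git_install_branch: stable/liberty   # or an exact commit SHA
nova_git_install_branch: stable/liberty
EOF
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-openstack.yml         # rebuild the venvs and roll the services
```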