Yeah, okay, cool. Well, hello, everybody. How's it going? Today we're going to talk about deploying OpenStack with Ansible, and in particular what we're doing with the os-ansible-deployment project and how Auro is deploying their public cloud using Ansible. So I guess I'll start. A little bit about myself: my name is Kevin Carter. I work for the Rackspace Private Cloud, and I've been working for Rackspace for about four years now, on the development side of the Rackspace Private Cloud house. That pretty much summarizes who I am in a nutshell. But anyhow, we are attempting to deliver OpenStack to our customers in a very easy and consumable fashion, and Ansible allows us to do that. Curtis?

Oh, yeah, and my name is Curtis Colquitt. I work at Auro. We have a public cloud based out of Vancouver, actually. I'm actually from Edmonton, so across the mountains in Alberta. And yeah, I'm interested in OpenStack, obviously; that's what I do for a living, basically. Information security as well. And on Sunday I was programming a film festival for the first time, so that was something new I was doing on Sunday. And today I'm here talking to you guys, which is also new. So thanks. But I'm just going to let Kevin sort of go through his piece, and I'm going to sit down and come back up for my piece. So thanks.

So to start off, you know, I'm going to call out Robert Cathy. I don't know if you're in this room or anything, but this was up on Twitter, and I just want to highlight it: the fact that OpenStack is cloud infra, and cloud infra is hard. There are a lot of people out there saying that they're making OpenStack easy or trivial, or making OpenStack this, I want to say, this unicorn that can be deployed anywhere, for any reason, for some magic purpose. And the reality is that OpenStack is very difficult, and what we're attempting to do is actually make that process simple. We're not taking away the fact that OpenStack is complicated: a whole bunch of different projects working together to achieve a common goal, to deploy a cloud platform. And so our end game, using Ansible, is to make that simple for you, but not to take away functionality from your actual cloud. So I saw that quote on Twitter and thought, ah, I must put this on the slide.

So what is OSAD, as we affectionately call it? It is the OpenStack Ansible Deployment project, and we're really about the deployer experience. We want deployers to be able to come in and use this system, use Ansible, to deploy OpenStack in the same way throughout the lifecycle of the cloud. I'm using the words "vanilla OpenStack," and that's probably taboo in the greater community, because a lot of people say that you can't use vanilla OpenStack. But I'm here to tell you that in our project, we are. We are pulling down OpenStack from the upstream Git sources, building it as Python wheels, and distributing it throughout the environment, making it so that you can deploy OpenStack from upstream sources as it was intended by the developers. So no proprietary secret sauce or bits. Why we're here, really, is that in late 2013 or so we had a bunch of problems with our deployment system. And it wasn't that our old deployment system didn't work; it just didn't do the things we wanted it to on a consistent basis. So we set out to solve those problems, and we wanted to make sure that we could maintain a scalable, stable environment.
And again, like I said, a repeatable process throughout the lifecycle of the cloud. So, to talk about some of the problems we were trying to solve: the first thing we had was packaging problems. I don't want to call out any one set of packages as the problem, but they were either out of date, or they would get updated and break in some spectacular way. There'd be a patch in there that you didn't anticipate. They would lay down a bunch of out-of-band configuration on the host that you actually didn't account for, and some of those out-of-band configs would reference old variables that didn't exist anymore. And while OpenStack doesn't do a lot of config checking when it's loading these configs, if there was a value in there that was no longer being used, or was deprecated, or caused some other unknown problem, we wanted to make sure that it no longer affected our deployments. And then broken dependencies, where you would go add a bunch of compute nodes to an environment and install a client, only to find out that the client was updated and now references a dependency that didn't actually exist. These were problems that we ran into across various operating systems and various upstream packaging vendors, and so we knew we needed to solve that problem.

The next thing that we had was the deployment tooling: the maybe-sometimes-eventually-consistent kind. We were using the RCBOps Chef cookbooks, the Stackforge cookbooks, and a couple of other deployment tools out there, and you would run Chef three times just to make sure that your environment was correct on a deployment. And you came into it knowing that you had to run Chef three times: the first time would create all your certs, the second time would install the packages, and the third time would start the services. And that was something that we didn't want to do anymore. We wanted to start a run and know that it was going to do the right thing the first time, and we needed it to be very deterministic. Upgrades, in that eventually consistent kind of model, became very difficult. You would pull down new playbooks and they would run something that was maybe only supposed to run the first time; you knew it ran on the original deployment, but you weren't really anticipating it running again. And so an upgrade comes through and destroys a bunch of stuff, and then you have to go and unwind that. Upgrades became really hard, and even a rolling upgrade, say from one version of Havana to another version of Havana, was unacceptable for us in that model.

And then there was the steep learning curve. Again, I'm only going to speak about what we were doing with Chef, but the Chef DSL is almost a language of its own. And so you have developers who are primarily Python developers, as we are here in the OpenStack community, coming in and needing to know Ruby-ish, but it's the Chef DSL. Unless you're calling the Chef gem, and then you can do all kinds of stuff inside Chef internals, or you're calling an execute block and running a bunch of bash commands, which would then change the precedence of how things run. So there was a very steep learning curve to using the existing deployment tools. And then the legacy architecture.
A lot of the time when people talk about how they're scaling OpenStack, or how they're standing up these architectures, they're running a controller-one, controller-two model with a floating VIP that moves between the two. All the services are running on those two controller nodes, and the rest of everything else is a compute node. And maybe there's a network node in there somewhere, but a lot of the time they're putting the network node on that control node too, in the case of using Neutron. And so this VIP failover would cause problems as your environment scaled out. You could test VIP failover all you like in your controlled environment, because you would test that, yes, when this service goes down the other service comes back up, and everything was happy. But the reason a VIP falls over in the first place is because everything wasn't happy. And so the controller-one, controller-two model was something that we really struggled with, especially as we were reaching larger-scale deployments in our private cloud environment.

So what we came up with was this: we knew that we needed to go to source. We wanted to go to upstream OpenStack to get rid of the packaging problems. We're building everything in LXC containers so that we can get lots of service separation within the infrastructure itself. We're using a multi-master architecture. And everything is orchestrated via Ansible.

So why Ansible? That's the big question. There are already all these other tools; why don't we go fix them? Well, Ansible has a fantastic community. The people of upstream Ansible are receptive. They want to listen to what's going on, they know that not everything is perfect, and they want to work with the community to fix things. If you're in the Ansible IRC channel, you know that there are people in there chatting all day long, every day. So there's always somebody who has a question, and actually there are more people who want to answer those questions. The community engagement within Ansible itself has been fantastic. Then there's the orchestration part of Ansible. Ansible is orchestration: I can orchestrate a complicated set of tasks, and I can do that very simply. And that is inherent to Ansible; it's not something that I have to go hack into Ansible, it is part of what Ansible wants to do for you. And so we knew we needed that, or we wanted that. Again, there's almost no code in Ansible. Everything is YAML, and you can read it; it almost looks like English. You have a task, it has a name, and the task does a thing. That thing could be a shell command or a call out to some module, and the module name is very descriptive itself. If it's something that's being run inside of an LXC container, the module for that is lxc_container. So you know exactly what it's doing and why it's doing it, and you can look at that task and understand what's happening.

So that goes to the next point: a very low barrier to entry. New developers who want to get into deploying OpenStack with Ansible can pick up these playbooks and roles, do everything that they need to do, and start contributing in a very, very quick amount of time. We actually have, in our OpenStack Ansible IRC channel, a couple of guys who were told by their bosses that they needed to go deploy OpenStack. They were software developers but had no OpenStack experience. They picked up OpenStack Ansible Deployment and were up and running in about a week, and they're actually contributing code to us now. So the ability to read what we're doing inside of our roles is awesome.
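To give a feel for what that readability looks like, here is a minimal, hypothetical task; it's not taken from the OSAD roles, just a generic illustration of the shape: a name, a descriptive module, one thing being done.

    - name: Ensure memcached is running
      service:
        name: memcached
        state: started
        enabled: yes

Even if you have never written Ansible before, you can read that and know what it is going to do.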
So why containers? Well, we're treating LXC like more bare metal. We basically want to have an infrastructure that doesn't have a special circumstance for containers. So we're using Ansible, we're orchestrating all of that, and we're using Ansible's native SSH: we SSH into all of our containers and configure them, and the container creation is all taken care of by the LXC module itself. LXC by itself, native LXC using the user-space tools, is compatible with a lot of different network types. I can use bridges and VLANs, I can use bridges and veth pairs, I can use MACVLANs, I can use raw physical interfaces and just give one straight to the container if I wanted to. And so that was amazing for us, because we needed to be able to use these containers in an environment that we probably don't own. As the Rackspace Private Cloud, we're going into a customer data center and setting up a cloud. It's on gear that maybe we helped them spec, but it's, again, their gear and their network stack, and so we needed to be able to integrate with whatever it was that they were going to provide us. LXC also supports an LVM backend, so I can build my containers in logical volumes, which is fantastic because that provides filesystem barriers between every single one of my containers and my host. I can also move that off onto its own set of drives, and I can take snapshots if I want to work on a container. So I have a lot of capability with that (see the sketch below for what this looks like from the Ansible side). And it's stable. LXC containers, like I said, we're treating them like more bare metal, so we can build our containers and know that they're going to do the thing we've told them to for as long as they need to. But they're disposable resources. So if you've been troubleshooting something for 30 minutes and it's not doing the thing you want it to do, destroy it, make another one; there's a good state that you can return it to. You can also turn it into an artifact and ship it off to some sort of lab environment to try to figure out what was really going wrong with that environment. And so LXC has been very stable for us.

So what is OSAD, as we have effectively named the project? It is the OpenStack Ansible Deployment project, which is up on Stackforge. Again, we're using LXC containers to isolate components and services; we describe the OpenStack services that we're running in the various containers as components of the stack itself. We are, again, deploying OpenStack from the upstream sources, so we're pulling down Git from git.openstack.org or GitHub. Or, if you're inside of a walled garden, you have a Git environment where you can pull your own source code and maybe do your own patches if you absolutely need to. But the reality is that our only requirement is access to something that speaks Git, and so that is easy to accommodate. It runs on Ubuntu 14.04 currently, and we are building it for production. We have no secret sauce either; I just called that out. Everything that we're doing is up on Stackforge, and from the Rackspace side of the house, if it's not actually in our OpenStack Ansible Deployment repo, it's in our RCBOps repo, which is open and totally public to the world. So we have nothing that we're doing that is secret, and you can bolt on as much as you want.
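As a rough sketch of what driving LXC from Ansible can look like (illustrative only, not lifted from the OSAD roles; the container name, bridge name, and sizes are made up), the lxc_container module can create an LVM-backed container and attach it to a host bridge with a veth pair:

    - name: Create an LVM-backed container attached to a host bridge
      lxc_container:
        name: example_keystone_container
        state: started
        template: ubuntu
        backing_store: lvm
        vg_name: lxc                      # volume group to carve the container LV from
        fs_size: 5G
        container_config:
          - "lxc.network.type=veth"
          - "lxc.network.link=br-mgmt"    # host bridge name is just an example
          - "lxc.network.flags=up"

This is the sort of thing that gives you the per-container filesystem barrier via LVM and the flexibility in network wiring described above.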
We already have the ability to extend what we're doing in OSAD: if you want different container types, a different service provider, a different backend, or actually an entirely different environment in how you're laying out your containers, all of that is available to you through the stack itself. And we are trying to keep it simple, stupid, throughout, as we're going through and building our stack; though for us KISS is more "keep it stable."

In Ansible, if you're not familiar, Ansible has tasks, and inside of all of our tasks we are tagging everything, everything, everything. So as an operator, you can come into your environment, and if you need to run one task to reconfigure something because you're tuning it, you can do just that one thing. You can actually string a whole series of tags together and do just that series of things (see the sketch of a tagged task below). But this makes it so that I don't have to just keep running Chef client until I get my maybe-kind-of-sort-of eventually consistent model. I can do the thing that I need to do, and I can do it now. We also are doing process and service separation because of our container architecture, and we are microservice-like where it makes sense. We're not using the Docker model, in the sense of having one PID running in one container and three million containers making all of our stack possible. Our gate job is around 32 containers total, and inside each one of those containers, if there are services that want or need or like to be tied together, to talk to each other within the same namespace, we're going to do that, because it makes sense.

So, a little bit about the stack itself. We are using Galera, like so many other people, and that Galera is powered by MariaDB: currently MariaDB 5.5 and Galera 3. We're also using RabbitMQ, RabbitMQ 3.4.3 I believe, and we're pulling that down directly as the deb package from upstream RabbitMQ. And I'm calling out the cheese shop, the PyPI index. When you're using our environment, you are actually getting a set of Python wheels, which is that compiled Git source code, all served from an index running inside the environment itself, which makes it so that, again, you can deploy the same bits throughout the stack for the entire lifecycle of the cloud. You can also update it when you need to, and those bits will, again, live inside of your environment. It's a very simple index, and it doesn't consume a lot of space; the Kilo release is less than a gig, I think it was around 900 megabytes of Python wheels. So it's a very efficient way of delivering the compiled bits throughout the stack.

Currently, like I said, we're on Stackforge, and we're gating through the OpenStack CI, which means we get a Performance 1-8, an 8 GB cloud server, running on HP Cloud or the Rackspace cloud. But when we're doing these gate jobs, we are building it as if it were a multi-node cloud; it just happens to be in one physical box. We have the number of containers that we would have in a multi-node cloud: we're building a three-node Galera cluster, a three-node Rabbit cluster, two Horizon nodes, two Keystone nodes, and a three-node repo infrastructure for our cheese-shop index. We're trying to, essentially, make our gate job what you will see in production. And we're using the Linux Bridge agent, which is Linux Bridge plus VXLAN if you're using L3 networks. OVS has actually come a long way from where it used to be, and OVS 2.3 and up is actually fairly stable and fast, but the Linux Bridge agent just works, and we've been very successful with that.
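To make the tagging point concrete, here is a hedged sketch (the tag names and the playbook named in the comment are hypothetical, not copied from the OSAD roles) of a tagged task and how an operator might run only that slice of work:

    - name: Lay down the keystone configuration file
      template:
        src: keystone.conf.j2
        dest: /etc/keystone/keystone.conf
      tags:
        - keystone-config
        - keystone

    # Run just the tagged tasks, for example:
    #   openstack-ansible os-keystone-install.yml --tags keystone-config

Stringing several tags together on the command line works the same way: pass more than one tag and only those tasks run.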
As a community project, we have Juno, Icehouse, and Kilo. And Kilo was actually released on April 30th, six hours after it was announced publicly from the upstream OpenStack sources and the tags were dropped. And since its original release, we've been tracking the head of the stable Kilo branch for our Kilo branch as well. In our supported releases, Juno and Icehouse, I'll call it out, contain a whole bunch of Rackspace-isms, because they were moved from our internal project up into Stackforge and we didn't excise those code bits, since we were relying on them. But in Kilo, we did. So with that, Kilo is our first community release, and as it stands right now we have 41 contributors, and not all of them are Rackers. Obviously, within the Rackspace Private Cloud, we're all contributing to this project, but there are quite a few folks who are not Rackers contributing to it, which is fantastic. Because, again, we're trying to build this community around being able to consistently deploy OpenStack using Ansible, and that seems to be growing.

To talk about the code difference between Juno and Kilo: we have excised 81,000 lines. And if you've been in OpenStack for a while, that's essentially a Keystone's worth of code removed from the repository. With that, we still have a lot of code in there, but there are really only 9,000 lines of YAML. If I sift through all of the stuff in master right now, there are 9,000 total lines of YAML. And we have a style guide right now that has us write our tasks essentially as a dictionary. So those lines of YAML could probably be compressed into two per task, but we have chosen to write them as five or six lines so that it's easier to read (see the sketch of the two styles below). So what powers the stack? Around 9,000 lines of YAML. The rest of it is a couple of libraries that we're looking at trying to get upstreamed, text files, readmes, and whatnot.

So, to touch back on what we're about: we're about the deployer experience. We really think that Ansible is a superior way of delivering infrastructure, and we really want the deployer experience to be a fantastic one. We're about vanilla OpenStack; I'm going to keep saying that, because we are. We're pulling down OpenStack from the upstream sources. There are no proprietary shenanigans happening in the background; it's upstream. We also want stability and scalability, because why would we build something that wasn't stable?

I'm going to talk a little bit now about how this actually all works. Within the OpenStack configuration, we do have config files. Again, they're YAML, because we seem to like our YAML. Everything lives in a directory called openstack_deploy, because we have no originality in the way we name things. We're using Ansible's dynamic inventory, and we're generating the inventory itself using the config that's found in openstack_deploy. And so this allows you to add more nodes, delete nodes, tune variables, and everything else. The stuff in openstack_deploy is basically your window into the Ansible inventory. And then we created a small execution wrapper, a bash wrapper that is essentially running ansible-playbook -e, and it brings in all of the config within that directory for you. So instead of running ansible-playbook in a huge command, you can actually run just openstack-ansible with the playbook you want to run. And so we're trying to make this, again, simpler for the deployer.
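Here is the kind of thing that style guide points at, with a made-up task just to show the two shapes: the compressed key=value form versus the expanded dictionary form the project prefers for readability.

    # Compressed form (roughly two lines per task):
    - name: Install the keystone client
      pip: name=python-keystoneclient state=present

    # Expanded dictionary form (five or six lines, easier to read and review):
    - name: Install the keystone client
      pip:
        name: python-keystoneclient
        state: present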
So if I wanted to actually do the deployment, once I have all the config in place, and I don't actually want to show you the config on a slide here because it would be kind of ridiculous to look at: you get into the playbooks directory once you've cloned the source, get into some sort of a terminal multiplexer, and run openstack-ansible setup-everything.yml. Because, again, we have no originality in the way we name things. So once this is done, like within our gate job itself, this takes about 40 minutes to do the deploy. And to talk about that, this is what our gate job looks like, and I'm sure our docs writer people are going to hate me for showing this, because it's an ASCII diagram. But anyway, there are 32 containers, and there's a load balancer in there using HAProxy. We are deploying Swift, we have Neutron, we have VXLAN and Linux Bridge; there's a lot of stuff going on inside of this all-in-one environment that we're doing within our gate job. But if you take what this is, this is actually what we are doing in production across hundreds of nodes. We can make the infrastructure nodes themselves as big of a control plane as we want it to be, within reason, and based on the amount of requests that your private or public cloud is going to be receiving. But it kind of looks like this.

So, as a deployer, say we're adding compute capacity. This is roughly what would be required to add more compute nodes. You go into your config file and you add a couple of references to the new hosts that you're bringing online; at a minimum, you need a host name and an IP address. And when you're done with that, you run setup-everything and you limit to your compute group, the nova_compute group, and that will go through everything that needs to be run within your environment and add more compute nodes (see the first sketch below). Now, compute nodes are trivial, right? That should be the easy one. But what if I wanted to add more control infrastructure? Same thing. I go in; we have different groups for the different pieces of the infrastructure itself, whether that's your identity hosts or your OS infra hosts, or, if you wanted to scale out your Galera, RabbitMQ, and Memcached, or something of that nature, that would be in your shared infra group. And then you, again, limit based on those groups, and this will go through and wire up everything within your cloud that affects those new hosts within those groups.

Now, the non-trivial example is Neutron. What if you wanted to add another network within your Neutron environment, for VXLAN in this case here? I could actually do that by adding to the Neutron networks config, with a global override and a provider network that specifies everything else I need to know about that network itself, and then, again, limiting to the neutron_all group (see the second sketch below). This will go through all of your containers and wire up your new network. So our roles are intelligent enough to take care of this stuff for you. It interprets what you have in config, represents that in inventory, and executes.

Yes; so if I wanted to add, let's say, more network hosts, I certainly could do that. We have lots of different groups that we can scale out independently from one another. And there is actually a blanket option, as in "I don't want to care what's going on for this group," and it will just do everything on the one node. So if you just wanted to build stacks of everything inside of one node, you certainly could do that too. But if I wanted to actually scale out just the one Neutron thing? Sure, yeah, that's not a problem at all.
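As a rough illustration of the compute example (the file name and key layout are an approximation of the project's config format, and the host names and addresses are invented), adding a host and then running the limited playbook might look something like this:

    # /etc/openstack_deploy/openstack_user_config.yml (excerpt, approximate layout)
    compute_hosts:
      compute01:
        ip: 172.29.236.11
      compute02:              # the newly added node: a host name and an IP, that's it
        ip: 172.29.236.12

    # Then, from the playbooks directory:
    #   openstack-ansible setup-everything.yml --limit nova_compute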
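And for the Neutron example, a hedged sketch of what a provider network entry under the global overrides can look like (the key names are an approximation of the project's config format, and the bridge, interface, and VNI range are invented):

    # /etc/openstack_deploy/openstack_user_config.yml (excerpt, approximate layout)
    global_overrides:
      provider_networks:
        - network:
            container_bridge: "br-vxlan"
            container_type: "veth"
            container_interface: "eth10"
            type: "vxlan"
            range: "1:1000"
            net_name: "vxlan"
            group_binds:
              - neutron_linuxbridge_agent

    # Then wire it up across the containers (playbook name approximate):
    #   openstack-ansible os-neutron-install.yml --limit neutron_all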
And now I'm going to yield the podium to Curtis.

OK, so I guess basically where I'm coming from is as a consumer of OSAD and a user of OSAD, and that's kind of the perspective that I'm bringing to this part of the presentation. So first I want to say thank you to Kevin and everybody who's been working on the OSAD project, because that gives me something that I can go to work with every day and use at my job, to try to make my work easier and actually create a really great public cloud. So thank you guys very much for that.

Just a quick intro on Auro. We're based in Canada, obviously, and there are quite a few organizations and companies and people in Canada that would like to keep their data in Canada for various reasons, so that's, in a way, a big driver for our business. In terms of what we're using for OpenStack, we're working on our second generation of our cloud, and that is a fairly stock OpenStack system, vanilla in the way Kevin was using the word, with one big difference being that we're using MidoNet as our Neutron plug-in.

So what are we using in terms of OSAD? Right now, not as much as we'd like to. And part of this process, I think, is that as OSAD continues to grow and be a community project, we'll be using more of it as we go along. That's really my goal: in a way, my work will be to consume OSAD and add on our additional project components. But right now we're using all of the main infrastructure components, anyway, that come directly from the OSAD roles: the Rabbit, Galera, Memcached, all that kind of stuff. So the basic infrastructure. We definitely have a lot of thinking to do in terms of the workflow. I'll probably keep repeating this, but one of the things I'm basically doing, I think my job is going to be, is to determine how we consume OSAD, what that workflow looks like, and how we layer our customer requirements and things like that on top of it.

I also think that OSAD is invaluable as an example. Even if you didn't want to deploy with it, you could totally go in there, look through all the config files, and see what a production OpenStack system is using. And those config files are really invaluable, I think; that's a really useful thing to have. The other thing that we're sort of working towards: the team that I'm working with is somewhat new to configuration management, so we have a lot of work to do there in terms of being able to properly consume and use OSAD and Ansible as well. So again, OSAD is a great example of not only using Ansible, but there's a lot of great sort of best-practice-type stuff in there and a lot of things you can learn from. Like Kevin said, there are 9,000 lines of YAML, which on one hand sounds like a lot, but on the other hand is just a really great example of using Ansible and how to do it in a sort of professional, production-stable, scalable way. They're also big users of testing, like the gating with the OpenStack infrastructure, so that's something that we really need to take a look into, see how that's all being done, and start to apply it to our own systems. Already supporting Kilo, as Kevin mentioned: they were able to deploy Kilo basically on the day that it was released, within a few hours. I think that's a really powerful thing to be able to do, because part of what I need to do at work every day is to continuously improve my infrastructure.
And I need to do that by making lots of small changes really quickly. I can't really just wait for packages or the next release or whatever, so we're really hoping to get a lot more use out of that kind of source distribution model and be able to get right into the new stuff right away. Oh, and the community as well. So not only do we have the OpenStack community, which is great, and the Ansible community, which is great, but now I can also ask questions of the people that are working on OSAD and also try to help contribute back there. So I have all these different great communities to work with that make my job a lot easier. And then finally, the segregation-of-services model that OSAD is using is really helpful to us, and I think it's pretty important; regardless of what you think of LXC or different types of container technologies, we definitely want to use that. So I'm really happy that's in there, and that's something that's important to us as well.

So just a couple of differences in what I do, for what Auro provides. Obviously, we're a public cloud, whereas OSAD is typically aimed at a private cloud sort of production system. We're using MidoNet. We have a slightly different HA model that we're still working on improving. We also have to do billing, which is something that not a lot of clouds using OpenStack have to do, because most, or many, OpenStack deployments are private, so you don't really have to worry about billing as much. We have to do billing, which I think is a really interesting problem. And then our support model might be a little bit different, too. We have multiple tiers of support staff, and we have to find ways to include those support staff and still have a segregation of duties, but allow them to do the work that they need to do and be able to run playbooks, without necessarily having access to all of the credentials and things that higher tiers might have.

So, in some ways, I have some of my own guiding principles for using Ansible. These are really my own thoughts, and problems that I'm still working on figuring out. So one of the things that I'm not sure about yet is when to restart services. I haven't really settled on the whole handlers concept and whether or not that's great. I don't necessarily want to run an Ansible playbook, have it change a config, and then restart a bunch of services. So that's something I'm still thinking about (see the sketch of the pattern below). Every task gets tags, like Kevin already mentioned with OSAD, and that's really a powerful system. It's not something you strictly have to do with Ansible playbooks, but it's really powerful to be able to execute only certain tasks in a playbook by tagging them, so we do that a lot. And also, through using tags, I can continuously run playbooks, like every half an hour, every hour, from something like Ansible Tower or Jenkins (I'm using both), and based on those tags I can run those jobs and have them executed continuously. I also think that installing OpenStack the first time is relatively straightforward, but then I have to operate that for the next X number of years. And that's where I really think that Ansible comes in very helpful: I need to orchestrate maintaining that system forever, basically, until we deprecate it. And I've been working in this kind of stuff for a long time, and I haven't deprecated that many systems. So, again, like I said, lots of small changes, faster.
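For reference, this is the handler pattern in question, sketched minimally (the group, file, and service names are placeholders): a config change notifies a handler, and the handler restarts the service at the end of the play, which is exactly the automatic-restart behaviour you may or may not want in production.

    - hosts: nova_compute
      tasks:
        - name: Lay down nova.conf
          template:
            src: nova.conf.j2
            dest: /etc/nova/nova.conf
          notify: Restart nova-compute    # any change to the file triggers the restart

      handlers:
        - name: Restart nova-compute
          service:
            name: nova-compute
            state: restarted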
And then of course, even though Ansible uses SSH, we don't really want people SSHing into boxes and doing stuff manually, so we'll use Ansible to execute whatever work they need to do.

So I have a few personal struggles, too, with Ansible. And again, it's not necessarily something that's wrong with Ansible; it's just that in some cases it's a little bit too powerful, and it's easy for me to make silly mistakes. So for example, there are a lot of idempotent modules, but then I can easily add a shell task and mess all of those up. And then, because I don't like seeing everything reported as changed, I'll set changed_when to false, and after that it always appears like nothing happened, even though that shell command or that other command could have totally done something (see the small example below). Multiple environments: I haven't totally figured out how to do multiple environments. I have one way that I'm working with now, but that's something I'm really interested in figuring out. And then finally, the Ansible "reboot all," which I don't know if anybody's ever done, but I've seen a few examples on the internet of the same sort of concept, and it's pretty easy to do by accident, in a way, because Ansible is so powerful.

So in the near term, with our deployment: we want to be able to deploy OpenStack from source, so we'll consume that model from OSAD. Segregation of services: again, OSAD is going to help us with that. We need additional monitoring; who doesn't need more monitoring? Ansible callback plugins: I find those to be really interesting, because I'd like to know when a job creates changes or causes changes. And one of the ways I've been finding that out is to have a callback plugin that says, OK, when this playbook ran, it caused a change, and that goes into our Slack channel and lets me know that there was an actual change when one of the automated jobs runs. I need to learn a lot more about the OpenStack infrastructure and how testing is done there, because that's how I'm going to be able to operate our OpenStack cloud over time: by doing a lot of testing. And then I could really use a couple of modules, specifically for MidoNet and for Swift, for doing some backup stuff.

So finally, some comments and ideas. How are we going to consume OSAD? How do we make it pluggable, whatever pluggable means? How do I layer my customer requirements on top of that? What kind of HA model can we insert into OSAD, and can I get to what I'd like, some sort of ECMP, BGP-style load balancing? How do I balance community roles and playbooks with our customer requirements? And then that whole process question of how we consume OSAD and stick it into our workflow and processes.

So, some thoughts I've had about configuration management in the future and using Ansible. I have a real hard time with secrets, like passwords and stuff like that. How do we properly do that when you have so many variables? Where do those variables come from? How do I store secrets properly and avoid the whole "oops, I checked it all into Git, now what do I do?" Continuous integration: like I said, a lot of our stuff is going to be running from Ansible Tower or Jenkins or both, so how does that work in? And caching of dynamic inventories: as Kevin was mentioning, there's going to be some work on dynamic inventories, so where does that information come from? What is the future of config management? I don't know. But it's got to be more than just installing packages, setting up a config file, starting services, and then occasionally some bootstrap stuff.
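Here is the shape of that changed_when trap, as a small made-up example (the paths are placeholders): the module-based task reports its change status honestly, while the shell task with changed_when: false hides whatever it actually did.

    # An idempotent module reports changes accurately:
    - name: Ensure the backup directory exists
      file:
        path: /var/backups/example
        state: directory

    # A raw shell task always looks "changed", so it is tempting to silence it:
    - name: Sync files by hand
      shell: rsync -a /srv/source/ /srv/dest/
      changed_when: false   # now real changes never show up in the run summary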
And then, as a big part of having a public cloud and having to deal with things like ITIL and stuff like that, there's some sort of change management process. So how do I use Ansible to work with that sort of change management workflow and figure that out? Yeah. So I guess Kevin will come back up here and finish this off.

Is this thing on? Yeah. So now, just to kind of build on what Curtis was talking about: where do we go from here? We want to increase the community participation within OSAD itself. That's what communities want. We want to grow, we want to make everything better. We think it's pretty good now, but we know it can be better. So pull requests are welcome. If there's a bug, fix it. We also want to build out more of the operational modules. We're carrying a few modules in our stack currently, whether that be for Neutron or Glance or something like that, and we think that that code should live in core Ansible. We've had some conversations at the Ansible collaboration day that we had yesterday about how we're going to do that and how we can move some of that functionality into the upstream code so everybody can consume it in a more organized fashion. Like Curtis said, the dynamic inventory, having multiple backends: Ansible's inventory is historically a static file or some custom database somewhere that you set up yourself, and we think that there are some improvements we can make in that space. And where do we go from here? We don't know all the answers to that question yet, but it's going to be a journey, and we'd love for people to come along with us. So to open that up: do you guys have questions? And hopefully we have answers. If you have a question, step up to the mic.

I have a question about how you manage different sources for different pieces of the stack. So let's say you've deployed Kilo, you find that Kilo is critically broken in a way that affects you, and you want to actually pull a change set from Gerrit and deploy that. How do you do that?

So if it was a change set from Gerrit and it wasn't merged at that time, we don't have a good way of doing that yet. But the Git source itself lives on the repo servers, so you can go check that out. We just don't have an automated way of doing it.

OK, but so you can at least have a separate Git source for each component, then. So you can say, I want to pull Heat from my local Git repository and everything else from git.openstack.org?

Yes, you could totally do that, and you can do it on a tag, a branch name, or a SHA. And actually what we have upstream is all SHA-based (see the sketch of that kind of override below).

And one last quick question: can it pull configuration from somewhere other than /etc/openstack_deploy?

Yes, yes you can. So the openstack-ansible wrapper will source what's inside of openstack_deploy, but if you have additional config that you want to put somewhere, or encrypt using Ansible Vault, you could pass those values to it, no problem.

I guess the question was more along the lines of: I'm not root, I'm not storing anything in /etc, I'm storing it in a local directory other than /etc. Can I tell Ansible to use this directory as your source of everything?

Yeah, actually you could do that. You would have to make two changes: one would be in the dynamic_inventory.py that we're carrying, and the other one would be in the openstack-ansible wrapper. And then you just point those two at the new location.
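As a rough sketch of that per-component override (the variable names follow the project's naming pattern but are assumptions here, and the repository URL and tag are invented), you could point a single service at your own Git source in a user variables file:

    # /etc/openstack_deploy/user_variables.yml (variable names assumed, not verified)
    heat_git_repo: https://git.example.com/my-fork/heat
    heat_git_install_branch: 2015.1.1    # a tag, a branch name, or a SHA

Everything else would keep coming from git.openstack.org as usual.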
Yes, thank you. Yeah, no problem. Thank you.

There are at least two t-shirts going around now that say "but it worked in DevStack," which is one of the top three reasons that I, as a developer, hate DevStack. Yes. And I know you said... I have that shirt. I love that shirt. OSAD doesn't currently understand the all-in-one deployment. Are there plans, and I know obsoleting DevStack is a holy war, but are there plans to sort of creep into that arena, so that I can deploy OSAD locally and develop on it, and I don't have to tell my operations guys "it worked in DevStack," I can say "it worked in OSAD"?

Yes. I won't say that we're encroaching on any of the DevStack territory, because I don't want to be, you know, put up on a crucifix somewhere. But I will say that there are people who are doing that, and some of the people who work for Rackspace are actually doing that now. I actually saw a blog post, it was up on Reddit the other day, about a couple of the Horizon devs who are doing that: standing up an OSAD-powered cloud on a single node, deploying no Horizon containers, and then doing the Horizon work locally. And I've seen a couple of things like that using Glance as well. But yeah, I can talk to you a little bit more about that offline if you like. But yes, the answer is yes.

If you have any further questions, please take them directly to the presenters. Thank you.