Hi, I'm Robyn, welcome back to the Ansible day here at OpenStack Summit. I've been announcing other people all day, now I get to announce myself, and my friend Monty Taylor. How you doing? And we were talking about Ansible community things and modules, and it's all typed out there. We even have a who-we-are slide. Who are we? I don't know. That is a deep and troubling question to which there are really no answers that will be satisfying. So you're Monty Taylor? I'm Monty Taylor. Hi, how you doing? I work on OpenStack, I work for this company called Red Hat, you may or may not have heard of them, something, something Linux I guess, or whatever. More specifically, or importantly to today's chat, I work on the OpenStack Infra core team, where we run all of the developer infrastructure for OpenStack, which has turned us into some of the world's largest OpenStack consumers, because we run all of that on top of many OpenStack public clouds. By the way, there are a bunch of OpenStack public clouds, in case anybody has been telling you to the contrary; they're lying, they're just pissed off that there isn't a single billion-dollar public cloud in the United States, because that's their definition of "no public clouds." I would argue that if there had been one of those, it would have been an abject failure of OpenStack, because there being a collection of different vendors running different public clouds is a wonderful outcome. That will be my soapbox on that particular topic. In any case, we consume them, which has turned us into skilled users of the OpenStack APIs, which we've funneled into the OpenStack modules for consuming things in Ansible, which is why we'll chat about those today. I'm mordred on IRC on Freenode, I'm e_monty on Twitter if you prefer those sorts of things, and that's that. Weren't you also on the technical committee or something?
Oh, also on the technical committee, I don't know that that's really relevant here. I've been on the board, I've been around OpenStack for an extremely long period of time, I may be dying of old age inside of OpenStack. No, that's not... no. Yeah, this is my old-age face. So I'm Robyn Bergeron, I'm the Ansible community architect. Architect, shit. It's okay, I do stuff in Ansible, and I often work with other open source communities, because it turns out we all work very well together. Before this, I worked on the thing called the Fedora Project, and I was in charge. Yeah. Anybody heard of Fedora? Like, is that a thing? Yes. There you go. Would you like to tell the little story about the thing you dug up the other day on the internet? Oh yeah, it's actually really fun. So we were having a conversation about communities working together, and the topic of the OpenStack announcement at OSCON in 2010 came up, and Robyn was like, oh yeah, several of us from Fedora jumped into IRC and had a conversation with folks. And I was like, well, our IRC channels have been logged since our inception, so we still have the logs for that. I wonder who you talked to. So Robyn is in my first interaction from 2010 on IRC in the logs. There's the benefit of the many years before we really actually knew who each other were. We had it, and it was very jovial and fun. And we probably stayed on topic. Yeah, it was great. It was a great conversation about, hey, there's this thing going on. Thank you, Flavia. Oh, sorry. Okay, thanks. We may or may not get off of the introducing-ourselves slides by the end of the talk. Probably not. So, one real brief thing that I'd like to point out, and part of this may not be fully true: this talk is free software. It's written for a piece of software called presentty, which is console presentation software that takes its input as reStructuredText files, similar to Sphinx, which does documentation for Python.
This file is currently sitting in the shade source tree, as is another talk that I gave this morning, which, amusingly enough, if you put the talk content into a Sphinx documentation build, also renders very nicely as Sphinx documentation. So, sort of playing with this idea, I want to go bug Toshio about putting this into the Ansible source tree rather than the shade source tree, because the same thing should apply there as well. In any case, you're free to poke at things; where this actually lives is subject to change pending where I can stick it. So, since I'm the Ansible person here, although you technically work on Ansible too. We're wearing matching shirts. That never happens. It's almost never that I wear corporate livery advertising a thing that I'm actually talking about. Well, it's community. I suppose I am also wearing an HP logo, an old logo of the HP that isn't with us anymore. Ansible. Some people refer to it as an orchestration tool, and that is, generally, its actual purpose. That's how we often describe it. Some people will more often refer to it as a configuration management tool, like Puppet or Chef or Salt or CFEngine, and you can keep going down that list. Some people will just say, I use that to deploy applications and stuff. Ansible can do a lot of things. And so whatever you'd like to call it, as long as it's pleasant and happy; and even if you're angry, I'd like to hear about it. But it does lots of stuff. Super powerful. It's kind of awesome. Am I pressing there? Yeah. Do it. I like to call it automation for everyone, including actual humans, like myself. Hey, humans. I was a sysadmin for many, many, many years, and then I stopped, and I had children, and then somehow segued into community management stuff. I'm sure there's not any tie-back there that works well.
But it turns out Ansible makes a lot of people feel really happy, and it made me feel very happy that I could still do sysadmin-y things after being away for so long. It turns out it works out pretty well with OpenStack. Who reads the user survey? Who's aware that there's a user survey? Who's a user and does not know there's a user survey? Because you should definitely fill it out; it's really important information for the whole entire community. Anyways, it turns out 45% of people who use or operate OpenStack use Ansible, and not just for deployment, because I know that's where most people start getting wrapped around the axle. It turns out you can actually consume things or manage things once they've been deployed, no matter what you deployed them with. God, are you saying you can use Ansible to interact with things over an API? That's crazy. That's crazy talk, I tell you. And not just for OpenStack. I mean, that's just one place where you're doing awesome stuff: networking equipment, containers. Hey, containers are popular, right? Is that a thing? Yeah. There are other public clouds I've heard about. I don't know. Not that billion-dollar thing. Whatever. Windows, you can use Ansible with Windows, you can do stuff with Linux. There's a module for, what's that word? systemd, anyway. Whatever. Yeah. I don't know when to use up one of my free swears. And there are even insane people who use it to do things with other configuration management systems. Oh, hi. Hey, what did you work on? So in Infra land, we've been Puppet users for several years, and we don't have to go into the ins and outs of that; we probably have similar sets of emotions in relation to it. But there are some things in our world where the eventual-consistency model, where our infrastructure would converge on a state at some point, didn't really work out for us, because it's our life and it happened this way. There were sequences of machines that we needed to do in order.
This is not really the way the world is modeled with that system. So we wrote an Ansible module, which has been upstreamed, and which we're now the fine caretakers of, which allows you to correctly run Puppet in both agent and agentless modes. And we use this in production, it turns out. And it turns out you don't have to go and rewrite things if you want to find a new tool. We had a lot of Puppet. We might like for it to magically have been rewritten by somebody else, but we don't actually want to spend the effort to rewrite it, because it's doing its thing. So one of my earliest excitements with Ansible was: oh, wow, I can use this to improve the thing that I have today without deleting the thing that I have today, which is not always how new technologies are introduced into the ecosystem. So yay, go Ansible. Good job, guys. Well, and good job, you. Because the way we met, except for that first time and the second time when I met you about CI stuff, was through Ansible, because I heard that you guys were doing kind of awesome stuff. Turns out there are a lot of people doing awesome stuff: 2,665 contributors, although that was as of this morning. And, more importantly, 22,988 stars on GitHub as of this morning. So we might already be over 23,000, but I'm pretty sure there are a couple of people in this room who could just star it, right? We could make it happen now. Not to shill for clicks or anything like that. We can debate the relative importance of stars; I'm not actually convinced. But 23,000 sounds kind of impressive. Anyways, there are people who have written 11,000-plus roles in Galaxy to do actual things. But the nice thing about Ansible is that it was written in a modular fashion, right? So there's this engine. And if you're really into Python and you really want to hang out with Brian Coca and Toshio and jimi-c and all the guys and girls who work on the thing, you can do that.
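The Puppet-from-Ansible idea mentioned above is easy to sketch. This is a hypothetical play, not the Infra team's actual playbook: the group name and manifest path are placeholders, and the parameters shown (`manifest`, `logdest`) are ones the upstream puppet module accepts for agentless runs.

```yaml
# Hypothetical play: apply existing Puppet manifests from Ansible,
# in a controlled order, without rewriting any of the Puppet code.
- hosts: puppet-managed
  serial: 1              # run hosts one at a time, the ordering that
                         # eventual consistency couldn't give us
  tasks:
    - name: Apply the local Puppet manifest in agentless mode
      puppet:
        manifest: /opt/config/production/manifests/site.pp
        logdest: syslog
```

Dropping the `manifest` parameter would instead run `puppet agent` against a puppetmaster, which is the agent-mode half of what the module supports.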
But if you just want to make sure that something you care about is able to talk to Ansible, and there isn't already a module written, which there might actually be... I mean, there's a Honey Badger module. There's some piece of software out there called Honey Badger, and they have written a module so that you can control aspects of Honey Badger using Ansible. The problem is that I think I would be in violation of the code of conduct to say the next logical sentence, so just... in your imaginations. But there are now hundreds and hundreds and hundreds of modules, including for all those things that I covered on the last slide. But the important thing is that we have figured out that we at Ansible, like Ansible Inc., we're now Ansible by Red Hat or whatever, we at Red Hat who work on Ansible can't really be experts in these hundreds and hundreds of things, right? Like, how many OpenStack modules are there? Like 30 or 40 just OpenStack modules. There are so many areas where we could be experts, but it's just not feasible, which is why we've empowered contributors to own them. I might know something about OpenStack. And frankly, they're the people who have worked in anger and happiness and delight on these things, and who know way more about... All of those emotions. ...OpenStack, and the many qualities that the public clouds may or may not have on any given day. Oh yeah. Which is why you guys have done an awesome job making Ansible useful and usable by the 45% of folks who now use it. Tell us all about that. Oh yeah, that was your... Yeah, it is. And this is where I subtly hand it off to you. And then I'll hang out over here and hit you if you veer off in a random direction. Just on the off chance that we have some folks in the room who haven't done anything with Ansible, I thought I'd real briefly, because I'm going to say some words... Who has not used Ansible? All right, so we have some people in the crowd. So this is not a waste of time.
Yay, you will get a lot out of this. So there are some words that happen in Ansible that mean things, and these are the ones that are relevant for the rest of the talk. You'll hear us talk about the OpenStack Ansible modules. Those are Python code that actually live in the Ansible tree and ultimately end up getting copied over to the machine in question where you're going to run some tasks; they're Python code that can be used in tasks in Ansible. Roles, on the other hand, are collections of YAML files with sets of tasks that they're describing. So a role will be like, hey, here are the 20 tasks that I'm made up of. A play is an association of one or more tasks and one or more roles with some set of hosts. So you've got some tasks, maybe they're in a role; they're just things to run, not a description of where to run them. A play combines those two concepts. And then a playbook is a collection of plays, and is usually a YAML file. So a playbook can be a YAML file with one play in it or more than one play, and things get broken up into additional plays if you have different sets of hosts that you want to run them on. In terms of using the OpenStack modules, the different hosts that you're going to run things on matter for running your workload after you've interacted with the OpenStack APIs, but you're probably going to run your API calls from localhost or something like that. We'll talk about them in a second. So there are these modules, right? We're not really talking about the other things today; it's mostly about the modules, the things that are the upstream support in Ansible for interacting with the OpenStack APIs. So these are not modules to deploy OpenStack. You could use some of these in a deployment; some of them do things that would be admin-type tasks that you would do as part of a deployment operation.
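The vocabulary above maps directly onto YAML. A minimal sketch of a playbook containing one play, with all names made up for illustration:

```yaml
# A playbook: a YAML file holding one or more plays.
- name: Example play combining hosts with tasks and roles
  hosts: localhost        # the "where": which inventory hosts to run on
  roles:
    - common              # a role: a reusable, named bundle of tasks
  tasks:
    - name: A task that invokes a module
      debug:              # modules are the Python code tasks actually run
        msg: "Hello from a task"
```

A second play with a different `hosts:` line in the same file would make this a two-play playbook, which is exactly the "different sets of hosts" case described above.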
But in general, you go to the docs page, there's an OpenStack section, and it's got them all listed there. They're focused on consuming the APIs, and we have a mix of end-user-focused and deployer-focused things. They work on all of the OpenStack clouds that I have been able to get my hands on. And just because there is a collection of Rackspace modules also in the cloud section of the Ansible documentation, don't let that fool you: these work on the Rackspace cloud very well; we use them all the time. So I guess if you're Rackspace users, you have a choice of two sets of modules that you can use. But there is no need to be like, oh, well, I need to use different modules for that one OpenStack cloud over there and the OpenStack modules for the others. That's just a side note. We try to do a lot of work to work around deployer differences. So OpenStack has some really great qualities for our deployer ecosystem in that we allow lots of configurability, lots of different ways that you can deploy OpenStack, and that's fantastic. And in some places we maybe haven't done the best job we could have in not leaking those choices out through the API. So as a consumer, it might get confusing. This actually doesn't come up as much for the deployer folks; they have to worry about configuring it, but a deployer working on a cloud knows what their cloud is, so they know what choices they made as the deployer. It's not as much of a... well, hopefully. I really hope they know what they're doing, and they're not like, wow, we have Keystone v3? How strange. So that's the thing. But for the large majority of the user-facing modules that we have for OpenStack, we try to hide as many of the deployer differences as we can and present a common, reliable interface, so that you could write playbooks that maybe do something over multiple clouds all at the same time. That might be a thing you want to do from time to time.
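That multi-cloud promise can be as simple as looping one module over cloud names. A sketch, assuming two clouds named `cloud-a` and `cloud-b` are defined in a local clouds.yaml, and using placeholder image and flavor names:

```yaml
# One play, one task, several clouds: the same playbook works on each
# because the modules hide the deployer differences.
- hosts: localhost
  tasks:
    - name: Boot the same server on every configured cloud
      os_server:
        cloud: "{{ item }}"   # which clouds.yaml entry to use
        name: app-server
        image: ubuntu-16.04   # placeholder image name
        flavor: m1.small      # placeholder flavor name
        wait: yes
      with_items:
        - cloud-a
        - cloud-b
```

The point is that nothing in the task body is specific to either cloud; only the `cloud:` parameter changes.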
And it would be great if you could just have one playbook that worked on all of your clouds and not 20 playbooks where you have to decide which is which. There are some bugs that will come in, and we go out of our way; we work around some crazy, weird things that really aren't even okay choices for a deployer to make, and we work around them anyway. We have gotten a couple of bugs, and it's worth pointing out: if you have problems, please bring them in. We want to fix them. Occasionally someone will go too far, because that happens; you have to have barriers and boundaries in life. Recently we got a bug, and I felt really bad for the person in question, because it wasn't his fault. He was consuming an OpenStack cloud from somebody, and he's like, these don't work on their cloud. And I was like, well, that's because they've redefined the API and completely changed the semantics of some fundamental terms, so no, it's not going to work. We can't do that; it would break the Ansible modules for all of the rest of the users if we were to put that in. So I'm sorry. You should go have a nice, respectful conversation with your cloud provider and be like, yo, this is making it really hard to consume a large amount of the value-add, which is the lack of vendor lock-in. I have a question. Yes. When you say "we," are you talking about OpenStack Infra? No. Well, sort of. So I'll get to a part of that. So on the Ansible modules themselves, there are, what, like four or five of us that are core on them. But in terms of people who are actually active and touching them every day, it's, depending on the day, sometimes me, sometimes Ricky. Shrews and Julia and one other person have a bunch of things they're busy with, so we'd still like them to be core reviewers on those, but they've got other things going on. Jesse Keating. Everyone should always have Jesse Keating as a core reviewer on everything they're doing.
And he's a great reviewer to have on things. So yeah, on the flip side, there's a library that we'll talk about in a second that a lot of these are based on, which is run as an OpenStack project. All of the OpenStack Infra team are core on that. And anyway, yeah, we're always looking for more people; there will be a call-out for that. Actually, no, I think I accidentally deleted that slide. I was actually getting to: you guys abuse the crap out of all these public clouds. Oh yeah, yeah. Go on. Yeah. I have accounts on pretty much every OpenStack public cloud in existence, so most of the time I can just go actually reproduce a problem somewhere. But people have private clouds, and it's always a fun experiment to figure out how to get the information from somebody about what their private cloud is doing when it's a private cloud behind somebody's firewall. So anyway, if we find ourselves in that relationship at some point in the future, I will do my best, but it may not work the first time, because I may not know what to ask. Anyway, you actually gave me a great segue. Thank you. So this is all based on the shade library, which is an OpenStack library that started its life as an Infra project. Its job in life is to abstract the deployer differences. I gave a talk on it this morning with some examples that went over by 20 minutes, because I don't know how to shut my mouth, so sorry to any of you who might have been in there about the timing of that. This morning's talk was just examples of how to use it; tomorrow there's a talk on more of the architecture, the what and the why, that kind of thing. So there's a talk about that tomorrow. So shade tries to make those differences go away. It's designed for multi-cloud. When we started writing the code that eventually became shade, we had two, three, at least two... I can't remember if we were at three or not. I think just two when we started it. But it turns out two is enough.
Two is plenty. You have to learn a lot of things to deal with two. And so we found a lot of logic that we had to write to deal with that in our nodepool project, which is the thing that we use. It's different from the one in the keynote this morning, different projects: not eBay's node pool, but our nodepool, which makes all the build servers for the OpenStack CI system, which runs through around 20,000 servers a day. So it's pretty good at scale, I guess; those numbers are decent. So it's pretty good at massive scale, and hopefully it's simple to use. And then, as we started working on the Ansible modules, we were basically like, wow, we've got this exact same logic in two places, that's stupid, let's make a library. That's kind of how that happened. So the combination of Ansible and Infra is actually where shade came from in the first place. So that's the thing. It's worth pointing out that we test every shade patch in the OpenStack CI system against a collection of differently configured OpenStack clouds. We can't get all of the configurations that exist out in the world, because there are some things that are just really hard to express in a local deployment, but we have several divergent cloud configurations that we spin up, create a cloud, and then run shade's testing against. We also have a set of test roles in the shade repository, so that on every shade patch we at least make sure that we don't break the existing Ansible modules, and test that those still work. So that part of the thing is there. We're hoping to fix this in the not-too-distant future, but the hole in this testing is that changes to the Ansible modules aren't tested in the OpenStack CI system until they've landed, which is possibly not the first time that you would like to test them. So there's a bit of a hole in our test coverage there. But as soon as we have the same thing that you have for...
That's kind of, yeah, there's magic coming in the future land of Zuul that we will all subjugate ourselves to, and it will be glorious, yeah. So that's the thing. So real quick... I'm not sure that bullet list is actually even correct; I think I've reorganized the text since writing this segue slide. So let's assume that at one point in time I had a logical structure in mind that I was going to walk you through, and that since that point in time I've changed my mind and have not updated the intro slide. Let's see how bad that is. Yep, it's in the wrong order. Anyway, there are a few things that are useful to know. So there are a bunch of modules; I didn't count, but I think 30 or 40. They all start with os_ something, right? Because there's a single global namespace of Ansible modules. So the os_ modules are... what's that? Sever, yes. There's no "sever" module, I'll just have you know. Oh yeah, there is. Sev-er. There should be, there should be. So the end-user-oriented modules are named for the resource they manage: os_image, os_server (if I could spell), and so forth. The thing that you, as a person consuming the API, think of as the thing you would like to have Ansible help you manage. Things that are oriented more towards operators, not end users but people running a cloud, are named for the service, because as an operator the thing that you're doing is managing that service. You know what service you're managing; that part is sort of important to you as an operator. Or it may not be, but this is the taxonomy that we've decided on for naming. So whether it's important to you or not, you're out of luck, this is how we're naming them. But this is information that you can infer from the naming. There are some things that could go either way, depending on how the cloud has decided to deploy itself.
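So a user-facing task talks about the resource, not the service behind it. A sketch using the os_image module, where the cloud name and file path are placeholders:

```yaml
# Named for the resource (an image), not for the service (Glance)
# that happens to provide it.
- hosts: localhost
  tasks:
    - name: Upload an image to whichever cloud "mycloud" points at
      os_image:
        cloud: mycloud                  # entry from clouds.yaml
        name: cirros
        disk_format: qcow2
        container_format: bare
        filename: /tmp/cirros-disk.img  # placeholder local path
        state: present
```

An operator-oriented module, by contrast, would carry the service in its name, which is the naming split described above.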
In some cloud configurations, end users are empowered to create users and groups and things of that nature, and in some, that's an admin-only task. In those cases, we tend to break towards exposing it, so that an end user of those clouds is going to be using modules consistently. And so in this case the deployer gets stuck with the problem of remembering, oh well, it's os_keystone_domain and os_keystone_endpoint, but it's os_user. Sorry, deployers, you got the short end of the stick on this one. If there's more than one service that provides a resource, such as security groups or floating IPs, and this is actually the reason for the naming scheme, the code behind it is going to do its best to do the thing the user asked for, which is manage the security group. The user didn't say, hey, please manipulate the Neutron API to make a security group for me. They're like, I just want a security group. So it will do its best to figure out what service it needs to talk to for that. Luckily there's less and less of this over time, but especially with the nova-network-to-Neutron migration there are several of these things, and with the things being pulled out of the Nova proxy APIs, depending on cloud config, there are things that can go either place, and that's not really very friendly to users. So we're trying our best to do the right thing as far as that goes. There is also an OpenStack dynamic inventory script in the Ansible repository. For those of you that are a little bit newer to Ansible, one of the things that you give to Ansible is an inventory, which is: these are my hosts. If your hosts are on OpenStack, you don't need to keep a list of them, because OpenStack already has that list, and it's queryable. So you can use the OpenStack dynamic inventory script, configure Ansible to point to it, and when Ansible needs inventory information, it will run the inventory script and use that information as your inventory host list.
If you have more than one cloud in your configuration, it will treat all of those as one giant magical inventory, because that of course is the whole point of having multiple clouds at your disposal with services deployed across them: you can treat them all as one collection of server resources rather than several. In its default configuration it excludes hosts that do not have IP addresses. This is because Ansible operates by SSHing into servers; if a server doesn't have network connectivity, then it's not a very useful host to have in the inventory. If you are in a situation in which that is not a valid statement for you, there are ways to tell it not to do that, but in general that is the truth for 99% of the people using it. It also makes a bunch of auto groups: it'll make an Ansible group for all the things that share a flavor, all the things that share an image, pretty much any of the things that seemed like a sane grouping that people might want to group things by. So AZ, region, cloud, AZ plus region, region plus cloud, those things. There are a bunch of those. Also, if you put a groups field in your server metadata, it will put the server into groups of those names as well. So it does its best to do those things for you. Also, in the spirit of modules and community, since we're talking about community: modules for all of the OpenStack resources are very welcome. They're already separated into little files; it's not like we have to be really protective, like, oh, I don't know, that's only a quasi-official thing. They're welcome. Just because they're welcome doesn't mean that I, tomorrow, am going to sit down and write one of them. So if it's a service that you care about that doesn't exist there, it's not that anybody has a negative relationship with it; it's just that nobody's gotten around to doing it yet. So please write patches and send pull requests. They're all welcome. The same goes for features to existing modules.
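Those metadata-driven groups then become ordinary play targets. A hypothetical two-play sketch, assuming the OpenStack dynamic inventory script is configured as Ansible's inventory source and using placeholder cloud, image, and flavor names:

```yaml
# Play 1: create a server and tag it into a group via metadata.
- hosts: localhost
  tasks:
    - name: Boot a server that joins the "webservers" group
      os_server:
        cloud: mycloud
        name: web01
        image: ubuntu-16.04
        flavor: m1.small
        meta: "groups=webservers"   # the groups field described above

# Play 2: the dynamic inventory exposes that group as a target,
# whichever cloud the server actually lives in.
- hosts: webservers
  tasks:
    - name: Check connectivity to every webserver
      ping:
```

The auto groups (per flavor, per image, per region, and so on) work the same way; only the `hosts:` value changes.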
A lot of times it's just as easy as adding a flag to the config section of the module and being like, oh, I need to pass this flag. We actually just landed one of those this morning. Pretty easy as far as that goes. So please, all of these things are welcome. Upstream and accessible. Oh sorry, all of these... well, you're welcome upstream everywhere. The modules' home is Ansible's GitHub repository; shade's home is OpenStack's infrastructure. So depending on where you need to make a patch, there might be one or two different places, but that's our life in this lovely world. I'm just going to casually mention, because normally I would be standing over here being like, hey, you've got like 10 or 15 minutes or something. I think we've got like... 10 or 15 minutes or something? Yeah, I'm not sure. Great. Thanks. Sweet, that's exciting. We're doing it right. Okay. It's good. I don't think we're going to go over by half an hour. I'm excited. So, that said, we're very welcoming. There are times we have to be strict, and so I want to just say: yes, please write patches, review pull requests, all of that is super helpful. We do try to keep an eye towards making sure that the modules are going to work well for everybody. And so we might want to bikeshed a little bit about the name of a parameter. Is that supportable? Is that available everywhere? If it's not available, what's the behavior? Those sorts of things. And if they're going to consume an OpenStack API, by God, they need to use shade. That is non-negotiable. There are reasons for that; I'm happy to go into them, but I won't right now. It is a non-negotiable rule for the upstream OpenStack-API-consuming modules. Somebody came up with a great example just about a week or two ago: there's a fellow writing an Ansible module to generate Tempest configs. He's like, do I need to use shade? I'm like, well, I mean, no.
Because you're taking some inputs and producing a config; you absolutely don't have to use shade for that. But then it turned out there was another piece that was doing some introspection of the extensions that are available for the services in the cloud, and that tipped the answer back over to: well, yeah, for that piece, that's what we're going to need you to do. And a lot of this has to do with making sure that we're supporting auth and plugins and all that type of stuff consistently across the corpus of modules. We certainly don't want individual modules behaving differently from each other as it relates to connecting to the cloud. So vendor differences should be hidden, and I'm probably beating that to death, except in the operator modules. There it's fine to expose some of those things, because it's actually empowering for operators to be able to flip flags, and it's not confusing, or it's not any more confusing than anything else is. So all of your cloud information is configured in a file called clouds.yaml. This is where you stick it. So if you've got 20 clouds, make a nice long clouds.yaml with all of your account information in it and give them all names, and then you refer to those names in your Ansible playbooks and everything is great. Both .yaml and .yml as suffixes are perfectly acceptable. The file can be in your home directory or system-wide. Or, actually, I should have put this in, I was going to go back and edit the slide and I forgot: it can also be in the current working directory. So if there is a clouds.yaml file in your current working directory, that takes precedence, which is a great way to segment some things. I've actually used that in the examples in a moment for a reason. But anyway, you can stick them there. Information in the local directory wins, the home directory wins next, and the system level at /etc/openstack comes after that, yes.
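A minimal clouds.yaml with two named clouds might look like this; every value here is a placeholder:

```yaml
# clouds.yaml: current working directory takes precedence,
# then the home-directory location, then the system-wide one.
clouds:
  mycloud:
    auth:
      auth_url: https://identity.example.com:5000/v2.0
      username: demo
      password: not-a-real-password
      project_name: demo-project
    region_name: RegionOne
  othercloud:
    auth:
      auth_url: https://keystone.example.org:5000/v3
      username: demo
      password: also-not-real
      project_name: demo-project
    identity_api_version: 3
```

The top-level keys under `clouds:` (`mycloud`, `othercloud`) are the names you then hand to the `cloud:` parameter in playbooks.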
Do you have to use clouds.yaml? You don't have to use clouds.yaml. Okay, that's what I thought. We'll come back to that. Okay, sounds good. The full docs for it are in the documentation for os-client-config. I say "the full docs"; they're not as full as I want them to be. That's a to-do item. There are some things in there that you can configure that are great, but that may not be as fully explained as you might like them to be. Anyway, if you're a person who's running on Mac or Windows, some directories are different, because the right place to put things differs by platform; there are locations that the files go on those platforms. There's a Python library called appdirs, I think, that has all of these encoded in it, which is what we use to find them. We try our best to do that. Just for Ansible, you can also stick a clouds.yaml into /etc/ansible. This is for folks who are like, I don't really want to stick config files for Ansible anywhere other than /etc/ansible. I recommend not putting it there. One of the nice things about clouds.yaml as a place to stick your cloud client configuration is that other tools use it as well. python-openstackclient, the command-line client, reads and understands the file, as do other things. So if you're consuming OpenStack things, it's possible that sometimes you're going to use Ansible and sometimes you might use some other tools, so you're probably better served by sticking it in a normal location. But it is there and available to you, should that be important for you. I ramble. I'm just saying, I mean, you have a demo... I don't even know what I'm saying. Yeah, we're going to run a demo on live public clouds, right? Because that's always a great idea on conference Wi-Fi. So, an important thing, because we see this tripping people up:
I've mostly been trying to focus on the stuff you've gotta know to get up and running, rather than here's how to use an Ansible module. So, Ansible executes code on remote systems; even if it's localhost, there's still that construct around it. And the clouds.yaml file needs to be on the target system. I usually run it like, hey, localhost, run this thing, because it's an API call, I don't need to shell out to another server to run it. But whichever server you're telling it to connect to, even if it's localhost, that's the server the clouds.yaml information needs to be on. Putting a clouds.yaml file on your localhost and then saying, hey, go run OpenStack API commands on that remote host, is not gonna work. Not even a little bit, not even partially. So that is an important thing to keep in mind. You can also pass your authentication information directly, as the auth parameter, to every OpenStack module. So you can put auth information straight up in some Ansible variables and pass it in as a parameter; it's totally a thing you can do. If you need to put other config in your clouds.yaml, though, you cannot currently pass that into modules directly. Every module takes a cloud name as a parameter, which references the piece of configuration in clouds.yaml, and there's a suggestion floating around to allow that to also take a full dictionary, which is the entire config description for that cloud. So we may have a workaround; there have been some folks who are a little bit frustrated that they can't just keep all of their config in their Ansible variables and instead have to keep this other config file. For right now, you have to keep the other config file if you need any of the other config variables. So that's just life, I apologize. Quick examples: this is a snippet from my clouds.yaml for Citycloud.
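To illustrate that inline-auth option: a task can carry the credentials in its `auth` parameter instead of relying on a clouds.yaml on the target host. All values here are placeholders, not real endpoints or accounts:

```yaml
# Passing auth directly to an OpenStack module instead of using clouds.yaml.
# The auth dictionary contents vary with the auth plugin; these are the
# common password-auth fields.
- hosts: localhost
  connection: local
  tasks:
    - name: Boot a server using inline auth
      os_server:
        auth:
          auth_url: https://cloud.example.com:5000/v3
          username: demo
          password: sekrit
          project_name: demo
        name: test-server
        image: ubuntu-16.04
        flavor: m1.small
```

The trade-off the talk mentions still applies: only the auth dictionary can be passed this way, so any non-auth settings still need a clouds.yaml.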
This is referring to a well-known cloud. So profile: citycloud is saying, hey, the library already knows about a set of clouds and has some of their known information in it; use that one, and I'm gonna name a cloud called my... So in my Ansible playbooks, I will refer to this thing as my citycloud, and then there's my auth information. You'll notice there's no password there, and that's not just because I omitted it for the slides. It's also profane. It's also profane, yeah. I can also point out that in addition to clouds.yaml, there is an optional file called secure.yaml. If you want to put secrets into a different file than clouds.yaml, you can. The values get overlaid, so this would be an example: in secure.yaml, my password, which is not actually eight Xs. It's just seven Xs, clearly. So that is available to you should you want to use it; kind of depends on your environment and whatnot. Additional entries: this is where I mentioned that you cannot pass all of the config things you can put into clouds.yaml. The auth part you can pass into playbooks directly; you can just say auth, colon, and then put the stuff. In fact, clouds.yaml's auth dictionary is designed around the Ansible playbooks: the reason there's an auth dictionary in clouds.yaml is so that there's a clear delineation of what you pass, because we can't do parameter validation otherwise in Ansible, because the auth dictionary can have variable content based on what your auth plugin is, and it's a mess. So anyway, there are other things. In this one, neither of these extra settings is needed for Vexxhost; Vexxhost is a very nicely behaved cloud. But in this one, I'm telling my config that I would prefer to use version three of the identity API, and I would like to override the endpoint returned by the catalog with this endpoint, and it will use those things. So these are additional things you can tell the config, and it will honor them.
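A sketch of what that profile plus secure.yaml split might look like; the cloud name, username, and region here are made up, since the slide itself isn't reproduced:

```yaml
# clouds.yaml: the shareable part. 'profile: citycloud' pulls in vendor
# defaults (auth_url and friends) that os-client-config already knows.
clouds:
  mycitycloud:
    profile: citycloud
    auth:
      username: mordred
      project_name: mordred
    region_name: Lon1
```

```yaml
# secure.yaml: overlaid on top of clouds.yaml at load time, so secrets
# can live in a separately-permissioned file.
clouds:
  mycitycloud:
    auth:
      password: XXXXXXX
```

Any value can live in either file; secure.yaml simply wins where they overlap.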
If you wanna get really crazy: Internap does this really neat thing where, when you spin up an account, they create you your own public network and your own private network, and unfortunately Neutron doesn't have a capability for you to figure out which of those is which. The only way you can know is, as a human, you look at their names: the one named WAN is the public one and the one named LAN is the private one. Which is fine for humans, right? But it's a little bit harder for software. So we've added some ability to annotate network information, if you need it; in most cases everything can find everything fine. This is an extreme case that you would need this for, and all of these values can be set on a per-region basis if you need that. Hopefully none of you will need complicated clouds.yaml files. Most of the time it should just work with your authentication information, but it's worth pointing out. I've also included the auth URL for Internap here. You actually don't need to do that; you could say profile: internap and it would fill that information in for you, as it would also fill in floating IP source none. Shade will figure out what to do with floating IPs, which involves doing some Neutron API introspection, but if you know the cloud doesn't support floating IPs, we can configure it to say, you know, don't even bother doing the introspection, it's not worth your time. And there are a few other things like that. We could literally do a couple of hours on all of the different ways you can configure things. There are a few additional variables you can add in. Oh, and wow, something went terribly wrong with the formatting on this slide. I apologize. Typing has failed. Yeah, typing has definitely failed on this slide, I apologize for that. So there are three additional things you can stick into your clouds.yaml that will affect the behavior of the dynamic inventory plugin.
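As a sketch of that network annotation, based on my reading of the os-client-config network options (the cloud name, network names, and region are placeholders; check the os-client-config docs for the full set of per-network keys):

```yaml
# Annotating which network routes externally, for clouds where Neutron
# can't tell you. Set per cloud, and optionally per region.
clouds:
  myinternap:
    profile: internap
    auth:
      username: demo
      project_name: demo
    region_name: ams01
    networks:
      - name: WAN          # the human-named public network
        routes_externally: true
      - name: LAN          # the human-named private network
        routes_externally: false
```

With that in place, shade can pick the right network for things like floating IPs and public addresses without guessing.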
There are some things that shade does to fill in additional information around servers. It does this by default, but if you set expand_hostvars to false it will not do those things, because it's possible that you don't care about them and you don't want the extra API overhead. Also, the original behavior was that if any of the clouds in your list had a failure, the inventory script would fail. We didn't wanna break backwards compatibility with that even though it seems like a bug, because maybe that's the thing a person would prefer: hard stop if anything is wrong. So you can set fail_on_errors to false and it will give you as much inventory as it can get from your clouds, rather than only all or nothing. And there's also a behavior related to the fact that Nova does not insist that your host names, the names of your servers, are unique. Injecting servers into an Ansible inventory then becomes problematic, because how do you refer to a server in the inventory if there are two servers named foo, right? So there's the original behavior there, and again this is a place where we put in a flag because we wanted to change the behavior but not break people's backwards compatibility. I'm literally just going to interject, I know, we do this on phone calls too. Sometimes I'm like, we're five minutes over, I think. And I mean there are people who are coming in; they want to see Paul's OpenStack and Ansible 101 thing. Yeah, absolutely, they should see that. I mean, I know. You can read up on those things. This is an example of testing your clouds.yaml. So if you had that clouds.yaml file, run this playbook, well, I mean, if you had my clouds.yaml file, run this playbook, but this is basically showing iterating over three different cloud entries in a config file, and os_auth is just going to auth and return success.
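Those inventory knobs live in a top-level `ansible` section of clouds.yaml; this is my recollection of the option names from the os-client-config docs, so treat it as a sketch (the third flag, which I believe is `use_hostnames`, controls whether servers are keyed by name or by a unique ID):

```yaml
# Settings that only affect the OpenStack dynamic inventory plugin.
ansible:
  expand_hostvars: false   # skip the extra per-server API calls
  fail_on_errors: false    # return partial inventory if one cloud is down
  use_hostnames: true      # key inventory by server name, not unique ID
```

These sit alongside the `clouds` section rather than inside any one cloud's entry.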
So it's a really great way to verify, before you start trying to figure out why your os_server command isn't spinning up the server and is erroring out, that all of your auth config is correct, 'cause that's probably 95% of what goes wrong when something is weird. So this loops through each cloud and region of those. A slightly more interesting one: this will spin up a server on three different clouds. It will spin up an Ubuntu Xenial server on four-gig instances on Vexxhost, Citycloud, and Internap. And we're over. I was originally going to show you that live, but. Well, I mean, you can just run it while people are walking in and out. I can stare in a fascinating way. So if you run that, it sits here and we're now going to wait for Vexxhost to, yeah, so any questions while we're watching the playbook run? Everybody enjoying watching the playbook run? It's still running. It's still doing things, I promise. It's probably more fun than watching us. Yeah, well, you know, in a couple seconds here it's going to print a new line. It's going to be in a slightly different color, sort of a yellowish, yellowish-gold color. Question. Question. Come on down to the microphone, which is hopefully on. I think. Hey, look, we spun up a server. Isn't that exciting? No, we still get to take a question though. I don't know if that microphone is actually. I think it's on. Oh, it is. Sweet. So I hope this is not off topic, I guess. No, it's public, all topics are off topic. We're all off topic. Comparing from a cloud consumer, a cloud user perspective: using Ansible versus using Heat, or maybe thinking more generically, like Terraform. We see articles saying that Terraform's a better tool for that and things like that. What's your vision on it and the direction this is going? Yeah, totally. So that's actually a really great question. I'm going to try and give you three answers quickly.
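The two playbooks described above might look roughly like this; the cloud names match the talk's examples, but the image labels and exact structure are my guesses, since the real playbooks aren't shown here:

```yaml
# 1) Sanity-check auth against each cloud entry, then
# 2) boot a Xenial server on each, picking a flavor with >= 4 GB RAM.
- hosts: localhost
  connection: local
  tasks:
    - name: Verify auth works for each cloud
      os_auth:
        cloud: "{{ item }}"
      with_items:
        - vexxhost
        - citycloud
        - internap

    - name: Boot an Ubuntu Xenial server on each cloud
      os_server:
        cloud: "{{ item.cloud }}"
        name: my-server
        state: present
        image: "{{ item.image }}"   # image names differ per cloud
        flavor_ram: 4096            # smallest flavor with at least 4 GB
      with_items:
        - { cloud: vexxhost, image: ubuntu-16.04 }
        - { cloud: citycloud, image: ubuntu-16.04 }
        - { cloud: internap, image: ubuntu-16.04 }
```

Running the os_auth loop first keeps the failure mode obvious: if auth is broken you find out immediately, instead of mid-way through a server boot.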
So the first one is that, as it relates to Ansible versus Heat, or Ansible and Heat or whatever, there are actually Ansible modules that will run Heat templates for you. So we have support for driving Heat with Ansible. And you can run Ansible inside of Heat templates too, yeah. So you can tie those things together; it sort of depends on what your workflow is. Heat, of course, exists inside of an individual cloud region, so if what you need to do is orchestrate some resources across multiple clouds, Heat isn't sitting in a position to be able to do that. In that case, you could have a Heat template, for instance, that you run in each of your clouds, and then use Ansible to make sure you're running it on each of them. You could also do this with Terraform. Some of this also depends on what your larger overall environment is. So if you're doing more Terraform-y things with your other content, beyond just spinning up servers, then Terraform would be a good choice. There are some issues that are OpenStack-level issues, in that shade has a whole bunch of business logic in it to deal with the different cloud configurations out there. That's a Python library, and Terraform's in Go, so Terraform at the moment doesn't have a way of consuming those same workarounds. Terraform's gonna work great on the clouds that it works great on, and it's gonna work poorly on the clouds that it doesn't, which isn't a knock on Terraform; that's our fault. But we've got efforts, we've identified that in the wider OpenStack community. So ultimately the long-term goal should be that Terraform is just as good a tool for you to use, if using Terraform is your goal. If you're terraforming things, I would hate for you to have to ansible a whole bunch of resources up with the cloud APIs and then Terraform on top of them because the Terraform Gophercloud driver doesn't work on your cloud.
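For the Ansible-driving-Heat case, the os_stack module is the relevant one; this is a minimal sketch where the cloud name, stack name, template path, and parameters are all invented for illustration:

```yaml
# Driving a Heat template from Ansible with os_stack. Running this same
# task with a loop over several cloud names is one way to get the
# cross-cloud orchestration Heat alone can't do.
- hosts: localhost
  connection: local
  tasks:
    - name: Launch a Heat stack
      os_stack:
        cloud: mycloud
        name: demo-stack
        state: present
        template: files/stack.yaml
        parameters:
          flavor: m1.small
```

The template file itself stays plain Heat Orchestration Template YAML; Ansible just handles where and when it gets applied.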
That's a terrible outcome. So ultimately the answer should be: it kinda depends on the rest of your ecosystem. We in Infra wind up being very, very, very close to these modules because we're using them to do other things as well, all day long. The sort of next step, in walking through a progression of playbooks, is: hey look, this playbook spun up a server, then I registered it with my in-memory inventory, then I ran some boot-up scripts. So it's not just about getting cloud resources; it's about doing things on them with Ansible as well. So there are steps there, but it's a really good question. Real quick, since there are no more real questions, I just wanted to give two shout-outs. Check out Ansible Cloud Launcher, which Ricky wrote; it's a role for describing sort of more complex cloud topologies that you might wanna spin up using Ansible. I believe there's going to be a demo of that in the next session. Oh, that's great, there's gonna be a demo of that in the next session, so stick around for the next session that I'm bleeding over into. And also there's another tool called LinchPin, which our friends in CentOS wrote for similar but different reasons; it focuses more on being able to abstract different clouds, and they're using it to spin up test environments, but it supports basically all of the things that Ansible supports to do that. So those are both things worth further reading, and with that I'm gonna shut up because I've already taken too much of Paul's time. Well, he doesn't start for five minutes. Oh, well, maybe he wants to come up here and. I don't know, I thought I had to get off. Well, I mean, technically it was. Okay, yeah, it probably is. There's like 20 minutes or something between them so that people can walk places and do stuff. Do you have a talk in the next hour? I don't know. Let's put my phone down there. I know, today's awful. Clicking noise. For you.
The next thing I was going to do after creating the servers, for what it's worth, was to run the Ansible inventory script to go query all of the clouds, to show you that we now have an inventory full of the servers we just created, because what live demo would be complete without that? Also, as follow-up, these are all in the documentation, so go read it. You'll notice that it takes a while to run a dynamic inventory against your OpenStack endpoints, because it's querying multiple clouds to find out what all of your servers are. It might not be your preferred choice, every time you run an Ansible command, to wait however long the actual dynamic inventory takes. So there are also facilities in the clouds.yaml file to express some caching information: you can tell it to cache the inventory for however long, and it'll do that. So you can see this has got, there are my Vexxhost things. These are groups that it created, with a bunch of different servers in them: instances, instances grouped by flavor. And then here are the actual entries: basically the entire Nova server record goes into a variable in the Ansible inventory entry for that host, called openstack. So if you're doing your Ansible variables, you get the entry for the host and then there's an openstack variable on it, and inside of that is the entire content of the Nova server record. You have access to all of the metadata that Nova knows about a particular server. And now someone's going to submit a module to remotely turn off your microphone when your time's up. That's a good module. Somebody should write that module. That'd be awesome. Awesome, all right. Well, I mean, I like listening to you talk, but you know. Paul's being awesome and giving a talk. Paul, come, yeah, if I were you I would literally just walk up here and be like, dude, get your crap off this table, I'm moving in. Awesome.
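The caching knobs also live at the top level of clouds.yaml; this is a sketch from my recollection of the os-client-config cache section, so the exact keys are worth double-checking against its docs:

```yaml
# Top-level cache section in clouds.yaml. expiration_time is in seconds;
# per-resource overrides can shorten it for things that change often.
cache:
  expiration_time: 3600   # cache API results for up to an hour
  expiration:
    server: 5             # but only trust server lists for 5 seconds
```

With caching on, repeated Ansible runs can reuse the cached inventory instead of re-querying every cloud each time.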
All right, is that the end? We didn't do badly. Like, how far into the weeds were we? Just partially. Jim, you'll give us the honest answer. Yeah.