All righty, hello everybody. I'm going to be talking to you about the OpenStack Ansible project update for the current Pike cycle, and then what we're planning to do for the Queens and Rocky cycles moving forward. My name is Andy McCrae. I'm the current PTL for OpenStack Ansible, and I was the PTL for the Ocata cycle as well, so for the last two cycles.

All righty, so some of the key things — what does OpenStack Ansible do? We're all about production deployments of OpenStack using Ansible. I guess it's in the name, so you probably figured that one out pretty quickly. We used to be called the OpenStack Ansible Deployment project on Stackforge, but we moved to the OpenStack namespace a couple of cycles ago, and we're now just called OpenStack Ansible. The whole key, and what we're after, is doing production-ready deployments — allowing you to scale and customize your infrastructure and your OpenStack services around your hardware and your needs for your use cases. So we don't enforce things. For example, you can deploy services in containers, but you can put them on metal too if you want, and you can then decide what you want to do. So it's flexible and customizable. We really just want it to work at scale in production, and we want to do things like upgrades correctly, so that we know they work and are tested — and we do have testing for upgrades and various other things.

A couple of things in the description there: we use LXC containers to build some of the OpenStack infrastructure services. Like I said, you don't need to do that, but by default we'll set those up as part of the deployment itself. We think containers are a pretty good way to deploy your infrastructure services — you get some really cool benefits around how you can do upgrades, consistency, repeated deployments, various other things. It's just a really good way of doing that.

So, a quick background: it started off as a POC in the Havana cycle.
I was working on that at the time. We then moved it to Stackforge in the Icehouse cycle, so literally the next cycle over, and it moved from Stackforge to the OpenStack namespace in the Kilo cycle. For those of you that aren't familiar, Stackforge was kind of where you put all the OpenStack-related projects that aren't specifically Nova or Keystone or the core OpenStack projects. So deployment projects were all in Stackforge, and there were various other kinds of analytics tools and things like that there — a lot of really useful stuff. A lot of that got moved into the OpenStack namespace when they moved to the Big Tent model.

So I don't really like stats around contributors and such, but according to Stackalytics we had 106 or something unique contributors. A lot of those are, I guess, drive-by one-off contributions, so I don't want to paint a picture that we literally have 100 people developing this thing — that's not the case. We do have a pretty big set of people with more than 10 contributions in terms of code commits, and we have a set of at least 10 to 15 people that are consistently doing a larger number of contributions. So I think for every project, that "how many contributors we have" stat is a little bit misleading, and I know there was a discussion about that for Stackalytics. I personally found it useful to know who's interested, even if it's just for one commit — it at least gives you an idea that some people are looking at it. But yeah, I wouldn't read too much into that personally.

Cool. So, Pike — what have we added, or what are we adding? Because it's not over yet; we've got a couple of months to go until September. We're a cycle-trailing project, which means we get essentially two weeks extra at the end of each milestone and release period to finish up features.
The reason for that is that, as a deployment project, we utilize the head of master for the other projects. So if Nova, for example, hasn't released yet, it's very difficult for us to ensure that everything's working, and if they add a feature on the last day of the cycle, or a fix that's really critical, it's hard for us to implement that fix on the same day. So we get two weeks at the end. Our official release for Pike will be, I think, September the 11th to the 17th.

So these are the things you can expect — some of these are actually already there. The CentOS support: we've been working on this not just for Pike but in Ocata, and I think we started working on it in Newton. We've been slowly adding support for CentOS, and we're now at a state where we gate a full build against it. We have daily deployments running. We've got some timing issues — it takes a little bit longer than our Ubuntu deploy, so it doesn't run on every commit because it times out, but we have a daily gate that runs, and it's building successfully at the moment on master and on the stable Ocata branch. We're slowly adding in the extra services that we have working on Ubuntu that aren't yet working on CentOS. I would say it's now at a stage where, if people are interested in using OpenStack Ansible with CentOS 7, now is a good time to start trying it out, seeing if it works for you, and helping us move it to a point where we can happily say that, yes, it's ready. And because I know it's always a question: we do have some people running CentOS 7 clouds using OpenStack Ansible — there are two that I know of off the top of my head. So it is being used, but it probably isn't as stable as the Ubuntu install that we've been doing for many cycles now, so any help getting it there is appreciated.

The next one's an OpenStack community goal.
All the projects have this goal around deploying API services using mod_wsgi or uWSGI instead of Eventlet, and then fronting them with a web server. We've decided — or we're deciding — to do it with nginx and uWSGI apps. We think this gives us a lot of benefits around scale: you can put nginx wherever you want and then link in the uWSGI apps. Also around consistency — all the apps will then be deployed in the same way. And uWSGI gives you some really cool tools to help with upgrades and service restarts: you can do very cool things with reloads and the way we handle service updates. So we think that's a really cool benefit. To be honest, we were already going to do it even if it wasn't a community goal, but the fact that it is a community goal really helps us, because we rely on all the other projects to implement mod_wsgi, or at least a WSGI app, instead of doing Eventlet. The link there is a spec — we have a spec up that's currently being worked through, with a work-in-progress patch.

And then lastly, we've put a lot of focus on documentation over the last couple of cycles. In the Newton cycle we redid the deployment guide, and in the Ocata cycle we started working on an operations guide for OpenStack Ansible — and last week we actually moved it from being a draft to being an actual, not-draft guide. There's a little bit more work to do before we release Pike, but it's at a point where we think it's got some useful information for people. What it isn't is a guide that's supposed to tell you how to operate Nova or Cinder or any of the other projects; it's aimed at literally being a guide that tells you how to do things with OpenStack Ansible. Because of the structure of how we set things up, it's slightly different to just running services in various locations — we use virtual environments, for example.
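As a rough illustration of the nginx-plus-uWSGI pattern being described — the service name, ports, and paths below are hypothetical examples, not OpenStack Ansible's actual defaults:

```shell
# 1) Run a service's WSGI app under uWSGI on a local socket
#    (WSGI file path and port are illustrative):
uwsgi --socket 127.0.0.1:35358 \
      --wsgi-file /openstack/venvs/keystone/bin/keystone-wsgi-public \
      --processes 4 --master

# 2) Front it with nginx (fragment of a server block):
cat > /etc/nginx/conf.d/keystone.conf <<'EOF'
server {
    listen 5000;
    location / {
        uwsgi_pass 127.0.0.1:35358;
        include uwsgi_params;
    }
}
EOF

# 3) The uWSGI master process supports graceful reloads, which is part of
#    the upgrade/restart benefit mentioned above:
uwsgi --reload /run/uwsgi/keystone.pid
```

Because nginx only needs a reachable socket, the uWSGI workers can live anywhere — which is the scaling flexibility being claimed.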
So if you want to be able to use the CLI, you need to go to what we call a utility container, or you need to activate the virtual environment for, say, Nova to get the Nova CLI. So there's that kind of information, and also information around how the built-in database that we deploy runs. You don't have to use that database, but I think a lot of people are, so we've got some operations information around that.

So for Pike, we've been asked to put up how we see the focus on the various things for the Pike release. A lot of these link in with the three goals from before. For example, scalability, resilience, and manageability: we think the uWSGI goal addresses that. We feel like we can scale better — we can put nginx in various locations, and we can do restarts in a much more intelligent manner. So that speaks to the resiliency, manageability, and scalability items. User experience: the guide and the addition of CentOS 7 support will, I hope, help with user experience — it's been a focus for us. Security is always a focus. There hasn't been specific work going into it this cycle, although Major Hayden has added CentOS 7 support to his STIG repo. So we've got the OpenStack Ansible Security role, which addresses some of the security standards. If you're interested in that, it isn't specific to OpenStack Ansible — it runs against OpenStack Ansible, but you can run it against any hosts if you want. So if you're interested, take a look. And interoperability — well, we kind of always care about it, because we want it to work with as many services as we can, but there was no specific focus. And modularity we always mark as not a focus, because the way we've designed it makes it really modular — we don't need to do anything to make it more modular, because of the way it's designed anyway.

So, in Queens — these slides seem backwards, let me... here we go. So, possible features and enhancements for Queens.
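To make the virtualenv point concrete — the venv path below is illustrative (real venv directories are versioned per release), but the pattern is what the operations guide describes:

```shell
# The CLI clients live inside per-service virtualenvs, so activate the
# right venv first before the service's CLI is on your PATH:
source /openstack/venvs/nova-15.1.2/bin/activate
nova service-list
deactivate

# Alternatively, the "utility" container ships with the common OpenStack
# clients pre-installed, so you can attach to it and run `openstack`
# commands directly without activating anything.
```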
So we're looking to build artifacts — work's already gone into that. We've started to split the way we do deploys so that we tag based on different phases of a deployment: you have installation, configuration, and a third one that I can't remember off the top of my head. Essentially the idea is that you can do different stages of a deploy using tags. The aim, really, is that you'd then be able to deploy an artifact — so, for example, a container — and then do only the configuration steps by running a tag. So we don't have to change anything about the way we do our tasks, but we can use the tags to get the benefits of not having to build individual containers, and instead move artifacts into place. So yeah, the separation of deployment steps there.

We've also put some work into systemd integration, which has added some isolation, reporting, and monitoring for the services that you run, and I know Kevin Carter is doing a lot of work in that space to make things more reliable and manageable for all the services. And then there's work that's already started to add SUSE support to OpenStack Ansible. That's work that will be going on in Queens — I can't promise it'll be finished by then, but it's all happening in that cycle.

So I'll just go back to the other slide. These link in, again, with the focuses. The scalability and the resiliency are all around the artifacting work and the systemd work; and for interoperability and user experience, we think the SUSE work will give a better user experience for people who'd like to deploy on something other than CentOS or Ubuntu.

So, for Rocky — if I'm honest, at this point we haven't had the PTG for Queens yet, so it's quite hard to say what we'll be doing in Rocky. But we always have key focuses — all six, apart from modularity, which I would say we get built in by the way we've designed it.
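A sketch of the tag-based phase idea described above — the playbook and tag names are illustrative examples, since the real tags differ per role:

```shell
# Full run: build/install artifacts AND lay down configuration
openstack-ansible os-nova-install.yml

# Re-run only the configuration phase against already-deployed artifacts,
# by limiting execution to a config tag (tag name is an example):
openstack-ansible os-nova-install.yml --tags nova-config

# Ansible can show you which tags a playbook actually exposes:
openstack-ansible os-nova-install.yml --list-tags
```

`--tags` and `--list-tags` are standard `ansible-playbook` options that the `openstack-ansible` wrapper passes through, which is why no task changes are needed to get the phased behavior.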
They're all focuses for us. But I think we're going to start using some of the benefits of artifacting to improve our upgrade process, and I think that adds resiliency and manageability. So those are two of the key things I would hope we'll get out by the time we're moving to Rocky. And user experience is always important to us — I think we've put in enough effort to show that documentation, and helping people utilize the deployment system, is a key focus for us, and I definitely don't see that changing. A deployment project that no one uses is a pointless deployment project.

So we were asked to come up with a question we'd like feedback on. And the question I always like to ask is: what is the biggest barrier for people when trying to deploy their cloud using OpenStack Ansible? We'd love to try to make that easier if we can. I know there are some difficult things that, as much as we'd love to make them easier, really come down to your hardware or infrastructure. Some of the feedback is always that networking is hard to set up, and it's hard to integrate those things. We'd love to help document that, but at the end of the day, setting up networking — and having the knowledge to set up your infrastructure's networking — is probably still going to lie on your shoulders. But if there's something we can do to make it easier, we'd like to do it. As an example, we've had a lot of feedback around the way we install our virtual environments and pip packages, and there's work going on — at least last week I saw some patches for it — to move from a Python script whose output is hard to read, to Ansible tasks that are a lot more modular and show you exactly where a failure happened, rather than just dumping output. So we do take the feedback that people give us pretty seriously, and we try to build it in so that things are easier for people to use.
And then, if anyone has time, we'd love to get opinions on the operations guide. It is very new, so there's going to be some stuff in there that's not quite right. At the PTG we did a review phase — we got all the developers to take a section and just wholesale cut out things that were wrong, or fix up things that needed fixing. And there's more of that to do, because a lot more content has gone in over the last few weeks. So we'll be doing that again — but if anyone's interested, or would like to tell us what's missing from it, that would be cool too.

So yeah — they told us to keep it short so we can leave a long time for questions in these sessions, about 20 minutes, and I'm actually about five minutes fast. But that's pretty much the update for OpenStack Ansible for these past couple of cycles. If there are any questions, I'm happy to answer anything related to OpenStack Ansible — what we achieved in Ocata as well, and moving forward. My only ask is that you use the microphone; otherwise I can try to repeat the question. Shall we start closer to the back?

"My question was, I was always intrigued by your project, but I was confused about the LXC container piece of it. Do you guys have any thoughts around moving to Docker, to be more in line with what other people are doing?"

So when we started doing it — okay, so I think there are slightly different ideologies behind them. But yeah, we get the question a lot. It's kind of like, "hey, you need Docker, and then you can do Kubernetes and stuff." But we actually investigated Docker early doors, and we didn't really like the way it handled certain aspects. Also, our LXC containers run as basic hosts.
You can actually connect to them, run commands on them, and operate them the way you would operate a normal host — they're just lighter-weight containers, right? And the Docker model doesn't do that. The other thing — one of the other key reasons I'd say we didn't go that way — is that we wanted the flexibility. OSA itself uses LXC containers by default, but — and this kind of sounds weird — we don't actually care what the host is, right? We don't care if it's a container or a host; it's literally just a place to connect to and run some tasks. Whereas the Docker model is definitely more "pre-create a thing and then move it into place," and then you are, in a way, locked into whatever that thing is.

So in OpenStack Ansible you can decide. We've had use cases where people ran the Swift proxy services on physical hosts — they were hitting a performance bottleneck and needed more power, so they just ran the proxies on physical hosts and then co-located memcache on those same hosts. In a Docker container model, you'd need a memcache container and a Swift proxy container and then you'd connect them — which is fine, that will work — but there's always an overhead to running a container anywhere, regardless of how small it is. So we just like the flexibility of being able to point at a host and deploy some stuff onto it. The LXC container bit is just our version of "hey, let's do containers," because we think it's a really cool way to handle the infrastructure services.

On that note, I would say that we purposefully don't deploy things like cinder-volume (if you're using LVM), nova-compute, or the Swift storage services in containers. We purposefully deploy those on metal, because there's a one-to-one ratio between the server and the service — you can't run two nova-computes on one physical compute host, right?
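The "containers run as basic hosts" point can be seen with the standard LXC tooling — the container name and service unit below are made-up examples:

```shell
# List the containers running on an infrastructure host:
lxc-ls -f

# Get a shell inside one and treat it like any other host
# (container name is an example; use one from the listing above):
lxc-attach -n infra1_keystone_container-5f6a7b8c

# Inside, it's an ordinary system: inspect services, read logs, run tools
systemctl status
journalctl -u keystone   # unit name is illustrative
exit
```

This is the operational difference from the Docker model being discussed: each container is a long-lived machine you connect to, not an immutable image you replace.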
It doesn't make sense — you're managing the same resources underneath. And Swift storage, in fact, I would say is even worse, because you want to make sure you have consistency of the storage on your host, and if you put it in containers you now have to mount that storage in, and it just becomes a mess where you're adding complexity for no real benefit — in my opinion, at least. But all the infrastructure services, the APIs and everything, are awesome in containers. So yeah, it's a slightly different ideology. I don't think we'd ever move to Docker, personally — Kevin would probably kill me if I said we would. I think we like the way we're doing it now, and we're really after stability, upgrades, and a consistent deploy, rather than constantly chasing a new technology that's an ever-moving goalpost. I imagine it's Kubernetes and Docker today, and something that hasn't come around yet tomorrow. If you try to chase that, I think your users suffer, because you rip stuff out from under them, and we really do not want to do that. We'd like you to be able to upgrade to whatever the Z release of OpenStack Ansible will be from here.

"A follow-up on that: for people that feel like LXC itself is too exotic, how well tested is a bare-metal deployment of all services? And do you gate on it?"

Yeah — so actually, we have a scenario test that's just gone in. We don't gate on it; we have a periodic job that runs daily — I think Kevin set it up as a daily test, so it runs once a day. So effectively, if code merged that broke the bare-metal path, we wouldn't notice it immediately, but we would notice it when we check the dailies — which we actually do, because we have full upgrades in a daily periodic job as well, just because that takes longer than the one hour, or I think one and a half hours, we have for a gate. It takes longer than that to do a full deploy and then upgrade.
So we do that daily and we check it, and then we have our CentOS gate that runs daily, which we check as well — and there are a couple of others. So we check those pretty regularly. I would say that by the end of a release, bare metal will work; but if you want to use the master branch halfway through a release, I'm not sure I'd guarantee it works. That's not so much about us — it's more about things changing in the other upstream projects that we deploy from master. On our master branch we test the head of all the other projects, just so we know when they change something that breaks our code, and we fix it as quickly as possible. So there are definitely periods where things aren't working on master, but the stable branches are good. Thank you.

"Hi there. First of all, thank you for the talk — it was really nice. I do have a question. We are using OpenStack Ansible pretty successfully on the Newton release, so it's really nice and helpful to us. Currently we're working on a CI/CD implementation around it. Do you have any experience with that, or is it planned to integrate CI/CD into OSA as well?"

I don't know — Kevin, do you know anything about that you could speak to? I mean, I know various organizations are using it for CI/CD, but I'm not sure what we would add in terms of integration for it. It's not something I'd say is a bad idea to have upstream, but I'm not sure what it would look like. For example, I work at Rackspace, and in our private cloud team we have some integration testing between OpenStack Ansible and some monitoring stuff that is Rackspace-specific, and I think some logging bits — we're actually trying to move the logging stuff out of being Rackspace-specific and more upstream. But I'm not too sure what a generic CI pattern would look like.
But if you have some ideas, I'd love to hear them, because that is something a lot of people want to do, right? I'd be keen to see what your ideas are, at least. Okay.

Yeah — so that's a good point. We have an OpenStack-Ansible Ops repository as well, which is where we put a bunch of tools that aren't necessarily related to setting anything up, but are more operational — I think there are some for adding compute hosts, removing compute hosts, and doing operational tasks as Ansible playbooks that already link in with the OpenStack Ansible inventory. That would be a really cool place to have that kind of CI/CD tooling, and also, like Kevin said, the ops guide would be a really great place to put that kind of thing as well. I guess it doesn't have to be very specific — it could be more generic, along the lines of "here are things you can do, and here's how you'd want to hook them in." I know Kevin and Major have been working on monitoring plugins that are very generic, which we plan to use, but the aim is for them to be generic enough to be used by any deployment project. "Okay, thank you." Thanks.

"So, Kevin told me I had to come up and troll you, but I actually have some nice things to say. Kudos on the OpenStack Ansible security stuff." Ah, yeah — I'll send that on to Major; yeah, that's Major's work. "We have an adjacent project, so we don't use your deployer, but we started using the security role and it works perfectly. And then the docs stuff is amazing, so kudos to the folks doing your docs." Yeah, I'll pass that on. We did put a lot of effort into the docs — and, well, getting the security stuff out was effort too — but thank you, I'll pass that on. "And then a quick question about the choice of nginx over Apache, given that Apache is kind of the default that OpenStack uses. Are you expecting any issues with some of the..." Actually—
"We had Keystone federation and..." Okay, so yeah, that's interesting — Keystone federation. One of the discussion points we're having is leaving Keystone in Apache if you want to federate, because that path already works. Although some of the Keystone devs have told us that getting it working in nginx wouldn't be difficult — they literally just need to sit down for a bit and do it. So there are some concerns around that. Also, we actually ran into our first Nova bug because we were using nginx instead of Apache: they did some really weird things with HTTP headers that they assumed wouldn't be there — or, I don't know, there were some weird settings — and it threw a stack trace and died. But that's now fixed. Essentially — I can't remember who it was — one of our community members did some performance testing and found we could get better performance out of nginx. So we had a discussion and went: well, if it's better, let's just use it. We gate on all this stuff and we track master, so if we run into these bugs, by the time we release it should be good to go. And actually, in Ocata — if you deploy Ocata you'll have the Nova placement API service, which is new — that runs behind nginx and uWSGI. So from Ocata we've had that, and we've actually had Keystone support for nginx and uWSGI for maybe three cycles now, but you couldn't do federation with it. So yeah, there is that concern. Thanks.

Go ahead. "Again, this is my third OpenStack Ansible session this week, because I like it a lot. I have another question which I forgot to ask yesterday. We are trying to set up CI/CD using OpenStack Ansible, and we intend to use the latest and greatest on master. And I see you are bumping versions on a weekly basis — sometimes it's every second week or something on master." Yeah.
"Do you have any plans to be more aggressive and automate that process, so we get weekly instead of two-weekly — or daily, kind of?"

I mean, to be honest, I have no real preference about how often we do it. The only thing I'd say is that the release team starts to struggle if we release too frequently — we were getting to a period where it was taking more than two weeks for just the release patch in the releases repo to merge before we could actually do our SHA bump. The process we follow for releases is essentially: we do the releases patch, which goes into the OpenStack releases repo; they then tag our release — 15.1.2, I think, is the latest Ocata one; and then we SHA-bump all our upstream pointers to point to the head of stable Ocata, for our roles and for the upstream services. Then we leave it for two weeks to make sure there are no major issues, and then we release again. But we were running into issues where the releases patch, which we depend on merging before we can do our SHA bump, was taking more than two weeks to merge.

"But is it the same with master? Do you do any release work on master?"

No — on master we follow the OpenStack milestones. So we'll have milestone one, and then two and three, and then RCs — though pretty much we just do one RC and then push it out. So now there should be a milestone-one release for 16.0, something like 16.0.0.0b1 — that's the milestone-one release for master. But if I'm honest, if you're trying things out on master, just use the head of master, or maybe a SHA a little further back, something like that. We don't pin — well, we pin the upstream project SHAs on master, but not the role SHAs, so you will always point to the head of master for each role in OpenStack Ansible. Whereas on the stable branches, we point to a specific SHA for the roles — the OpenStack Ansible roles.
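To illustrate the pinning model being described — the file locations follow the repo's conventions from this era, but the entries shown are made-up example values, not real pins:

```shell
# Role pins live in ansible-role-requirements.yml at the repo root.
# On master, roles track branch heads; on stable branches, each entry's
# version is a specific SHA:
cat ansible-role-requirements.yml
# - name: os_nova
#   src: https://git.openstack.org/openstack/openstack-ansible-os_nova
#   version: master            # a pinned SHA on stable branches

# Upstream service SHAs are pinned even on master, in variables like:
grep nova_git_ playbooks/defaults/repo_packages/openstack_services.yml
# nova_git_repo: https://git.openstack.org/openstack/nova
# nova_git_install_branch: 0123abc...   # example SHA value
```

A SHA bump, then, is a patch that rolls these version values forward to the current branch heads.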
So that's slightly different, yeah.

"So then we can implement our own jobs to chase master all the time, and lock them for our users." Yeah, yeah. So on master there are no releases — well, there are those milestone releases, but they're only once every couple of months, I think. The next one is June — so next month is milestone two.

"An additional question about the daily jobs you mentioned: are they public? Are they running on a public Jenkins, or...?"

Yeah, yeah. The OpenStack CI allows you to run periodic jobs. In the same way you can go to zuul.openstack.org and see the currently running jobs, there's a periodic queue you can go to, and it'll show you the periodic jobs that run. So you can see those, and you can see the logs of why one failed, the same as any other job. It just runs on a schedule, and it doesn't have a timeout the way the others do — which makes it really good for our upgrade tests and some other things. "Okay, thanks." Cool.

"Hello, and thank you for the presentation once again. This is more of a generic OpenStack Ansible question. How much trouble would I have trying to deploy a multi-region installation using OpenStack Ansible? Would I need to deploy separate clouds and then work to manually merge them, or can I do it somehow automatically?"

So we did have a multi-region deploy. The OSIC cloud, which was used for gating, had multiple regions in it. So, Kevin — I don't know if you want to talk about the multi-region work you did, or how hard it was? Do you want to come up to the mic? That would be useful. It's called payback.

So, multi-region stuff — I have some configs I'd be happy to share with you. Effectively, it was two separate deployments with independent inventories that would hook back to one another, so that we had one Swift deploy and one Keystone that oversaw everybody, and then different services for everything else across the board.
And I had a compute cloud and an Ironic deployment that were independent of one another. We had 250 Ironic nodes and 352 nova-compute nodes working together inside of a multi-region cloud. And yeah, all of it's public, and I'm happy to share all of that with you — our Swift configs, switch configs, whatever you need. That's all public, it's all open domain. But anyway — it totally works. The ugly bit is that they are independent deployments; they don't have a single inventory across the two. You have deployment one, which is region one, and deployment two, which is region two, and they just share some variables.

And the Keystone is shared between them, right? Yes — so it's not federated identity, it's fake federated identity, because it's one top-level identity provider. Yeah. I mean, we do have some support for federated auth in Keystone — that's been there for a while now — and that totally works. The federated identity totally works; it's the CLI interactions that are a pain. It becomes a pain getting a federated token and then having to remember to set it every time you want to run a command — I'm like, oh, I'm running here, I'm running there.

Another really ugly part of a multi-region cloud is that it looks at region one, and then it looks at region two — well, I added region two second. So if you don't specify the region in your command-line call, it just always takes the last region, which isn't necessarily the region you want your client running commands against. And if you take the RC file from Horizon, it doesn't have the region in there, so it was always running against region two. A lot of our users were like, "why is this not working? Why am I uploading my images to region two? Why can't I see them in region one?" — and it was because they weren't setting region one. So there are some ugly parts there, but I have a huge write-up on all of it, so I'm happy to share.
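The region-defaulting problem described above is avoided by setting the region explicitly with the standard OpenStack client options — the region names here are just examples:

```shell
# Pin the region for the whole shell session...
export OS_REGION_NAME=RegionOne
openstack image list

# ...or override it per command:
openstack --os-region-name RegionTwo image list
```

Adding `OS_REGION_NAME` to the RC file handed to users is one way to stop them from silently landing in the last-registered region.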
I'll ask you for a link. Yeah, yeah. You should have done a Swift multi-region deploy, and then you could have put the images on both sides. Yeah — boom. See. "Thank you very much." You're welcome.

We've still got eight minutes, so if there are any more questions, feel free to ask. And if you remember something later and want to come talk to me, or any of the OpenStack Ansible team, feel free to reach out to us on IRC — #openstack-ansible on Freenode — or Twitter, or email, whatever. And if anyone would like an OpenStack Ansible buffalo sticker, feel free to come up afterwards — I've got a whole bunch, so you're welcome to them. Cool. All right. Thanks, everybody.