Okay, okay. Good morning. Buenos dias. Thank you again for coming this morning, I know it's early. I'll try to control my excitement about this specific topic. Just to make sure you're in the right place, this is room 113 and we're going to talk about globally dispersed active-active cloud regions. This topic, and I'll go into more detail on it, is actually something that interests me a lot, primarily because I'm an old-school production support engineer, and having active-active anything is really important to making sure that I can meet my business units' SLAs. Being active in multiple data centers is obviously the traditional model for being highly available. So the question posed is: how can you do such a thing with OpenStack? Since pretty much the very early days of Nova, OpenStack has had the concept of regions, and the idea that you can have multiple regions in your OpenStack cloud. But what about dispersing those regions across data centers, or even geographically? Is it possible? I'll start out by saying the answer is yes, it is possible. But the question is, have you ever actually seen it done? I searched Google top to bottom and never found any articles or any explanation of how exactly, specifically, you do it. So that's what I want to share with you today. We're going to go over, at a high level, how I conceived the concept of active-active regions in OpenStack, and then I'm actually going to show you an example of how I'm doing it, and I'll prove to you that it actually does work. That, in my opinion, is the important piece of the puzzle: showing you and proving to you that it actually works, so that you can do this yourselves in your own data centers.
A quick introduction about me, and I promise I won't spend too much time talking about me. I actually do love talking about myself; I'm an Aquarius, so that's just one of our traits. I've been in IT for 18 years. Again, my name is Walter Bentley. I am a senior technical marketing engineer. You may ask yourself, what the heck is that? Basically, I'm too technical to be a typical marketing person, because my background is system administration and being a solution architect and a cloud architect. But at the same time, they felt that I could be very useful producing content to help you guys consume OpenStack in many different ways. So my role is to write blogs, talk at events such as this, and basically share knowledge overall. I get paid to fly around and talk to people, which is actually a really, really cool job, so I thank Rackspace for that. These are some of the companies I've had the privilege of working for. I am a New Yexen, which is what I call myself: I'm from New York originally, but I moved to Texas. If you're familiar with the United States, you'll realize that New York and Texas are complete opposites as far as places to live, but I've made it work. So New Yexen is my term, and that's something I made up myself. I am a cloud advocate. I love hybrid cloud. Excuse me. So if you'd like to talk about hybrid cloud, come and talk to me later. I happen to be an author and a knowledge sharer. If you didn't get a chance to get my free book at the book signing I did the last two days, I'm sorry, they were gone. But I actually do have one left, so if you guys are willing to arm wrestle or duke it out, we can figure it out. I'm an OpenStack believer through and through. As for hobbies, I'm a motorcyclist and I'm actually a DJ as well. My Twitter handle is there, and I do have stuff on GitHub, a lot of it around Ansible and OpenStack.
So check that out if you can, along with my blog, which is actually pretty dusty and old; I haven't updated it in a while, but I'll do my best to get back to it. So, the agenda we're going to cover today: first, the concept of active-active cloud regions. What is that? Making sure we all have a level playing field as to the understanding around it. Then we'll go through some of the tips and tricks around how you execute it, because I'll be open and honest and say that it's not exactly the easiest thing to do. What I mean by that is that you have to cross-register services between the two regions, and I believe in having something I call an administration region, which is the guy in the middle that manages all the other regions. That administration region can be placed anywhere; it doesn't have to be where the other two regions are. But I believe that is the best model to follow when doing active-active clouds in OpenStack, and again, I'll explain what that is. At the end, I will actually show you a working example. I'm going to be leveraging Rackspace's public cloud, where I spun up an all-in-one OpenStack cloud in two of our different regions, and I spun up another administration cloud as well that's going to manage those two regions. I'll show you that too, because that's the important piece of the puzzle. So, the active-active cloud approach. To me, in my opinion, you always want to do it in the most native and OpenStack-y way (OpenStack-y, I just made up a word). I try not to do stuff that requires external software to manage something like this. If you're familiar with OpenStack, there are many highly available ways you can go about it. This is just an example of what it looks like if OpenStack is geographically based in two different places.
So you can build an OpenStack cloud to have high availability across multiple data centers: multiple regions across multiple data centers with different availability zones. That's one approach to solving for high availability with OpenStack. Then you have the ability to achieve high availability in a single data center: you can have multiple AZs and multiple regions in one data center as well. That's another high availability approach. And then the last one, which is the easiest one possible: you have one single region, you set up multiple availability zones, isolate your compute nodes into those availability zones, and use the anti-affinity Nova scheduler filter to make sure that your instances never land on the same compute node. That is one other method by which you can achieve high availability with OpenStack. So those are three different models you can use to accomplish that. The model that we're going to talk about today is this one right here, and we want to make sure we understand what this model is. So, active-active OpenStack cloud regions. This is a reference architecture that I put together that made sense to me. If you look at it in the sense of region A and region B, you'll notice that they independently have the OpenStack core services installed: each region has Nova, each region has Glance, each region has Heat, Neutron and Cinder, both independent of each other. They share nothing in that sense. And then in the center, you'll see that there is something that both those regions do share, which is Keystone and Horizon. And you may say to yourself, well, why would you want to have those services shared and not the others?
Well, the reality is that if you're going to have multiple active regions running, you really don't want to manage two sets of users. If you manage two Keystones, you've got to manage two sets of users and make sure passwords are synced. Now, could you do it? Absolutely. But why would you want to? Imagine if you have 10, 20, 100 regions; that's 10, 20, 100 Keystones that you have to keep in sync. That's a nightmare. So the idea is to have all of your regions, no matter where they're located, share Keystone, and of course share Horizon, because who wants to log into multiple Horizon dashboards and chase down IPs for dashboards? No one wants to do that. So imagine that you have your region alpha, which is region A, your region beta, which is region B, and in the middle, what I like to call your admin region. Now, the only services that your admin region will actually have running are Keystone and Horizon. And again, that guy can sit anywhere: it can sit with region A in that data center, it can sit with region B in that data center, or it can be in a whole other data center, which is the recommended approach, so that you don't have a single point of failure. And one thing to keep in mind is that, despite the fact that the regions are going to share Keystone and Horizon, I still recommend installing Keystone and Horizon on your region A and region B. What I recommend you do, though, is disable the service. I didn't say stop the service or anything like that; I say disable the service in OpenStack, so that it's there, the endpoint is there, and if you ever lost your admin region, you could activate it and still manage your cloud. So you're never in the dark, so to speak, if you go about it the way I explained. Everybody good with this? Yes?
Thumbs up? Everybody awake? Cool. Excellent. So these are the steps you need to take to accomplish this diagram. Now, these are really simple words; you can read them and it's doable, but there's a lot involved in actually taking on all these steps. The first thing you have to do, if you're going to have multiple regions, is inventory your regions' endpoints and take note of the URLs. And you may ask yourself, why do I have to take note of the URLs? The reason is that you need to take the endpoints of those individual regions and register them on your admin region. Remember, your admin region has to know about those other regions and their services, and they have to be registered there. We'll go through specific examples of what you need to do around that. Then, on your admin region (and truth be told, I skipped a step here), before you register those regions, you need to create the user service accounts, and then you need to create the services on the admin region. So you're taking the alpha and beta regions' endpoints and registering them on your admin region, and in order to register them as a service, you have to create the service accounts, then create the services, and then register the endpoints. Again, we'll go through a live example of that. And then last but not least, the hardest part of it all, the part that I realized was a great challenge, so I'm going to show you a tip and trick for how to do it: you now have to go into each of your regions, however many regions you have, and tell it that it needs to authenticate against your admin region's Keystone.
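Before we get to that last step, the inventory-and-register steps can be sketched as a dry run. This is only an illustration under my own assumptions, not the exact lab commands: every IP, password, and region name below is a placeholder, and the `plan` wrapper prints each OpenStack CLI call instead of executing it, so you can review the whole plan before running it for real against your admin region.

```shell
#!/bin/sh
# Dry-run sketch: build the list of OpenStack CLI calls that would
# cross-register region alpha's Nova on the admin region's Keystone.
PLAN=/tmp/admin-region-plan.txt
: > "$PLAN"
plan() { echo "openstack $*" | tee -a "$PLAN"; }

ALPHA_IP=198.51.100.10   # placeholder for region alpha's public IP

# Service account, role, and service record on the admin region
plan user create --domain default --password changeme nova
plan role add --project service --user nova admin
plan service create --name nova --description "Compute" compute

# Then register the endpoint URLs you inventoried from region alpha
plan endpoint create --region alpha compute public "http://$ALPHA_IP:8774/v2.1"
plan endpoint create --region alpha compute internal "http://$ALPHA_IP:8774/v2.1"
```

In a real run you would drop the `plan` wrapper and execute the `openstack` calls directly, repeating the service and endpoint pieces for each core service and each region.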
Because right now, when it's built, the configuration files are going to be set up to talk to your local Keystone. But you don't want that, remember? You want it to talk to your admin region's Keystone. And in order to do that, you have to make some config changes. Anyone here who's ever had to make config changes to multiple services already knows that it can get really ugly really fast. So I always recommend doing things programmatically. If you don't know me personally, I'm a firm believer in Ansible, a firm believer in automating things against OpenStack clouds. So what I did is put together some Ansible commands that will make those configuration file changes for you. That way you're not going into vi, making a change, deleting an extra character by mistake, saving, and then starting the service only to have it fail. To avoid that whole situation, let automation help you out there. And then last but not least, you just use your cloud. Again, we're going to go through some specific examples. Another thing I wanted to share with you is that I actually ran this as a lab workshop at the last summit in Austin. So what I will do today is let you know where on GitHub I have the lab for that, which goes through the specific instructions and even gives you the specific commands to do what I'm going to show you today. Hopefully you appreciate that as a good takeaway. And if you find any issues, let me know. You shouldn't, because I stepped through my own lab last night and it actually worked, so imagine that. It's been a few months, so I had to refresh my own memory. So before we actually get into the specific examples, I just wanted to share that thought again: configuring OpenStack services is not easy. There are many ways of doing it.
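To make that config change concrete, here is roughly what the finished edit looks like in one service's file, for example Glance's API config. This is a hedged sketch, not the lab's literal file: the option names come from keystonemiddleware's `[keystone_authtoken]` section, both IPs are placeholders, and `<service-password>` is whatever your deployment generated.

```ini
[keystone_authtoken]
# Was the region's local Keystone; now points at the admin
# region's public Keystone endpoint instead
auth_url = https://203.0.113.50:5000/v3
auth_type = password
username = glance
password = <service-password>
project_name = service
user_domain_name = Default
project_domain_name = Default
# keystonemiddleware caches validated tokens locally, so only the
# first validation pays the cross-region round trip
memcached_servers = 172.29.236.100:11211
token_cache_time = 300
```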
It's very tedious. You have to have a lot of attention to detail, and attention to detail in your OpenStack services' configuration files is very important, as I'm sure you guys have already realized. So again, as I mentioned before, I like to use Ansible to edit my configuration files. There are many ways of going about it, and I'm going to show you an example here of how I approach it. If you're not familiar with Ansible, I promise I won't give you too long of a tutorial, but it's basically a command-line orchestration tool. All it needs is SSH to a machine to be able to manage and configure it. You can write ad hoc commands that will connect to that machine, do what you tell it to do, and return a response. It's an amazing tool. Yes, it's open source. Yes, it's free. And if you need help with it, you can always ping me; I do a lot with Ansible. That's just my preference. Another thing is that we at Rackspace use OpenStack-Ansible to deploy our clouds for our customers. So in my opinion, Ansible's already there. We're already leveraging Ansible to deploy the cloud, so why not leverage it even further to do the things I needed to do? You'll notice in the examples that I'm taking advantage of the dynamic inventory generated by OpenStack-Ansible to connect to my machines. I don't have to know IP addresses, I don't have to know names; I just tell it the type of machine I want. I want a Glance container, I want Glance hosts, or I want compute hosts, and it will know and contact the hosts that are registered in the region you spun up. So to me it makes sense to stick with Ansible because it's already there, it's already embedded, and it already knows about your environment. And again, because I'm using OpenStack-Ansible, it makes sense.
If you use another distro, that's fine; you may choose a different tool or a different way of approaching this. But again, this way works very well. So looking at this example, this is a basic Ansible command that I'm walking through. On line two, if you have Ansible installed (which I think everyone here should), you say ansible, space, the type of host that you want to call, space, -m, and the module you want to call. A module is basically something that executes an action in Ansible. There's a shell module, there's a command module, there's an apt module to pull down updates or install packages on Ubuntu, there's a file module; there are modules for a lot of basic Linux operations already built into Ansible. So you call whatever module you want, then space, -a, and basically the ad hoc command. An ad hoc command is very simply a Linux command, a command that you would execute on a Linux command line to do something on that machine. So in this example, I wanted to talk to the Glance container. And again, if you're not familiar with OpenStack-Ansible, the OpenStack services are deployed into LXC containers on the infrastructure or control plane. The services are not installed on the base machine; they're each installed into LXC containers running on your control plane. So in this example, I want to connect to and do something with Glance, so I need to connect to the Glance container. I'm going to call the shell module, because all I need to do is execute a basic shell command. And the ad hoc command I'm going to execute is a sed command. If you're not familiar with sed, I'm sorry, your life has been very boring. Sed is a great tool. It's fantastic. It's been around for ages. So I definitely recommend, if you have not tried or used sed, that you check it out.
It basically gives you the capability of searching a file and making changes to it, whether that's deleting a line or editing something inside the file. You never have to open vi, and you don't have to know any other command. And it's very precise: you tell it exactly what to search for and exactly what you want to replace it with. So I'm going to execute a sed command. I'm going to use -i because I want the change to actually be committed when I make it. Now, sed gets a little weird in that you can use various different delimiter characters, and on this occasion I'm actually using a plus sign as my delimiter. Again, I'm not going to give you a sed tutorial; you guys will figure it out, you're smart. I'm basically going to have it look for the auth_url line that's in the Glance API config file, and once it finds that line, I'm going to have it replace it with the new text that I add at the end. What I'm doing here is taking the IP address of the local Keystone instance and replacing it with the IP address of the admin region. So that when Glance goes to authenticate, gets a token, and wants to validate that token, it validates it against the admin region instead of itself. That's what I'm doing there. So if you look at the whole command in the red box, from line 10 to 13, that's what the whole command looks like, and it's something that you can just execute and it will make that change to that file. And you can line those commands up; I'll show you an example of how you can line them all up, paste them into your command window, and it will execute all of them. So I'm going to show you an example of what this looks like.
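Before the live example, the sed edit itself can be reproduced locally as a minimal sketch. The file path, both IPs, and the config stand-in below are all placeholders of my own; in a real deployment the file would be /etc/glance/glance-api.conf inside the Glance container, and the whole sed command would ride inside the Ansible shell module's ad hoc argument.

```shell
#!/bin/sh
# Create a tiny stand-in for the Glance config, then apply the same
# style of sed edit the Ansible shell module would run in the container.
CONF=/tmp/glance-api.conf.demo
cat > "$CONF" <<'EOF'
[keystone_authtoken]
auth_url = http://172.29.236.100:5000/v3
EOF

ADMIN_IP=203.0.113.50   # placeholder for your admin region's public IP

# -i commits the change in place; '+' serves as the s/// delimiter so
# the slashes in the URL don't have to be escaped
sed -i "s+^auth_url = .*+auth_url = http://${ADMIN_IP}:5000/v3+" "$CONF"

grep auth_url "$CONF"
```

Wrapped in Ansible, the same edit would look something like `ansible glance_container -m shell -a "sed -i '...' /etc/glance/glance-api.conf"`, which is the shape of the command in the red box.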
So remember the steps that we're going to go through here. What I'm going to do is jump into the admin region first and show you how I created the service user accounts, how I created the services, and how I registered the endpoints on the admin region for each of the regions. Then we're going to jump into each of the regions. I'm not going to make the changes to the config files again, because I've already done it, but I will show you the commands that I used to make those changes and how I was able to paste them into a window and make those changes live. And then last but not least, we'll jump into the admin region's Horizon, and I'll show you how you can switch between the two regions, how I can independently do different things in each region, and how they're totally separated, all in one Horizon window and one Keystone. Is that cool? All right, here goes the live stuff. If you have a problem seeing my command window, please let me know; I made it as big as I could possibly make it, but I know that sometimes when you're sitting out there, it's hard to see things. So, is this relatively visible to you guys? Okay. Thumbs up. Everybody awake? All right, I'm giving out free money at the end. Everybody heard that? All right, there's everybody. Okay. So, I have three tabs here. My first tab is my admin region. And again, I've just SSHed into these servers; I'm nothing special, I'm at the root of the server and I have not done anything yet. This is my admin region... my alpha region, I'm sorry. This is my alpha region, and this is my beta region. All right, so going back to admin. As I mentioned before, with OpenStack-Ansible the services are actually running in containers. Let me stretch this window out so it looks a little bit better.
So, the OpenStack services are running in containers. If you execute an lxc-ls --fancy command, it will list out all the LXC containers that are running on my control plane, and you can see there that each service is in its own container. There's nothing fancy about that. One thing you may not know, if you're not familiar with OpenStack-Ansible, is that there's actually a container called utility that's installed on each of your control plane servers. What that utility container is, basically, is a container that has all the OpenStack CLI clients already installed on it for you, and it's meant to be the place you jump into to manage and operate your OpenStack cloud. The reason they did that is that they didn't want you, as sudo or root at the root of a machine, executing CLI commands against your cloud. You can make a mistake. I've fat-fingered things and destroyed a machine. We've all done it; I know you have. By jumping into the utility container, if you do fat-finger something and execute a mistaken command, it only destroys what's in that container. It does not mess with your base machine: your compute node stays up, your infrastructure node stays up, your control plane node stays up. So what we're going to do is jump into that utility container, because we want to execute a few OpenStack CLI commands so that I can show you what was done there. The thing I love about LXC is that you can SSH into your container just like you would a regular Linux machine, nothing special there. We're just going to source our openrc file, and at this point, if I execute a service list command, you'll see here on this admin region that these are the services I have listed.
And again, prior to the changes I made last night, the only service you would have found there was the identity service. There wouldn't have been the network, compute, orchestration, or image services there. So now if we go and look at the endpoints (sorry, my hands are freezing), and you know what, maybe I'll do it one at a time to make it a little easier to look at. The first thing that I did on the admin region was go in and register a public endpoint for my Keystone. Because if everything is internal and the other regions can't talk to it, then you won't be able to authenticate, right? You'll quickly see here that this is actually my public IP that I've registered, on port 5000, version 3, so that the external regions can talk to my admin region. And I've registered the same thing for the admin interface as well. You'll notice that my internal endpoint is still pointing to my local address; that's still set up there. Then one thing you'll note here is that there are two endpoints that I've actually disabled. You may say to yourself, well, why did I disable those? The reason is that I enabled Keystone to receive traffic on its public IP instead of the internal IP, and you can't have both active at the same time. So the first step is to register those new endpoints for your public and your admin interfaces for your region, and then disable the ones pointing to the local address. Now if we go and look at one of the other services, such as Nova (I won't look at every one, but just to give you an example), you'll notice that on this admin region I've now registered two public Nova endpoints and two internal endpoints. And what you'll notice there is that there are two different public IPs registered.
The 184 address is my region alpha, with a public and an internal endpoint for that, and the 199 address is my beta region, with a public and an internal registered for that. So again, right now we're on the admin region, and we've registered the endpoints of the two separate alpha and beta regions. One thing I actually didn't show you, which will prove my point a little more, is the fact that I'm not pulling the wool over your eyes. So I will log into my cloud account... the windows are hiding on me, that's awesome. There we go. I pray that it comes up, because the Wi-Fi is being very special today. So what I did is log into the cloud account that I have at Rackspace. If we go look at my alpha region, it is running inside of IAD, so this region is running in our Virginia data center. I'm sorry, I'll make this bigger, I see people squinting. So this one is running in our Virginia data center; this is my alpha region. If I go look at my beta region, that guy is running in Dallas, I believe. Oh no, I didn't change it. Okay. All right. I was supposed to split them up. What I was supposed to be doing for you guys was showing you that they were sitting in different regions, and I didn't do that very well. Oh okay, so the admin region is sitting in Dallas. In any event, I was trying to prove to you that they can sit in different regions; I screwed up and didn't put the beta in a different region. I'm sorry, guys. It works, though. Yeah, that was bad. Anyway, we'll go back to this. Don't judge me. So then I'll do the same thing and show you what I did for the other endpoints. I'll pick another service. Again, sorry, it got really slow for some reason. I'm going to close that. All right, so we'll go look at Neutron. I did the same thing for Neutron: for the alpha region, which is 184, I have a public and an internal; for the beta region, I have a 199 public and internal. So basically you have to do that with all four of those services.
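Registering every interface for every core service in both regions gets repetitive, so it might be sketched as a loop. Again this is a dry run of my own construction: the `plan` wrapper only prints the commands, the IPs stand in for the 184.x and 199.x public addresses from the demo, and the service types and ports are assumptions patterned on common defaults rather than the lab's exact values.

```shell
#!/bin/sh
# Dry-run sketch: print the endpoint registrations for the core
# services of both regions against the admin region's Keystone.
PLAN=/tmp/region-endpoints-plan.txt
: > "$PLAN"
plan() { echo "openstack $*" | tee -a "$PLAN"; }

for REGION in alpha beta; do
  case "$REGION" in
    alpha) IP=198.51.100.10 ;;   # stands in for the 184.x alpha public IP
    beta)  IP=203.0.113.20 ;;    # stands in for the 199.x beta public IP
  esac
  for IFACE in public internal; do
    plan endpoint create --region "$REGION" compute "$IFACE" "http://$IP:8774/v2.1"
    plan endpoint create --region "$REGION" network "$IFACE" "http://$IP:9696"
    plan endpoint create --region "$REGION" image "$IFACE" "http://$IP:9292"
    plan endpoint create --region "$REGION" volumev2 "$IFACE" "http://$IP:8776/v2"
  done
done
```

Two regions, two interfaces, four services: sixteen registrations, which is exactly why pasting a prepared list of commands beats typing them by hand.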
That means Nova, Neutron, Glance and Cinder, and of course Heat if you're going to use Heat as well. So we've covered that part. Now let's jump into the alpha region. Let's say, for example, I'm in my alpha region now, and I'll pull up an example of what it looks like. By the way, if I didn't mention it already, this is actually my GitHub. My GitHub ID is wbentley15, and this is the location where I have the notes for the workshop I did on setting up active-active. If you step through these steps here, it will actually walk you through how to set that up, command by command. So if we look at this guy right here, step 3a: step 3a is where I walk you through the Ansible commands that you can execute to make the changes you need inside your configuration files. I've literally mapped out every command. The only thing you have to go and do is copy and paste: basically, replace this placeholder right there with whatever your IP address is. So you can literally put it in a text window, search for the admin region IP placeholder in the brackets, and replace it with your IP. You can do that to this whole file, and once you do, you can literally take all the commands and paste them into the window. Basically, what I did is step through my own lab, and again, I'm not going to drag you through evaluating each command; trust me, it works, I did it myself. The same thing goes for the beta region. If we go back here and look at the beta region instructions, they're the exact same thing; the difference is, again, you just have to take the admin region IP placeholder and replace it with whatever your admin region's IP is. The same goes for however many regions you're going to set up actively
within the cloud. Going back here, the other steps, setting up the service accounts, are what you would do in step two. These are the steps I stepped through: creating the actual user, creating the role that they need, creating the service, and registering the endpoints for each of the services, whether it be the alpha and beta public or internal. Then, at the very end, there are two steps for you to disable that Keystone service; remember how I showed you disabling the local one, because I've registered the public ones. And at the very end, you just make sure that everything looks similar to this. So again, these are the hard and fast instructions on how you can do this, and this is the location where that information is stored. Since we're getting close on time, we'll jump into the actual... you know what, I'll jump into one of these containers and just show you the change that I made there. Did they move the location, or... go ahead. Oh, thank you. As you can see, I need more coffee; sorry about that. So if we do a search for auth in here, you'll see that inside this configuration file, before I made my change, this actually pointed to the local Keystone running on that region, and all I basically did was execute those commands to update it to point to my admin region. So if I go here to the browser and go to that same address, and I'll literally copy it for you so you know I'm not fudging you, that is going to be the address. Oh, sorry, I need to do HTTPS; that's the only caveat there. Right, so this is our admin region. You're looking at the Horizon dashboard for your admin region right now. If I log in here, what you'll quickly note is that once you log in, you're not automatically dropped into any specific region. You're right in the middle right now. And what you'll quickly notice
is that if you expand Admin, you don't see a lot of things: you don't see Image, you don't see Instances, you don't see Network. The reason for that is that in the admin region, those services really don't exist. It's not until you go up here into this dropdown that you can actually switch to one of the regions. Notice that typically in Horizon, if you don't have regions active, when you hit this dropdown, this area won't be here; but because the admin region knows that you have multiple regions registered, it adds that dropdown. So if I hit alpha to switch over to the alpha region, what you should see under the System tab is more things showing up, because those services are active there. You have to give this a second, because it actually takes a little bit of time to connect the first time. And again, I probably should have done this before we started today, but just give it a minute, I promise it'll be here. I do promise. So, how many of you are actually running OpenStack clouds in your organizations right now? Oh wow, that's a lot, that's excellent. When I first started with OpenStack, probably one guy would have raised his hand, and he probably would have said he's running it on his desktop. So that's actually very encouraging. Are you guys actively thinking about running active-active OpenStack clouds? Is that why you're here today? Okay, that's good. It can be done, and again, it's not that difficult to do; it just takes a little bit of planning and a little bit of organization. And I promise you, it's not this slow in the normal world. Right now this thing is routing its way all the way back to Virginia, and the Wi-Fi here is not exactly the fastest at times. What happens is that the first time the service goes to authenticate, or you use Horizon to connect to that region, it takes a while. And I've found that even
when I was local: it took a while, but it does come up. Yes — question? Well, that's actually a really good question. The question was: if you have Keystone sitting someplace else, does it slow down the other services, since all the other services depend on Keystone? The answer is no, not really, and I'll say it this way. First, you go out and get your token. So you've got your token from Keystone, and now you pass that token with every service call you make, whether it be to Glance or Nova. There is something inside OpenStack where — I don't want to say it caches, but it keeps track, it acknowledges that you have a valid token, so it doesn't have to keep talking back to Keystone. Once it authenticates you, or recognizes that your token is valid, the service doesn't talk to Keystone anymore at all; it just keeps working. So what I'll say — and I can't believe this thing is still going — is that the first initial call may be a little bit slower than normal, but after that it'll be normal provisioning and normal API responses. And again, only a little bit slower; I won't say it's drastically slower.

Yeah, this is actually rather embarrassing, but I'll take more questions while I'm in my embarrassing stage here. Yes? Absolutely — since I happen to have time, I'll show you the repo for it. I knew that was going to happen. I knew it was going to be mean to me. Give me one second; I'm going to do some massaging of a few things here. I'm going to do exactly what I had to do last night, which is basically go into Horizon and restart it. For whatever reason it was being mean to me last night too, so we'll get it going. If you're not familiar with Horizon, which I'm going to show you, it runs under Apache. So basically I'm going to stop the Apache service, start the Apache service, and then Horizon will
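The caching behavior described above — validate the token once, then stop going back to Keystone — is why a remote Keystone barely slows down the other services. The toy validator below just counts the round trips to make the point; it is an illustration of the idea, not the real keystonemiddleware, and the token strings are made up.

```python
# Sketch of why a remote Keystone barely slows the other services: a token
# is validated against Keystone once, the result is cached, and follow-up
# API calls never make the slow cross-region trip again. Toy illustration,
# not the real middleware (real caches also expire entries over time).

class CachingTokenValidator:
    def __init__(self):
        self.keystone_round_trips = 0
        self._cache = {}  # token -> validity

    def _ask_keystone(self, token):
        self.keystone_round_trips += 1       # the slow, cross-region call
        return token.startswith("valid-")    # stand-in for a real check

    def validate(self, token):
        if token not in self._cache:
            self._cache[token] = self._ask_keystone(token)
        return self._cache[token]


validator = CachingTokenValidator()
for _ in range(5):                      # five API calls with the same token
    assert validator.validate("valid-abc123")
print(validator.keystone_round_trips)   # -> 1: only the first call was slow
```

Five service calls, one trip to Keystone: that first call pays the latency to the remote admin region, and everything after it runs at normal speed.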
start fresh and new. We should see the window in the back fail once it stops, and now we'll just start Apache fresh again. Okay, and we will refresh this window, and we should get a better-feeling Horizon. Notably, this same thing happened to me last night, so I expected it to happen on stage as well. While that continues to embarrass me, I'll show you the OpenStack-Ansible stuff. This is a GitHub repo where you can find the instructions, as well as the playbooks and roles, to deploy OpenStack using Ansible. It is under the OpenStack repo, so this is a legitimate OpenStack project that is supported by the community. It was originally created by Rackspace, but by no means was it something we decided to keep proprietary: it is open source, and it is managed by the OpenStack community. And just so you know, we personally use it to deploy our clouds for our customers, so it works. It's highly available, it's highly scalable, you can do in-place upgrades with it — there are a lot of neat things about it.

And I do not understand what is going on with this live demo, guys; it's not being nice to me today, and of course I'm out of time as well. One more minute. I'm going to pivot, and you're just going to have to take my word on it. See, I'm a fixer — I'm an admin at heart, and when things don't work, I want to fix them; it's just my nature. You usually don't try to do this on stage, but here we are. And I know as soon as we... yeah, there we go. As soon as I get off stage, it will start working fine; that's just the way it is. Okay, so we're going to try this one more time — everybody cross your fingers and toes — and switch over to the Alpha region. Again, it will take a little bit of time, but I'm feeling positive vibes right now, so let me take some more questions while we patiently wait. Yes, sir? Yeah, so you're asking, is this similar to Nova cells? Well, yes, cells —
actually, that's something that we use at Rackspace. We have a public cloud, and it runs OpenStack — so if you use Rackspace's public cloud, you're running on an OpenStack cloud, and we use Nova cells. Cells are generally for when you get into large, complex OpenStack environments with thousands and thousands of compute nodes that you want to segregate out into cells. Most organizations, unless you're running OpenStack as a public cloud, probably won't cross into the realm where you need cells, because cells get more complicated: now you have kind of a Nova master and a bunch of control planes running Nova services underneath it, and that's a totally separate thing from each and every other service. So unless you're planning a large deployment, like your own public cloud running OpenStack, I don't think you'll need to move into cells — you can manage very well running multiple regions instead. Question? Yes, yes, yes, you can. You can split your admin region and make your admin region into multiple regions itself. The only thing you have to do is put some sort of load balancer in front of them to manage the traffic. In a reference architecture that we use at Rackspace, we have a three-node control plane that's active-active, and the way we're able to do that is we put a load balancer in front of it, whether it be a physical load balancer or something like HAProxy. As long as you have something in front that equally distributes the traffic to each of those control plane services, you can have multiple. So the same thing that goes for multiple control planes, you can do with multiple regions as well, for Keystone and Horizon. And actually, you know what, I will take that into consideration for next time. Hopefully, if I get
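The load-balancer arrangement described above can be sketched as an HAProxy configuration fragment. This is a hypothetical `haproxy.cfg` for fronting an active-active three-node control plane's Keystone service; the hostnames and ports are placeholders, and a real deployment would do the same for each API service it exposes.

```
# Hypothetical haproxy.cfg fragment for an active-active three-node
# control plane. Hostnames are placeholders; repeat the pattern for
# each OpenStack API service you front.

frontend keystone_public
    bind *:5000
    default_backend keystone_nodes

backend keystone_nodes
    balance roundrobin
    option httpchk GET /v3
    server ctrl01 ctrl01.example.com:5000 check
    server ctrl02 ctrl02.example.com:5000 check
    server ctrl03 ctrl03.example.com:5000 check
```

The health checks (`check` with `option httpchk`) are what make it active-active rather than just load-balanced: a failed control plane node is pulled out of rotation automatically, and the remaining nodes keep serving the same VIP.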
a next time to do this, I will actually show that you can run multiple active admin nodes, and I'll make sure they're in different data centers. I'll also make sure this thing actually responds before I start the presentation. Any other questions? All right, I'm out of time, and I don't want to mess up the speaker schedule today. Again, as soon as I get off stage, I know this will work, so if you want to come talk to me right after, I'll probably be able to show it to you then. Most importantly, thank you for your time, and thank you for coming today. If you need to reach me, I'll be at the Rackspace booth all day today. Also, before you leave, if you want to be one of the rarest participants, take the Fix Your Stack challenge: only 25 people at this summit have one of these custom Rackspace soccer balls, and if you want to be one of them, stop by our booth. It's a really simple challenge — there's an OpenStack cloud that I broke personally (I didn't break it too badly), and if you can figure out how to get an instance running, you get one of the custom soccer balls. Again, only 25 people at this summit have one. All right, thank you for your time, guys. I appreciate it.