So, hey everyone, hello. Welcome to Introduction to Project StayPuft. I'm Arthur Berezin, Senior Technical Product Manager at Red Hat, and presenting with me today is Hugh Brock, Senior Development Manager, responsible for Project StayPuft. Unfortunately I don't have a presenter remote, so I'm going to give a little cue to Hugh while I do the slides. So, what are we going to cover today? First of all, we're going to discuss Project StayPuft and what it is — basically an intuitive and easy deployment tool for OpenStack. Hugh is going to present the features, basically what makes StayPuft what it is. He's also going to show a short video demo of the whole flow of deploying OpenStack from scratch to an actual production-level deployment. And we're going to discuss a few roadmap features that we're going to present sometime in the near future. So, next slide please. Hugh, thank you. Yes, next one. So, OpenStack deployment — it can go several ways. Back in the day, when I personally had to install OpenStack from packages, back in the Diablo and Essex days, you would just install the packages themselves and then you had to go into the configuration files and basically make everything work together, right? We all know that OpenStack is made up of about ten major projects. They interoperate with one another and with the backend components — the database, the message queue, and so on and so forth. So you would try to make everything work together, and then you'd discover something was broken. Obviously, it never works — not the first time, not the second, not the third. Then you'd try to fix it, and then basically you'd give up, right? That's what happened back in the day — you just left OpenStack and tried to move on with your life. So then we introduced PackStack, which is basically a quick deployment tool for OpenStack, right?
It's just a single command-line tool. It lets you set up all the settings and the layout for the OpenStack services. You run PackStack to generate an answers file, then you can edit that answers file, which has all the configuration in one centralized place, and then you run PackStack again with the answers file, which runs Puppet and performs all the configuration and deployment for OpenStack. Now, this is a really great quick tool to deploy OpenStack from scratch — you understand fairly easily how to make all those configurations — but the problem is that PackStack is basically not robust enough. If you had to go from one layout to another — say you had an all-in-one layout, basically a proof-of-concept deployment, and you wanted to expand it — that was very problematic and not easily achieved. So we introduced the Foreman project, which is a great configuration management and provisioning tool — a very popular and accepted upstream project that provisions bare-metal hosts, generates kickstart files to provision hosts right from bare metal, and later on runs Puppet configurations on top of them using Foreman smart proxies. Next slide, please. So, as I mentioned, Foreman uses a mechanism called smart proxies. Puppet is one of them, so it's able to run Puppet configurations and apply all the settings that are defined in Puppet modules. We shipped integrated Puppet modules and host groups within Foreman to make the OpenStack deployment fairly easy. So you would just configure those host groups, assign the host groups to hosts, and Foreman would run the Puppet configurations on those hosts to deploy OpenStack.
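To make the answers-file idea concrete, here's a minimal sketch of how a KEY=VALUE answers file centralizes all the deployment settings in one place. The parser is purely illustrative (not PackStack's own code), though the key names follow PackStack's real CONFIG_* style:

```python
# Illustrative sketch of reading a PackStack-style answers file:
# one KEY=VALUE pair per line, comments starting with '#'.
# The parser here is hypothetical, not PackStack's own code.

def parse_answers(text):
    """Return a dict of settings from an answers-file string."""
    config = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = """\
# Generated by: packstack --gen-answer-file=answers.txt
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_CINDER_BACKEND=lvm
"""

config = parse_answers(sample)
print(config["CONFIG_CINDER_BACKEND"])  # lvm
```

In practice you generate the file with `packstack --gen-answer-file=answers.txt`, edit it, and feed it back with `packstack --answer-file=answers.txt`; PackStack then drives the Puppet runs from those values.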
That's fairly nice, but you still have to know Puppet — you have to know your way around Puppet if you want to make any considerable change to the default layouts — and you have to know the OpenStack configuration options, to know exactly which knobs to play with to build a real, working production environment. That's not easy. Hence we introduced Project StayPuft. Our mission statement is basically to provide an intuitive tool to deploy OpenStack — actually production-grade OpenStack deployments. So what does that mean in practice? Project StayPuft is a plugin that works on top of Foreman and leverages Foreman's capabilities, right? Foreman is a very rich tool, StayPuft leverages those capabilities, and it's designed specifically to deploy OpenStack. It includes a very rich, very intuitive user interface — a nice wizard that walks you step by step, collecting all the information needed to describe your OpenStack deployment, and then runs that configuration. It also includes an orchestration layer. Obviously, when you deploy OpenStack, it's very important to first deploy the database, then deploy the message queue, and only then configure Keystone and all the other control-plane services. We also include the auto-discovery feature, a very nice feature that automatically discovers new hosts in the environment, so you can deploy OpenStack on top of them fairly easily. And we also include — and this is a very important one, maybe last but not least — a built-in high-availability layout. As we all know, deploying a highly available configuration of OpenStack is a very complex task.
You have to build the whole environment, the whole layout, from scratch — including the architecture of the environment, including HAProxy, Pacemaker, the load balancer, and so on. So we have this as a feature of the StayPuft project: you just say you want a highly available environment, and StayPuft basically deploys that for you. Hugh is going to explain more and go into more detail on those features. Okay, everybody hear me okay? I'm going to have to walk back and forth a little bit in the process of this, but that's all right. So, my name's Hugh Brock. I'm an engineering manager at Red Hat, and I more or less run everything we do around engineering the deployment — installation, deployment, and management — of OpenStack itself. So yeah, as Arthur pointed out, we standardized on Foreman for RHEL OSP 4, our productized version of Havana. Foreman is a very capable provisioning tool, but it has some shortcomings when deploying a system as complex as OpenStack in particular. Foreman did not have any way to orchestrate complex deployments — as Arthur mentioned, any way to stage the order that services come up in: make sure that block storage waits for controllers, make sure that computes wait for block storage, those kinds of things. And we also did not have in Foreman the ability to simplify the UI for collecting all of the deployment parameters you need. As you know, if you've been around OpenStack at all, there's a lot of information that has to be punched in to get even a modestly complex deployment to work. So we needed to solve those problems, and we needed to solve them very quickly. That was the purpose of StayPuft.
So what I'm going to do is go through the features we added in some detail, and then show you the results of my adventures in Linux video editing — which, I'm happy to report, after going through several tools, was actually successful, I think. So we'll see. Anyway, yes: the problems were difficult installation — you could get an OpenStack deployment installed with Foreman, but it was not easy — no workflow, and incomplete HA modules. So Arthur, if you can hit my next slide, that'd be good. Just hit the mouse button, it should work. Hang on, let me just have a look. Let's just put the cursor right there and then click. Okay, right. So what we did: we built an installer for the installer, first of all. The hardest part of using Foreman, actually — and I didn't even mention this — was getting it installed correctly: collecting the network information you're going to provision to. What's the DHCP range? What are the DNS names? What order are the NICs going to come up in on the host? All of that was more or less left to chance before. So we did a lot of work pre-configuring the Foreman environment so that it's much easier to install now, and I'll show you that when we go through the video. In addition, we had a whole bunch of stuff that was all coming to fruition at the same time with Foreman. The first of that was the auto-discovery image. We now PXE-boot onto bare metal a Foreman auto-discovery image that's based on our RHEV node image. It pokes around, discovers everything you want to know about the host, and sends it back to the Foreman DB, so you don't have to sit there entering MAC addresses into a UI and things like that — it does that for you. So: auto-discovery; a nice clean UI for collecting deployment parameters, with names for the parameters that are human-readable and make some level of sense, which we didn't have before; and, as I mentioned, staged deployment.
At the same time we got the auto-discovery capability, the Foreman guys also came out with the first release of their orchestration plugin, which is called Dynflow. Dynflow is a nice little Ruby tool that lets you order Foreman tasks based on things that happen on hosts. It's still relatively early days, so we weren't able to be all that sophisticated with it, but for this version we were at least able to say: okay, deploy the controllers first, then deploy block storage, then object storage, then compute — and don't try to do everything all at once. And finally, in parallel with all of this, our Puppet team — some of whom are in this room — have really broken their backs over the last six months getting full HA Puppet modules put together that we can really use and support. Those are actually going to ship with our RHEL OSP 4 Async 4 release, which comes out in a couple of weeks, and they will let you deploy a fully highly available configuration of OSP 4 in a relatively automatic way. That configuration has some active/active and some active/passive services; we'll be adding things like an active/active database for OSP 5. So, next slide there. There we go. Right, so I'll get into some detail of what the features actually are. Just over the weekend, I think, we got a live CD of all of this working. What that means is that you can boot a live image — which we'll have available online sometime in the next couple of days — and it will collect some configuration information from you and then come up with the Foreman UI running in the live CD environment, and you can go from that directly and provision your OpenStack. I don't know that people will use that for production, but it's certainly handy for demos and POCs and that kind of thing. The live CD will also have the ability to install the Foreman software directly onto a host. By the way, if you have questions anytime along this, just stop me and ask. Go ahead, Vinny.
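The staging described here — controllers, then block storage, then object storage, then compute — is at heart a dependency ordering. A toy sketch of that idea follows; the role names and data structure are mine for illustration, and this is not Dynflow's actual API (Dynflow is Ruby):

```python
# Toy sketch of staged deployment ordering: each role waits for the
# roles it depends on. This mimics the idea behind Dynflow's staging;
# it is not Dynflow's actual (Ruby) interface.

DEPENDS_ON = {
    "controller": [],
    "block_storage": ["controller"],    # block storage waits for controllers
    "object_storage": ["block_storage"],
    "compute": ["object_storage"],      # computes go last
}

def deployment_stages(deps):
    """Return roles in an order that satisfies every dependency."""
    done, order = set(), []
    while len(order) < len(deps):
        for role, needs in deps.items():
            if role not in done and all(n in done for n in needs):
                done.add(role)
                order.append(role)
    return order

print(deployment_stages(DEPENDS_ON))
# ['controller', 'block_storage', 'object_storage', 'compute']
```

The point of pushing this into an orchestration layer rather than into Puppet itself is that the ordering lives in one place and can span hosts, which is exactly what plain per-host Puppet runs can't express.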
Does the auto-discovery grab all the MACs in the system? Yeah, it does, it grabs them all. And it tries to be smart about figuring out which one is management and which ones aren't. There are limits to that, obviously. Yeah, these are all in the OpenStack Puppet modules upstream. Sorry, yes, apologies — the question was whether the Puppet HA modules I'm talking about are the ones from Puppet Forge. The answer is yes, although we do have some patches against those modules that are upstream in the OpenStack Puppet modules project. I believe there are a couple of instances where we're a little bit different from straight upstream. Yes, back here? Yes, but not in this version. One of the limitations we have right now is that it only works with discovered hosts that we then provision RHEL onto. But yeah, we recognize that's not going to work for a lot of people, so we will be adding the ability to work with pre-provisioned hosts. Other questions before I go on? No? OK. Right, the live CD — yeah, I think I covered that. Let's go on to the next one. OK, so auto-discovery. Once we boot the live CD, we PXE a RHEV-based image onto the host. It picks up all of the facts it can and puts them back into the Foreman DB. The end result, as you'll see in the video, is a list of hosts that you can pick and choose from. On deployment, we then build those hosts for you based on kickstarts that you choose — we obviously ship some, and you have the option, as always in Foreman, of uploading your own. And as I mentioned in response to the question, we don't yet have support for pre-provisioned hosts. Foreman will work fine with pre-provisioned hosts; StayPuft will not yet, but we're working on that. So, we added an install UI. Foreman, as you probably know, is a Rails app. It runs Rails 3.2, I think, or 3.3, and as such it supports Rails engines.
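The discovery image's job boils down to harvesting hardware facts and reporting them back. Here's a rough sketch of that flow; the fact names and report structure are invented for illustration and do not match Foreman's actual discovery schema (the real image uses Facter and Foreman's discovery API):

```python
# Rough sketch of what an auto-discovery image does: collect hardware
# facts and report them to the provisioning server. The fact names and
# report format here are illustrative, not Foreman's actual schema.

def collect_facts(interfaces, memory_mb, cpu_count):
    """Build a fact report for one discovered bare-metal host."""
    mgmt = interfaces[0]  # assume the first NIC is the management NIC
    return {
        "discovery_bootif": mgmt["mac"],  # MAC the host PXE-booted from
        "interfaces": [nic["name"] for nic in interfaces],
        "memorysize_mb": memory_mb,
        "processorcount": cpu_count,
    }

facts = collect_facts(
    interfaces=[{"name": "eth0", "mac": "52:54:00:aa:bb:cc"},
                {"name": "eth1", "mac": "52:54:00:dd:ee:ff"}],
    memory_mb=8192,
    cpu_count=4,
)
# In the real image, this report is POSTed back to the Foreman server,
# which creates a "discovered host" entry you can later provision.
print(facts["discovery_bootif"])  # 52:54:00:aa:bb:cc
```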
So what we did was very quickly whip up a Rails engine that adds another tab to the existing Foreman UI, dedicated to configuring OpenStack. When you install StayPuft on top of Foreman, your Foreman UI gets an extra tab dedicated to installing OpenStack. This part of the process was actually pretty easy, because we were able to use a nice little wizard gem we found. And we did do some of the hard work — not all of it — of inferring the UI from the Puppet modules, so that we don't have this sort of perpetual disconnect between what the UI is collecting and what the Puppet modules actually want, which has been a source of a lot of pain. We have more work coming on that; I know the Foreman team has a whole set of work planned around creating a metadata layer for parameters and that kind of thing. Yes, sir? Ah — yes, the question is, what are we doing with the facts when we're collecting them? So when I mentioned "out of sync" here, what I was referring to was: you build a UI to collect all of the stuff you need in order to deploy OpenStack, and that quickly gets out of sync with the reality of OpenStack, because the list of parameters — and the ones you actually need — changes all the time. We get bitten by that weekly. It's horrible. So we've added some logic that tries to look at the Puppet modules and say, okay, what do you really need? Do we really need to collect all of this, and how much can we default — that kind of thing. We have more work coming on that later. Does that answer your question? Right, the Foreman DB. Yes — the question was, what are we using as a fact repository? We do not use PuppetDB, which is the question everybody always asks, for various reasons, including Clojure — we can't ship it anyway. Instead we use Foreman: Foreman has a perfectly serviceable database for collecting facts, and that's what we use.
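The "infer the UI from the Puppet modules" idea can be sketched as scraping parameter names and defaults out of a Puppet class signature, so the wizard always asks for exactly what the module accepts. This regex-based scraper is a simplification invented for illustration — the real StayPuft logic is more involved, and real manifests need a real parser (the `quickstack::keystone` class name and its parameters here are hypothetical):

```python
import re

# Simplified sketch of deriving UI fields from a Puppet class signature,
# so the UI can't drift out of sync with what the module accepts.
# The manifest and the parsing are toys, not StayPuft's real code.

MANIFEST = """
class quickstack::keystone (
  $admin_token,
  $admin_password,
  $db_host   = 'localhost',
  $public_ip = '192.168.1.10',
) { }
"""

def extract_parameters(manifest):
    """Return {param: default_or_None} from a Puppet class header."""
    header = manifest.split("(", 1)[1].split(")", 1)[0]
    params = {}
    for match in re.finditer(r"\$(\w+)(?:\s*=\s*'([^']*)')?", header):
        name, default = match.group(1), match.group(2)
        params[name] = default  # None means "must be asked in the UI"
    return params

params = extract_parameters(MANIFEST)
print(params["db_host"])      # localhost
print(params["admin_token"])  # None
```

Parameters with defaults can be pre-filled in the wizard; parameters without defaults are the ones the UI has to insist on.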
Unfortunately, that gives us a problem with the upstream Puppet modules, because many of them do use PuppetDB. So we may still, at some point, have to figure out how to ship PuppetDB, but it's going to be difficult anyway. Next slide. Right, so Dynflow. Dynflow is the orchestration module that the Foreman guys wrote over the last six months or so, I think. It's a nice little Ruby plugin for Foreman that basically lets you write Ruby code to specify tasks, concurrent tasks, dependencies among tasks, wait conditions, and that sort of thing. It's a Foreman add-on, and you'll see it generally available in Satellite 6 when we release that this fall. Right now, we aren't using it to its full potential — we're just using it to stage the deployment of individual hosts. But that's not a limitation of the tool; it was a limitation of the time we had to actually build out the OpenStack-specific workflow in it. Once we get — yes, sir? Question? Yes. The question was, does Dynflow have a mechanism for verifying when a task is completed and deciding what to do next? The answer is that it's perfectly possible to make Dynflow do that, but what we ran into was that we needed to add logic to the Puppet modules to report success — like, "yeah, I'm really done." Puppet all by itself will just finish, and you have to figure out whether you need to run it again, or a third time, or a fourth, or a seventeenth, in order to get the box configured the way you want. So we did have to do some work to make Puppet smart enough to understand completion. Now, the right way to do that, actually, is to make the Puppet modules themselves more granular and put more of the work in the orchestration and less in the Puppet modules. But that's going to be a longer-term piece of work. Next slide. Oh, sorry, yes? The question was, can we deploy operating systems other than RHEL? There's no reason you couldn't; we just haven't tested it.
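The "run Puppet until it's really done" problem described here boils down to re-applying configuration until a run reports convergence, with a retry cap. A toy sketch of that control loop — the runner is a stand-in function, not Puppet's or Dynflow's real interface:

```python
# Toy sketch of the "is Puppet really done?" loop: keep applying until
# a run reports no changes (converged), up to a retry limit. The
# fake run function below is a stand-in for invoking the real agent.

def converge(run_once, max_runs=5):
    """Apply configuration until a run reports 'no changes', or give up."""
    for attempt in range(1, max_runs + 1):
        changed = run_once()
        if not changed:
            return attempt  # converged: the last run changed nothing
    raise RuntimeError(f"not converged after {max_runs} runs")

# Simulate an agent that needs three runs before it stops making changes.
remaining_changes = [True, True, False]
def fake_puppet_run():
    return remaining_changes.pop(0)

print(converge(fake_puppet_run))  # 3
```

Making the modules report completion explicitly, as Hugh says they did, is what turns this blind retry loop into something an orchestrator can actually gate the next stage on.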
So Foreman and StayPuft, as we ship them right now, run only on EL6, although Foreman upstream, I'm pretty sure, is perfectly happy to run on Ubuntu as well. StayPuft is also entirely upstream, but we've been in a product chase with it, and I don't know that the upstream version is working all that well right now. However, the choice of operating system that you install on the hosts depends entirely on the initial PXE image. Yeah, and we have been developing on CentOS as well, so it's not just RHEL. Does that answer the question? Yeah, OK. So, the HA work was a parallel effort for this release. We've defined four layouts, or architectures: two non-HA and two HA. And since we're continuing to support Nova Network in addition to Neutron, those are the other two pieces of the matrix. Arthur has a slide that'll show a little bit of our HA architecture later, but it's the simplest thing we could put together that would be bulletproof. Yeah, go ahead. What are we doing for storage? The question was, what are we doing for storage — for example, is the DB on shared storage? We're just doing LVM for block storage right now; we don't even have Gluster yet. We're not deploying either Ceph or Gluster right now — we're going to have to add that later. The Gluster options are there, you just can't use them; you have to use an LVM backend or something, I can't remember. Ask one of these guys later, they can tell you. Right, anyway: three controller nodes, all running identical stacks. The database is active/passive. Qpid in this iteration is active/passive, although we have support for RabbitMQ coming for OSP 5, and that will be active/active, I think. So that's basically the architecture; you'll see the diagram later.
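The four supported layouts are just the cross product of two choices: networking backend and HA or not. Sketched below — the layout labels are mine for illustration, not the exact names StayPuft shows in its wizard:

```python
from itertools import product

# The 2x2 layout matrix described in the talk: (Nova Network | Neutron)
# crossed with (non-HA | HA). Labels are illustrative, not StayPuft's
# exact wizard strings.

networking = ["Nova Network", "Neutron"]
availability = ["non-HA", "HA"]

layouts = [f"{net} / {ha}" for net, ha in product(networking, availability)]
for layout in layouts:
    print(layout)
# Nova Network / non-HA
# Nova Network / HA
# Neutron / non-HA
# Neutron / HA
```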
For OSP 5, again, we'll be getting more variation in the different architectures you can have — like, do you want to break out Neutron networking, or break out storage controllers, other things like that? We don't have that right now. Okay, next slide. Oh, yes, sorry, question. Yes, we've run through the full install on bare metal and using KVM. The easiest way to test it is actually in a RHEV setup where you've got five or six VMs and you can just test the deployment, but we've tested it on bare metal as well. Another question? Yes, it is on the RDO wiki, isn't it? Oh, sorry — the question was, is the HA configuration documented anywhere? The answer is yes; I believe it is on the RDO wiki, and it will certainly be in the release notes when we drop Async 4. Sorry — the question was, is there an upgrade path if you want to move to StayPuft and you're already on OSP 4? Here's the thing: StayPuft is an installer, so it's not really going to help you upgrade. There's definitely an upgrade path for using the new version of Foreman with your existing OSP 4 deployment, but we didn't get at all into trying to support upgrades with StayPuft yet. That's going to be another thing down the road. Yes? So, the question was, when you're moving from OSP 4 to OSP 5, can you take your existing non-HA OSP 4 environment and move it to an HA configuration? My friend Vinnie here is telling me that yes, we have done it before. I don't know how much professional services effort is required to do that — "a lot" is the answer — but in theory there's no reason you couldn't do it. Other questions before I try to put my money where my mouth is? Yes, sir. No — the question was, after the initial install, is there still a role for StayPuft? In its current form, no. Once you've designed your deployment, you're happy with it, and you spit it out there, you'd basically never go back there again.
Now, on the other hand, Foreman is still there — it still hangs around — and that's going to be the path for modifying an existing configuration. Yeah, so StayPuft is basically a deployment tool, but you can still use Foreman to extend your environment, to add new compute hosts and so on. You can't use StayPuft itself to add compute hosts? No — that's part of Foreman's functionality. Now, I would like to add the ability to go back and modify an existing deployment, add capacity, that kind of thing, but we don't have that in there right now. I have a question back here. Sorry, I didn't quite get the question — could you speak up a little bit more? Yes. Let me repeat the question first. The question was: we already have Fuel for Puppet, and Crowbar is already out there for Chef — why did we do another installer? That sounds like a product management question to me, so I'm going to let the product manager answer. So, basically, Foreman is a deployment tool, but it's not only a deployment tool — it's also the management component for the OpenStack environment. When you manage the lifecycle of your OpenStack environment, you can use Foreman to manage it. StayPuft is not only the deployment tool — the component that allows Foreman to install OpenStack — you still manage your environment and the lifecycle of those hosts through Foreman. So it's not just a deployment tool; it does a lot more. Yeah. So, TripleO is basically the upstream effort for doing OpenStack deployment, right? But that's — yeah, to be clear, I manage our TripleO efforts as well, which puts me in kind of an interesting position. Hang on, I'll speak to that a little bit. There are a couple of other points on that question that I think are interesting. One of the things we're really interested in at Red Hat is tying content management and content versioning and the other stuff we do with our Satellite product into OpenStack provisioning.
Some of you probably know that Foreman is the provisioning tool for Satellite, and it's going to get rolled into Satellite in the next version, and we wanted a way to be able to provision OpenStack with Satellite. Now, having said that, I believe that we — and everybody else — will wind up standardizing on TripleO as the long-term configuration and management application for OpenStack. One of my challenges is going to be figuring out how to swap out the deployment and management mechanism so that we're using the upstream standard over time. But that's a problem I'm going to have to solve. Why wouldn't we have picked up more pieces of Fuel and integrated them with our stuff, is what you're asking. Well, it comes back to product strategy with Satellite — we wouldn't be able to integrate Fuel with Satellite. But it sounds like a lot of the pieces you made were adjacent to your strategy — they were just pieces you needed, and the pieces already existed in the community. Well, the HA modules in particular are the same ones that everybody else is using; we've just done a lot of work on them. My understanding of the HA modules that Fuel currently uses is that they were not in a shape where we felt we could support them in a product, so we've done a lot of work — and it's all up there in the community — to make that better. As for the remaining pieces: the discovery piece was already there, the workflow engine was already there. So what we did was basically take a bunch of stuff we already had and kind of jam it together. Yeah, let's watch the demo. Yeah, demo. Good question. Unfortunately it's a five-minute demo. Right, so we'll see if YouTube will do what I want it to do. Okay, so I may have to start and stop this a little bit. Let me — oh, hang on, this is really tiny. That's not going to work at all. Oh, that's even worse. Hold on. Here we go. Hold on. There we go. That's what we wanted.
Okay, so I'm going to start and stop this a little bit, because it goes really fast — we had to speed it up so we could get it into five minutes. All right, so what we're starting off with here: we're just installing StayPuft and Foreman onto a vanilla RHEL 6 machine. What you're about to see is Mike Burns in the middle of going through the installer stages. We're defining a bunch of stuff. Here it goes, running the StayPuft installer itself, and we define a bunch of network stuff in advance — things like: what's your DHCP range? What's your DNS? What's your default route? Oh, maybe I can't — how do I do that? Push this thing. Okay, that's all it's going to give me, huh? All right, too bad. Can you focus — YouTube? I have no idea. Oh, the projector's out of focus. Hang on — that's not much better. Yeah, all right. Got to use a bigger font next time, I guess. Anyway, that's what's happening here, and now we're actually running through the install of StayPuft and Foreman themselves, having pre-configured the network and a few other things like that. We did pre-install some stuff to speed this up. You can see it supports subscription-manager, so if you're doing it on RHEL, it'll pull content from your Satellite setup or from our CDN. Yes — well, we're working on it. Right, so he's choosing — what are we doing? We're choosing which repo we're going to install from. This is all in the Foreman UI now. Does the live CD have a local mirror? Yes, it does. Yeah — didn't we — I thought we ran out of space because we were using a local mirror. Right, all right, so now we get to the auto-discovery. I'm not going to make you wait all the way through the auto-discovery, it takes a little while. Yeah, we cut to the end of auto-discovery, but you can see now you've got six hosts here, and we figured out the MAC addresses and a bunch of other stuff, all right? Now we go on to filling in the wizard.
So this is the actual deployment UI, and I'll just pause right here. This is a screen on which you cannot make any choices — the choices are coming in releases down the road, very soon — but right now, as I mentioned, we only support the four layouts I described. So you do that, and you go to the next screen, and this is where you configure all the different services. The services change according to the layout you choose. Here we have a bunch of HA services that we're configuring. Yes — yeah, so for the non-HA stuff you can almost take the defaults, right, Mike? When you get to HA, then filling in the VIPs and stuff like that takes more work. That's the thing we need to automate, but we haven't done it yet. So we fill in a bunch of parameters, and the nice thing about this is that the parameters are grouped — it's a lot easier to understand what's going on here than it was in the previous Foreman setup. I'm not going to make you watch all of this; I'll skip to the deployment stage in a bit. Ceilometer — I think we're almost done with this part. Okay, so that's what it looks like when you're done filling in the wizard. Now we choose which hosts we're going to use for which roles. So you pick three controllers, one block storage, one object storage, and three computes. For this go-around we happen to be doing this with VMs, but we did also do it with real machines for a while — it just takes forever. The question was, would it have stopped you if you tried to do two controllers? It would not have, and in theory it would work, it just wouldn't really be highly available. We talked about how much validation we want to do — really we should probably say, "warning, this is not going to be a highly available configuration," but right now we don't. Some different Puppet modules. So we get through that, and here we are actually deploying everything. Anyway, you can see it going by — this is pretty much it.
I was unable to get the section of the video that shows the Horizon UI to encode without crashing my laptop, so I apologize for that — but I got the rest of it. The topology is not yet configurable. The question was, is the topology configurable? We would like it to be, but we need to get to a point where — I'll let you finish up, Arthur — we need to get to a point where we can support the configurations that you choose, and that makes it tricky. So yeah, that is part of the roadmap, and this will be possible in the future. And we're out of time, so I will leave you with, if I can get this to work, the last slide. Yes, there's this tab. Right, there we go. Named, by the way, in honor of the late Harold Ramis. Thank you all very much. Any questions — I'll be hanging around.