Okay, everybody got that? Who wanted it? No one's screaming "no", so I'll take that as a yes. Okay, so we'll get started. This is the automated OpenStack deployment comparison talk. You'll find my email address at the bottom of the slide here. My name is Florian, I run hastexo — we're a professional services company that provides consulting and training services around OpenStack, among other things. You'll find my personal Twitter handle there, and the company Twitter handle there. If you look for hidden meaning or any code in either, you will fail. There is none.

So there's one thing I'd like to get out of the way first and foremost. If you were ever under the delusion that it's a great idea to deploy OpenStack manually from packages and then hack them by hand: just don't. Please, just don't do that. Whatever you do, you want to deploy OpenStack in an automated and repeatable fashion.

Another thing I want to get out of the way is that, contrary to what some salespeople might like to tell you, there is no one true way to deploy OpenStack. There is no one way that everyone in the community and outside this community has agreed on as the key way, or the standard way, of deploying OpenStack.
There are several, in fact, because in the OpenStack community we have a relatively unique situation: pretty much every vendor ships the same code as far as the platform itself is concerned. So vendors have a really hard time differentiating on code alone. If you're a software company and you're not able to differentiate on the code you ship, that is something that shareholders tend not to like. Therefore — and this is not a bad thing, by the way — you're going to have to expect your OpenStack vendors to differentiate in another way. If they can't differentiate on the basis of the code they ship, they're going to differentiate somewhere else, and in OpenStack's case, deployment automation is actually a key differentiator for OpenStack vendors. What that means is that very few OpenStack vendors ship a deployment automation scheme that is identical to that of the next vendor over. And for you as an OpenStack user, as an OpenStack deployer, as an OpenStack administrator, that means you must ask yourself two questions specific to your use case and your organization.

The first one is: which is your preferred distro vendor? Do you already have one? Do you have a working relationship with a specific distro vendor that you're happy with and would like to maintain and keep? If that is the case, then that is going to influence your choice of OpenStack product, and as such it is going to influence your choice of deployment automation.

Or you might look at it from a different perspective, which is: what's your preferred deployment automation system? Are you a Puppet shop, or a Chef shop, an Ansible shop, a Juju shop? Which of these is the one that is preferred?
And if you're very much locked in on one of these systems, then that is probably also going to define your choice of OpenStack vendor, or at least limit your available choices of OpenStack vendors. It's also entirely up to you, or up to your organization, which of these two takes precedence. So if you have a very good working relationship with, say, Red Hat, but you're a 100% Puppet shop, then you're simply going to have to make a decision whether you want to go with something that is based on Puppet, or you want to go with Red Hat's product. And that's not a decision that anyone can make for you; it's something that you have to make in your organization and on your team.

Another thing that I want to mention for this talk is the fact that this talk is an overview, and not a technical deep dive. You'll find plenty of technical deep dives at this conference, but this is not one of them. This is intended to give you an overview of the options that you have available, and discuss the pros and cons of each. And finally, a bit of a warning: strong opinions ahead. So I'm not going to worry too much about, you know, not ticking anyone off; I'm trying to give a frank opinion about the various topics that I'm talking about.

So with that, we're going to get rolling, and the first vendor that I'm going to talk about — chosen relatively randomly — is Red Hat. When we talk about Red Hat's deployment automation, or the deployment of OpenStack based on Red Hat products, it's a bit of a never-ending story. Because if we look at the history of Red Hat's OpenStack deployment automation, if you look back a few years, it was like, okay:
we've got this thing called Packstack, this is what we're going to use, it's based on Puppet, wonderful, and that's how you're going to deploy OpenStack. And then: actually, no, back up a bit — Packstack was always just a proof of concept, and we're going to do something completely different. So then — and this was around the RHEL OSP 6 time frame — it was all Foreman and Puppet. Now you would go from the sort of agentless Puppet operation that Packstack was to something that was not only also Puppet-based, but also included bare-metal automation, multi-node orchestration, that sort of thing. Except then they came up with something else.

This is where we are now, and the default deployment stack for RHEL OSP 7: it's called OSP Director, based on RDO Manager, and that in turn uses TripleO and Ironic for deployment, and also bare-metal orchestration and so forth. So this is the final word in Red Hat OpenStack deployment. Perhaps. Right — so we'll simply see whether we're in for another iteration of OpenStack deployment on Red Hat.

With that said, a very quick overview of the general process of how you deploy OpenStack on RHEL OSP 7, that is to say the currently available product. RHEL OSP 7 uses this concept called OSP Director, and your general process there is: you first bootstrap everything by installing your director node. That is basically your seed node, your core node, that you use for deploying everything else. On that node you have to create a director user.
You have to create image and template directories; you register the system with Red Hat Subscription Manager, which enables all of the OpenStack repos, or rather makes them available to you; you then enable them; and then you install your director packages.

Now, one thing that's interesting here — and that's actually a bit of a difference between Red Hat and the other products that I'm going to talk about — is that this stuff plugs itself into the unified OpenStack client as a plugin, which is relatively neat. So they don't ship their own, completely free-standing tooling; they ship something that integrates itself with the OpenStack client. That's actually fairly neat.

What you then do — remember, this is TripleO, and TripleO is "OpenStack on OpenStack" — is define effectively two layers of OpenStack sitting on one another. The bottom layer is called the undercloud, and the undercloud is what you effectively configure on your director node. That's the piece that then, of course, manages your images, your templates and so forth for the bare-metal deployment of your nodes. In order to do that, you have to fetch and install images for what then becomes your overcloud, and that's basically your second step. The next thing that you install is, in effect, your overcloud. I should say that Red Hat deserves massive kudos for the quality of their documentation.
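To make the overcloud step a little more concrete: the overcloud you deploy from the director node is described by Heat templates plus environment files. As a hedged sketch only — the parameter names below follow tripleo-heat-templates conventions, and the node counts are purely illustrative, not a tested reference — an environment file for an HA overcloud might contain something like this:

```yaml
# Hedged sketch of a TripleO/OSP Director Heat environment file.
# Parameter names follow tripleo-heat-templates conventions; the
# counts here are illustrative and not from the talk itself.
parameter_defaults:
  ControllerCount: 3      # three controllers for an HA control plane
  ComputeCount: 2
  CephStorageCount: 3     # Ceph nodes, as offered by the "advanced overcloud"
```

You would then pass a file like this to the overcloud deploy command; in the OSP 7 era the CLI also accepted scale counts directly as flags.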
The documentation is extremely extensive and very detailed. It doesn't only include helpful overviews — like a very nice graphical explanation of what they expect your network to look like — it also has very, very good, detailed step-by-step documentation for how you actually deploy this stuff. So that's a real positive note there.

You have several options for actually installing your overcloud: one of them is GUI-driven and the other options are CLI-driven. You have the option to deploy what Red Hat calls a test overcloud, which enables you to run the entire installation through a web UI — and the web UI is essentially Tuskar. However, Red Hat themselves say that this is something you should only be using for testing. For proper use you would either deploy a basic overcloud using CLI tools — and the CLI that you use is either ironic or, if you prefer the integrated or unified OpenStack client, that would be `openstack baremetal` — or you can do what Red Hat calls the advanced overcloud. That's another thing you can only deploy with CLI tools, but it includes handy features like Ceph storage, better HA support, and so forth. Now, it remains to be seen how this progresses in the future, because what you would of course want to have is the ability to deploy basic and advanced overclouds with a UI as well. We'll see whether they expand on the path that they're on now, or go back to square one and rewrite everything in Ansible. We will see.

That much for Red Hat. Another vendor that I want to talk about is Ubuntu, and Ubuntu OpenStack. Their default deployment stack consists of Juju for application orchestration, MAAS for bare-metal deployment, and then Landscape as sort of an overarching über-management layer that is web-based. There, your deployment checklist, or your deployment walkthrough, looks remarkably
similar. You're going to find that what I'm talking about here is, basically step by step, conceptually very similar from vendor to vendor. The first thing you do there is install your MAAS server, and MAAS is effectively the equivalent of what in Red Hat is the director node, what in SUSE is the admin node, and what in Mirantis is the Fuel master node. So that's the first thing you do, and it's a very simple package install, more or less: you add a few repositories and you install the MAAS package. What you can then do is wire up MAAS to talk to Landscape, Landscape being a web-based server management environment. There is an OpenStack installer; you can basically plug that into Landscape as you go, and then you can use Landscape itself to deploy OpenStack — to actually install your OpenStack environment. Right, and that basically gives you this web-based interface which you can use for deploying and managing your OpenStack nodes.

If you don't like Landscape — which is entirely fine — if you don't like the idea of working with a web-based management interface, and instead you want everything to talk locally and stay within your firewall:
There is also the ability to use the OpenStack multi-system installer. There's also a single-system option which doesn't require MAAS, but that basically assumes that everything runs on the same node in LXC containers. And if, for some reason, you don't like MAAS either, then you can use Juju in a manual mode as well, where you basically manually add nodes to your Juju environment. Effectively, getting Juju up and running is a very, very simple thing: you basically run `juju bootstrap` and then you `juju deploy` your charms as you go along.

Now, one thing that I personally like about Juju deployment — and it's a concept that you're also going to see in Rackspace Private Cloud, and a concept that is also emerging in Fuel — is the ability to deploy any charm that you wish to an LXC container. Effectively, what that enables you to do is run, say, a three-node high-availability cluster — three physical nodes that you use for your OpenStack control nodes — and then every single charm, every single service, will deploy one container on each of those three nodes, each container forming one third of the cluster. And this is something that actually works really, really beautifully, albeit with a few limitations.

One such limitation is that until very recently — up until MAAS 1.8, before MAAS 1.9 — all of what Ubuntu did there concerned itself very, very little with network management. That is changing. But you had very little ability to manage, you know, bonds, VLANs, routes and so forth, and as a result there was also very little such support in Juju. And if you think about it, one thing that you might want to be able to do is, for example, deploy Cinder to LXC containers. If that LXC container only has a single network interface, that's generally fine.
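To make that container placement concrete, here's a hedged sketch of how such a three-node control plane might be expressed as a Juju bundle. The charm names, machine numbers, and the exact bundle dialect here are illustrative assumptions from the 1.x-era deployer format, not a tested reference:

```yaml
# Hedged sketch of a Juju bundle placing control-plane services into
# LXC containers on three physical machines (0, 1, 2). Charm names,
# machine numbers, and options are illustrative only.
services:
  mysql:
    charm: cs:trusty/percona-cluster
    num_units: 3
    to: [lxc:0, lxc:1, lxc:2]    # one container per control node
  rabbitmq-server:
    charm: cs:trusty/rabbitmq-server
    num_units: 3
    to: [lxc:0, lxc:1, lxc:2]
  keystone:
    charm: cs:trusty/keystone
    num_units: 3
    to: [lxc:0, lxc:1, lxc:2]
    options:
      openstack-origin: cloud:trusty-kilo   # selects the OpenStack release
```

Each service lands in one container per physical node, which is exactly the "one third of the cluster per node" layout described above.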
But if you are deploying, say, in conjunction with Ceph — which this also supports very beautifully — then you might want to bridge your Cinder LXC container with one leg into your Ceph storage network. That would be helpful, right? But that's coming, and that's evolving as it goes along. Juju has actually provided a remarkable degree of stability in terms of sticking to a single tool.

The same thing, by the way, is true for SUSE. Again, we're talking about a completely different deployment stack: we talked about OSP Director and TripleO, we talked about MAAS and Juju, and in SUSE it happens to be Crowbar and Chef. Remember, Crowbar originally came out of Dell, and has the unique distinction of being the software project with the scariest mascot on earth. It's basically a bare-metal deployment facility, and it hooks in with Chef for the actual service and application automation. Again, a relatively similar approach to the one that we saw with Red Hat and with Ubuntu, which is: one thing that you need is your seed node that you then deploy everything else from, and in Crowbar parlance that's called the admin node. If you're familiar with SUSE, you're going to expect all of this stuff to be configurable from YaST, and of course it is. You basically go through a few steps there, then you run a script called install-suse-cloud, and off you go — at the end you have your dashboard.

One specific concept that SUSE caters to, which the others that I've previously talked about don't, is: what if you don't like actually deploying to the bare metal?
What if you don't like bootstrapping your nodes pretty much from scratch? What if you already have your existing management and deployment infrastructure and whatnot, and what you would instead like to do is use an existing machine and deploy to that? You have a facility for that too, with crowbar_register — which is basically a shell script that Crowbar generates for you, that you can then run if you don't want to install the whole works from your admin node.

There's another very helpful thing that you can do, and that's SUSE Studio. For those of you who are not familiar with it, SUSE Studio is a very, very handy way of building appliance images in an automated fashion, and then tweaking them for whether you want to run them on bare metal, on VMware, on KVM, on OpenStack, on whatever. It's a very simple and handy tool for that, and it also allows you to define — or use — virtual appliances that are available in a gallery. And as it happens, there is also a SUSE OpenStack Cloud 5 admin node appliance that you can use there. If you want to deploy this to the bare metal, you can put it into a USB image that you then IPMI-mount and boot up, or whatever you'd like; if you want to test it in a virtual environment, then you can do that with an image from there, and so forth.

So that was the first part: installing the cloud admin node. The second is actually installing your cloud nodes, and here we have, obviously, the ability to do auto-discovery.
That's something that the others I've mentioned up to this point can do as well. Then what you can do is effectively assign individual node roles to specific nodes, and then of course you can tweak individual services and so forth. If you get sick of doing the same thing over and over again for different nodes, there's also a bulk edit mode, which is quite helpful. And in the end, hopefully, what you get is a nicely deployed OpenStack environment.

One thing SUSE has going for itself is a very, very handy way of deploying OpenStack services in a highly available fashion. That's something that every vendor solves slightly differently. In Juju, for example, this is effectively expressed as a Juju relation with an HA cluster charm, which you can then, through that relation, link to any other charm in the system. What you can do in SUSE is basically pull and drag services onto something that you have previously defined as a cluster, and it will just magically reconfigure itself to become a highly available service — as opposed to when you just drag it onto a node, in which case it will deploy itself as a non-highly-available system.

The other thing that I've already mentioned is the ability to use crowbar_register. crowbar_register is effectively a shell script that Crowbar generates for you, which allows you to register previously existing and previously installed nodes — SLES nodes that are already installed and configured — for SUSE Cloud deployment. And then finally, of course, the last thing is that you deploy your OpenStack services. Again, it's a pretty nice drag-and-drop interface that you get there, and the individual items — which in Juju parlance are called charms — in Crowbar parlance
are called barclamps. Here's an example of where, previously, you had individual barclamps deployed to individual nodes; and here's another example where, rather than deploying to individual nodes, you're actually deploying to clusters — basically using the same kind of drag-and-drop method, which is really kind of handy.

And the fourth vendor that I want to talk about is Rackspace — specifically Rackspace Private Cloud, or, as I think they call it now, "Rackspace Private Cloud powered by OpenStack", because that just rolls off the tongue so much better. They, again, use a different deployment model, and that deployment stack is Ansible. And Rackspace deserve commendation for the fact that they're basically doing all of their development on this upstream; it used to be on Stackforge, until Stackforge got retired, so now it's under the OpenStack namespace. What Rackspace don't concern themselves with at all is actual bare-metal deployment. Everything that OSA — the openstack-ansible deployment — expects is an installed Ubuntu 14.04, and how you get there is effectively, you know, your problem.

But again, what you will typically do is install a deployment host. Installing the deployment host is a relatively straightforward process: you effectively git clone a single repo, change into that repo, and run a bootstrap script. So for those of you familiar with Ansible, you might ask: well, why can't I do that on, like, my own laptop?
I mean, I already have Ansible on there; all I need is effectively secure shell to be able to connect to my hosts. Well, as is unfortunately relatively common among projects that use Ansible, this thing relies on a specific minimum version of Ansible, and a relatively recent one — 1.9-something, I believe, if I remember correctly. What the bootstrap-Ansible script will do for you is actually install that Ansible for you. Like that or not — I'm not particularly fond of that approach, but well, that's what it is.

Then what you want to do is configure your target hosts. That's effectively describing your topology in YAML. This, by the way, is not too dissimilar from Red Hat, where you can effectively describe your overcloud in a YAML file; and in SUSE you also have the ability to describe your topology in YAML and then batch-automate your deployment that way.

And then, finally, what you effectively do is run your Ansible playbooks, and off you go. There are three in total. One is called the foundation playbook; then there is the infrastructure playbook — and the infrastructure playbook is an interesting one, because what it does for you is deploy all of the services that OpenStack relies on that are not OpenStack, such as MySQL with Galera for high availability, such as RabbitMQ, such as a decent syslog configuration, such as memcached. And Rackspace were the first ones that also gave you the automated ability to install ELK — Elasticsearch, Logstash, Kibana — directly and fully integrated with OpenStack, and configured for it. There are other vendors that are doing this now.
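Going back to the target-host step for a moment: that topology description really is plain YAML. As a hedged sketch — the host names and addresses below are made up for illustration, and the exact file layout follows openstack-ansible conventions rather than anything shown in this talk — it might look roughly like this:

```yaml
# Hedged sketch of an openstack-ansible target-host description
# (openstack_user_config.yml style). Host names and IPs are invented
# for illustration only.
shared-infra_hosts:    # these hosts receive the Galera/RabbitMQ/memcached containers
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13
compute_hosts:         # nova-compute runs on the metal, not in a container
  compute1:
    ip: 172.29.236.21
```

The playbooks then read this description to decide which containers to build on which hosts.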
Mirantis, for example, are apparently now shipping a Fuel plugin for ELK, and I don't exactly know what Red Hat's plans are in that department. And then, finally, you actually run your OpenStack playbook — that's just called setup-openstack.yml — and that's the final Ansible playbook that you run. And then you hopefully have a working OpenStack environment.

So that's basically my overview here. If you want to use these slides at any time, you can certainly do so; you can grab them from here. The top link is just the rendered presentation and the bottom one is the actual sources, and this stuff is all under a Creative Commons license. So if you feel like reusing it, then please do. And of course, you know, if you grabbed the QR code earlier, it's going to lead you to this as well. And if you're curious about what our company does around OpenStack, then of course you can also take a look at our website; we have a landing page on OpenStack, so do take a look there.

So, when I first did this talk, which was a few months back, the first question I got was: well, okay, so you're talking about OpenStack deployment, right, and that's fine for getting a feel for how easy it is to get up and running with a specific OpenStack environment, but that's not the end of it, right? You want to be able to maintain and upgrade your stuff as well. So I decided to tack on a few slides for that. The problem is: we want to keep abreast of OpenStack releases as they go. If we first deployed our OpenStack cloud on Icehouse, then maybe we'd like to skip Juno and then go to Kilo, or maybe we want to actually do every single release upgrade, and so forth. So we need a way of doing upgrades. Again, I'm going to take this in the same order: I'm going to start with Red Hat.
So Red Hat's upgrade story — basically getting you from OpenStack Icehouse to Juno to Kilo to Liberty to whatever we'll see in the next few releases (well, I'm not going to say what we'll see) — up to this point has basically been: oh well, you don't really need to worry about upgrades, because by the time a new OpenStack release is out, we've just completely changed our deployment methodology, and therefore you're going to have to re-spin from scratch anyway. Which isn't very enterprise-y, I might add. Although of course, you know, there are people who say that the definition of enterprise is running outdated software and paying a lot for support on it — that's not the definition that I like. But right now, that's an issue. It is not an easy task. It was kind of sort of okay, within Icehouse, to go from RHEL 6 to RHEL 7, or CentOS 6 to CentOS 7; that was kind of sort of okay. But if you deployed your stuff with the RHEL OSP installer, with Icehouse, and now you're looking at RHEL OSP 7 — not a good thing. I mean, you're not in a very pretty situation if you deployed with Foreman and Puppet and that's what you built your processes around.

Well, Ubuntu actually does remarkably well in that department, because upgrades are a matter of effectively changing one Juju variable. You can do that basically on the fly with `juju set`, or you can also define effectively a YAML configuration of your whole environment, or you can even deploy a Juju bundle, which is basically extensive YAML describing your entire infrastructure. The process is almost fully automatic, by just setting basically a single variable, which is available on all the Juju OpenStack charms.
It's called openstack-origin. Normally you would point it at the UCA, the Ubuntu Cloud Archive; recently there have also been additions to Juju that enable you to actually deploy from git. The reason I say it's almost fully automatic is that in prior releases there have always been sort of minor tweaks that you then still had to do. But what's really nice is that this does enable you to run effectively staggered service updates. That means you can go through, basically charm by charm, along an effectively defined list — and I think we all owe gratitude to CERN for that, because they found out what the best sequence of service updates is in OpenStack to cause the least blood, sweat, or tears. And what you can effectively do with Juju is upgrade, you know, every single service separately. It's also a very, very easy task to upgrade the charms themselves — the stuff that actually deploys the services for you. So that's actually pretty neat.

With SUSE: SUSE OpenStack Cloud also does have full support for upgrades. You effectively run a script called suse-cloud-upgrade. This has one downside, which is that it's effectively a stop-the-world type of upgrade — at least that's what it was from SUSE Cloud 4 to SUSE OpenStack Cloud 5; we'll see how they do in their next release. What that meant, going from SUSE Cloud 4 to 5, is that all your instances had to be suspended and the API services were down for the upgrade. Then you would actually run it, everything would magically come back up, and you would resume your VMs.

On the Ansible side, there is also support for upgrades.
There is effectively a fully automatic upgrade script, and the expectation there is that you have effectively no impact on your VMs. What you will need to take into consideration is that you're always going to have brief API outages: for a short period of time, you know, your Neutron is going to be unavailable, your Nova is going to be unavailable, and so forth. But you're not impacting actually running VMs, which is what people are typically most interested in.

Another thing that I should actually mention about openstack-ansible, and the stuff that largely came out of Rackspace — and which we're beginning to see as sort of a pattern — is that this stuff just deploys into containers by default. In Juju that's an option; this stuff just does it by default. We'll see whether that is going to become sort of an accepted best practice for deploying OpenStack services: just a single isolated container for every OpenStack service, preferably in high-availability configurations, such that you can always throw one container away, rebuild, throw the next container away, rebuild, and never actually break API availability.

Okay, so with that I am 34 minutes and 30 seconds into my talk, which leaves me five minutes and 30 seconds for questions. So I'd like to open it up now for that. Please fire away — and I do have a Q&A mic, which is not on right now. Here we go, that's much better. And what I'll do is I'm just going to throw this to the first person. You have a question?

"When is the right time to dirty your hands with the actual configuration files in OpenStack?"

When is the right time to do what? To dirty your hands with the actual configuration files? When should you actually get your hands dirty on configuration files — that is, to install it initially, manually?
"So that we get a chance to learn the internals." — Yeah, so, if I paraphrase your question correctly: what is the right time to actually get your hands dirty by hacking configuration files, if it's actual OpenStack configuration files? That time is never.

"But while operating, issues do come up, and you do need to troubleshoot."

While operating the cloud, various kinds of issues come up, yes. They might demand actual knowledge of the configuration files, yes, and looking into the logs. And yes — then please contribute to the tool that you're using and send a patch for that, or work with people who will, you know, troubleshoot and debug the issue for you, or with you. Yeah.

Another question: "Any particular reason you did not mention Fuel from Mirantis?" Excellent question — so, what about Mirantis? You can collect your check later, thank you. Okay, so the only reason I didn't include Mirantis here was that I basically had to cut the line somewhere, and this talk is only 40 minutes. But I did put together a little bit of information about how Mirantis fits into this whole thing. Okay, so what is their default deployment stack?
Mirantis happens to be — with Red Hat having gone to OSP Director — the only major vendor that still backs Puppet. It's going to be very interesting what that means for Puppet and OpenStack upstream development, and particularly for really large shops that rely on Puppet to deploy. But that's what they use, so Fuel is effectively a glorified Puppet front-end, and you're effectively using Puppet for deployment. The deployment checklist, or the deployment progression, is effectively the same as — or very similar to — everything else that we've seen. You start out with installing a Fuel master node. That Fuel master node you actually deploy to the bare metal: you effectively download an ISO image, and then something resembling an old-style PC BIOS comes up, and you make your appropriate settings. Interestingly, the Fuel documentation says about some of these settings that you're supposed to never change them during the entire life cycle of the cluster. I'm just going to take them at their word for that — I've never tried it out, I just don't know what happens there, but I'm sure there are plenty of Mirantis people who could explain that to us in the hallway. Then you boot what Mirantis — or rather Fuel — calls the node servers.
That's Fuel lingo for OpenStack nodes. They effectively PXE-boot, they install Ubuntu 14.04 for you, and then Mirantis OpenStack on top of that. In Fuel 6.1 there was still support for running CentOS 6 on the OpenStack nodes — on the node servers — but that support has apparently been removed in Fuel 7. Then you get a nice, handy, shiny web interface that you can use to deploy your stuff, with pre-flight checks and so forth.

One thing that's somewhat unusual, or peculiar perhaps, about Fuel is that there is a bunch of functionality that is not in Fuel core, but is made available through plugins. Now, some of these plugins you would generally expect to be third-party plugins — for example support for MidoNet, support for Juniper Contrail, support for SolidFire and so forth — stuff that actually comes from a third-party vendor, where that vendor actually writes the plugin. But there are other services, such as VPN-as-a-service and firewall-as-a-service, that some people actually consider relatively core to OpenStack, that are implemented as a Fuel plugin. There are also Fuel plugins for SR-IOV support for Mellanox ConnectX-3 HCAs — InfiniBand HCAs — and there is a Fuel plugin for ELK, Elasticsearch, Logstash, Kibana, and so forth.

And as far as upgrades are concerned for Mirantis: yes, you can do that. The upgrade process is generally such that, suppose you're starting out on Fuel 6.1, you apply all the available updates up to that point.
You now want to go to fuel seven That's what falls is a little weird You basically download a tar ball an upgrade tar ball, which you then need to manually unpack Then that runs an upgrade shell script, which depending on your hardware will take something like 30 minutes or so You're going to have a brief API outages during the upgrade run, but after the upgrade is done is done what it then Provides to you is basically the newly released open stack Version that you can then deploy Services for So so that's the brief update story for Morantis, okay? I'm afraid I'm out of time. Thank you very much for coming Sorry for the for the room situation and have a great rest of the conference Hi, oh, thank you Thank you