I want to thank you for coming out to hear what I have to say today; it's early, and you all look very chipper. I'm David Blackwell, an open ecosystem technical marketing engineer at NetApp, and I want to talk about what we've been doing to make the actual installation, setup, and backup of OpenStack easier, and how we're going to be implementing this for our own OpenStack environments in our Phase 3 OpenStack network. I do like to point out that there are costs to running OpenStack. I used to run a Linux user group, and I liked the mantra that some software is only free if your time is worth nothing. If you're spending 800 hours getting a small project to work, you've chosen the wrong method, and OpenStack is complex; I don't think anyone will argue with me on that point. In general, more specialized staff are needed for OpenStack than for some other environments, simply because OpenStack skills haven't proliferated through the IT industry as a whole. In most of the enterprise accounts I work with, project ramp-up takes a lot longer: as teams try to figure out how they're going to install OpenStack, how they're going to upgrade, and what their backup plans will be, they spend longer on the initial infrastructure setup than on actually getting to use OpenStack. There's also the risk of a full disaster. There's no internal DR solution for OpenStack; you don't have a way to back up 100% of your environment and pull it back in if something happens. This is something I've really had to think about, since my position requires me to architect solutions. I wanted to find a simple way of deploying and managing OpenStack. It needed to be the fastest way to get an environment up and running, and because I work for NetApp, it needed to use NetApp in some form or fashion.
What I've actually done is use our software-defined cloud solution that we have in AWS and Azure, and our Select solution, which is our software-defined KVM offering, as the two backend storage methods that OpenStack will point at. I'm going to show you how I prepare my host (this is a really quick overview), the installation, a comparison of backup methods (traditional versus this way), and then what happens when everything breaks and you have to rebuild. First of all, my environment: I'm using CentOS 7. I'm using Kolla-Ansible as my OpenStack deployment method with the Queens release, and Kolla-Ansible is using the RDO distribution of OpenStack. As I mentioned, I'm using ONTAP Select for my local storage; it's a software-defined storage solution that can install on KVM or ESX. And I'm using our cloud solution as my backup replication point. Each site for this single-node setup needs only two volumes: one to store all the OpenStack information, and one as my Cinder backend. This does work with massive multi-node environments; you just add one extra volume for each node you add, to hold its individual configuration. So why am I saying NetApp is what I'm using in the backend; why ONTAP? We think we stand out for a couple of reasons. Not only have we been part of the OpenStack Foundation for years (we sit on the board), we've contributed 16% of the Cinder code and 60% of the Manila code, so we're very involved with the community and with the project. We have thin provisioning built into our software solution: as you provision Cinder volumes, if you have to provision 100 gigs, you don't use up 100 gigs right away. It only actually consumes that space as the file grows toward that size, allowing you to present more space than you actually have, in anticipation of adding more later. You can use NFS or iSCSI as your backend protocol configurations.
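For context, a Cinder backend stanza for the NetApp unified driver over NFS looks roughly like this. The section name, hostname, SVM name, and credentials below are placeholders for illustration, not values from the talk:

```ini
# Hypothetical NetApp NFS backend for cinder.conf (placeholder values)
[netapp-nfs]
volume_backend_name = netapp-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = svm-mgmt.example.com
netapp_login = admin
netapp_password = secret
netapp_vserver = openstack_svm
nfs_shares_config = /etc/cinder/nfs_shares
```

With Kolla, settings like these end up in the Cinder container's merged configuration rather than being edited in place on the host.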
We do very easy, simple encryption at rest for Cinder: you turn it on and you can forget about it; it'll just always be running. And we do space-efficient clones. What that means is that if you do a Cinder snapshot or a Cinder clone, it doesn't actually make a 100% copy of that Cinder volume. It creates a 4K block-pointer file that only grows as changes to that file grow. So take my 100-gig Cinder volume example again: if you need a copy of that for 10 developers and you clone it 10 times, you're not using a terabyte of space, you're using about a megabyte. Until they actually make changes, all you've created are block pointers back to the original file. Additionally, you can do attached or detached volume migrations between backend pools. If you start to fill up a backend pool, or the performance profile has shifted and you want to move a volume, with the NFS connections you can migrate Cinder volumes without disrupting their instances. Containerizing OpenStack: this is OpenStack Kolla that I'm using. If you've never seen OpenStack Kolla, I highly suggest you take a look at it. It installs on bare-metal systems, it's Ansible-driven, and it runs a modular, per-process set of Docker containers. So it's not just that Cinder is in its own container and Keystone is in its own container; every individual process that runs is its own container. Here's my simple Cinder setup: I have three containers just for Cinder. So if a container doesn't start, it's easier to troubleshoot what individually is causing your issue, rather than having to run through your entire configuration to find where the problem is. Additionally, since everything's containerized, adding features later is far simpler. If you wanted to add Magnum or something later on and you do the deployment, you don't have to worry about libraries conflicting, or installing Magnum causing an update to Cinder that breaks it.
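The per-process layout mentioned above is easy to see on a running Kolla host. A quick illustrative check (the container names shown follow Kolla's usual naming convention; output is what you'd typically expect, not captured from this demo):

```shell
# List the Cinder-related containers on a Kolla host (illustrative)
docker ps --format '{{.Names}}' | grep cinder
# Typical output on a setup like this one:
#   cinder_api
#   cinder_scheduler
#   cinder_volume
```

Each of those is an independently restartable process, which is what makes the per-container troubleshooting described above possible.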
Since everything's isolated, you don't have those situational issues. It's upgradeable. I do this all the time as demos at work: all you have to do is change the name of the release you want to use in your globals.yml configuration file and rerun deploy, and it'll pull the new container images down, run the intermediary database changes that have to be done, and upgrade. Because of that, it's also technically downgradable: if there haven't been too many extreme database changes in projects from one version to the next, you can actually downgrade easily too. I do a demonstration for a guy at work where I go from Pike to Queens to Pike to Queens to Pike to Queens in about 30 minutes. It's also incredibly fast. I'm sure everyone's familiar with Packstack as one of the previous simpler installation tools. Prepping a host for Packstack takes about 10 minutes with all the requirements they want you to do, and then my average Packstack run time is about 45 minutes. When I certified for OpenStack years ago, Packstack was the preferred installation method; in my three-hour lab time, I spent 45 minutes twiddling my thumbs, just waiting for the install to finish. Kolla-Ansible, conversely: since I use Ansible to prep my host, I save a little bit of time there, and the actual run time for me, including container downloads, is about 13 minutes on average for a single node. Every node you add to that, whether to expand controllers, compute, or storage, adds between 30 and 40 seconds. This isn't even our project; I just really love Kolla-Ansible, and that's why we use it quite a bit. The steps involved in using Kolla-Ansible: you edit your globals.yml file, which holds what projects you want to run, what IP addresses you're going to use, which physical interface on the bare-metal machines will be used for management, and which will be used for your Open vSwitch container. Then you run bootstrap-servers.
It does certain things: disabling the right firewalls, fixing the right SELinux configurations depending on your distro. You can run the prechecks if it's your first time running Kolla-Ansible; I do it so often that I skip the prechecks, because I know what I've already accounted for in my playbooks. Then you do a deploy, and that's really the entire thing. When it's finished, you can run post-deploy, which creates your admin RC file, but that's really all there is to getting OpenStack onto a single- or multi-node environment using Kolla. The changes I've made to Kolla for this install example: in my globals.yml file, I added a backend for Cinder to use ONTAP NFS, the line for enabling it, and the configuration input points. I actually have a slide at the end with a GitHub link to all of this; you can download my environment-specific code and chop it up for your own uses, or just look at it and laugh at my ineptitude, either way. I also have patches for the Cinder checks and the Manila checks, though I'm not actually installing Manila in this example. This all works, honestly, not because of Ansible and not because of NetApp. It works because of the way Docker handles volumes. Docker volumes use a metadata file that keeps track of what's been created and what's there, as opposed to a database. So if you can back up that entire directory and put it back someplace, Docker just assumes it created everything that's there. I mount one of those two volumes I showed you in the beginning at /var/lib/docker/volumes. Then, when I install, Ansible tells Docker to create a local volume to hold my MySQL database, and it tells it to create a local volume for the Nova locks. These all get created in that NFS share, which I can then back up completely someplace else, copy back in, and do a restore (which I'll show in a few minutes) that puts everything back. When I do my host preps, I do a few extra steps.
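The whole deployment flow just described boils down to a handful of commands. A minimal sketch (the inventory file name here is an assumption for illustration, not from the talk):

```shell
# Minimal Kolla-Ansible workflow (Queens-era); "multinode" inventory name is illustrative
kolla-ansible -i multinode bootstrap-servers   # prep hosts: firewall, SELinux, Docker
kolla-ansible -i multinode prechecks           # optional sanity checks on a first run
kolla-ansible -i multinode deploy              # pull container images and deploy OpenStack
kolla-ansible -i multinode post-deploy         # write out the admin credentials (RC) file
```

That's the entire control surface: edit globals.yml, then run these in order.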
I remove swap, because I like to use a newer version of Docker than Kolla-Ansible does; I install some packages; I do my pip installs; and I configure my storage selection. So I've created my local storage. I create what we call an SVM, our storage virtual machine environment. I create all the interfaces, set up all the NFS export rules, and create the storage volumes. I do the same thing at my destination site, and I set up all the peering so my SnapMirror can be initialized. I do all those local setups on the host, including the install. It's hundreds of steps if you were to do it all manually, but thanks to Ansible and the different ways you can use it, it's four total commands for me. So I've gone from hundreds of steps to four commands that I know will work every single time. Here I'm just showing a quick example: my host prep runs entirely from Ansible and takes about eight minutes. My host prep also does all that storage configuration I mentioned, because NetApp has a full Ansible module suite, so I was able to just add all of that to my host preparation set. Once my host preparation is finished, I do the actual Kolla-Ansible installation. This took 11 minutes; it also had to download the containers, but that's 11 minutes to download everything and have OpenStack fully running. I like to show that the things I've said are actually happening, so here I'll go ahead and show that all the OpenStack services are running from my setup. And then, for my GUI-using friends in the audience, I do the same thing from Horizon, just to show that, yes, OpenStack is installed, is running, and is accessible. So: backup. The traditional way of backing up most of your environment: first, you have to back up your MySQL database. That's a dump command of your entire database, because you need to make sure all of your different database table interactions and indexes are proper.
For Nova, for every single running instance, you have to do a glance image-create and then a glance image-download. If you have 100 instances, that's 100 files you have to create, 200 commands you have to run, and then you still have to do something with those 100 files. Cinder is also two commands: one command to create the backup, and one command to export the metadata, so that when you restore it, it knows which instance it connects to and which device (vda, for example) it is. Just backing the volumes up and importing them back into Cinder requires you to go through and remember what every individual system connects to. If I have one drive for root and one drive for data, that's 200 additional files I have to find something to do with. So now I'm up to 300 files I have to manage for my 100-instance environment. With ONTAP and the NetApp backup, it's a single command, because I'm backing up the entire volumes and just mirroring them to our cloud. And because of Docker, I have everything in there almost as if my transactional database were a flat file; it's able to take an actual point-in-time grab of it and copy everything over. Using ONTAP Cloud, you have some other advantages over some of the other options for a cloud-based secondary site. You have your choice of AWS or Azure, and of location. Timing controls: you can use our software to set when the instances in the cloud turn on and off. If you only want to do a backup once a day, you can set a timer to start the system 30 minutes before your backup time, and once you know your backup window, stop it 30 minutes after, so you're not paying for cloud compute time when you're not actually using the system. And it's a full version of ONTAP, so if you're familiar with other things NetApp does, our cloud solutions are completely compatible with anything else you're doing today using ONTAP.
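To put numbers on the traditional approach described above, here is a back-of-the-envelope tally. The 100-instance, two-volumes-per-instance figures come straight from the example; the commands in the comments are the kind of per-instance/per-volume work being counted:

```shell
# Traditional backup means per-instance/per-volume commands like (not run here):
#   mysqldump --all-databases > openstack_db.sql     # one dump of the whole DB
#   nova image-create <instance> <name>              # snapshot each instance
#   glance image-download --file <name>.img <name>   # then pull each image down
#   cinder backup-create <volume>                    # back up each Cinder volume
# Counting the artifacts you'd then have to manage:
instances=100
glance_files=$instances                 # one snapshot image per instance
cinder_files=$((instances * 2))         # one root + one data volume per instance
total=$((glance_files + cinder_files))  # the MySQL dump comes on top of this
echo "$total"                           # prints 300
```

Three hundred files to track, move, and verify, versus one SnapMirror relationship per volume in the ONTAP approach.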
So I've had my own environment set up, and I'm now mirroring into the cloud. This is just to show you: I have my two volumes, and I've set them to mirror. Let's jump ahead a little bit. Now what I'm going to do is create an instance. I run this part of the video really fast; everyone's seen an OpenStack instance be created, this isn't anything exciting. But what I do here is, once my instance is created, I touch a file and then vi a line of text into it, so that later, when I destroy the environment and restore, you can see I have the exact same instance I destroyed before. This instance is using a Cinder-backed volume, so not only do I have Nova information, the actual root volume is a Cinder volume in my Cinder backend. So we log into the CirrOS distribution. I just want to show you there's nothing in the home directory currently. I'll touch that file I mentioned; I'm calling it "proof". And then we edit proof just to add a unique line; I'm adding the line "NetApp Insight 2018". Now that I have that file, I have my information. So this is all great. I know I have the information, but especially if you're using my example, which was originally designed for edge locations, telcos that had to put systems at cell-site towers, there's a lot that can go wrong. Disaster is inevitable; if you joined IT expecting never to have a disaster, you joined the wrong field. So when something goes wrong, it could be OS data corruption, it could be hardware failure, or at an edge site it could be outright loss: someone hit the cement block with a truck, someone stole the system, lightning. It doesn't matter; there are extra risks with that kind of exposure. So, my environment: what I've done is simply format my original environment, so it's now gone. All I have is my backup. Traditional recovery from this, if you were to try to do things the older way: you have to first set up MySQL.
Then you have to individually create all of the tables you had, with their proper indexes and links, because a MySQL restore only works if the tables it needs already exist. Once you've done all that manual work, then you can import that dump file that you, of course, remembered to move off the original host. For Nova, per instance, you have to import those hundred glance images back in, and then deploy an instance individually from each of those hundred glance images. And then you're probably going to want to clean up those hundred glance images, because they're not going to be your standard deployments. For Cinder, you have to do an import of all the metadata, and then an import of all the backup volumes, to have them once again connected to your Nova instances. With ONTAP, it's one line: we're just doing a restore of the SnapMirror information in the opposite direction. So the steps we take to fully recover: we prep a new host (you still have to prep a host); we SnapMirror all that data back (the time a SnapMirror takes depends on how much data you're moving); we mount that replicated volume back at the Docker volumes path; and then we just run our Kolla-Ansible install set. So I'm replicating all the data back to my freshly prepared host before I run Kolla-Ansible. Who has some guesses on how long it's going to take, once my data is replicated, to have OpenStack up and running again? Any guesses? One hour? It is not one hour. Any other guesses? 11 minutes? It is actually not 11 minutes. 20? All right, I'm just going to toss some more socks out because they're fun. Five minutes is a better guess. I've got two more, and I actually have a couple more at the booth. All right. Before we see it, I want to show you that Horizon's not running, my OpenStack isn't running. And then, because I never trust IT demos, I want to show you that I don't have any Docker containers running, and I don't have any Docker containers hidden as stopped.
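To recap the recovery steps just described, the storage-side sequence looks roughly like this in ONTAP CLI terms. The SVM, volume names, NFS path, and inventory name are placeholders, not values from the talk:

```shell
# Reverse-direction SnapMirror restore, then remount and redeploy (placeholder names)
# On the ONTAP side: pull the backed-up volume from the cloud copy back on-prem
#   snapmirror restore -destination-path local_svm:docker_volumes \
#                      -source-path cloud_svm:docker_volumes_mirror
# On the freshly prepped host: mount the restored volume where Docker expects it
#   mount -t nfs svm-data-lif:/docker_volumes /var/lib/docker/volumes
# Then rerun the normal deploy; it finds the volumes and database already in place
#   kolla-ansible -i multinode deploy
```

Nothing in the deploy step knows a disaster happened; Docker simply sees volumes it believes it created.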
I'm not trying to cheat the process. So then I go back in and run my deploy, and my deploy actually takes eight minutes. This deploy is on the same type of hardware as my 11-minute install, and it still had to download all the container images. It only took eight minutes because, when Ansible said to create the volumes, Docker said they were already there. When Ansible said to create the MySQL database and set it up, it found it was already there and didn't have to change anything. It didn't have to set Keystone rules; everything was already in my database from that backed-up volume I just remounted at /var/lib/docker/volumes. You can see everything's running again. The only thing you'll have to do, separate from the reinstall, is restart every instance; obviously they couldn't have been running if they didn't exist eight minutes ago. But once you start them all back up, they are the exact same instances. You won't have to remount volumes to them, because they still have their same Cinder information, their same Nova information, their same Neutron information. Here I list my home directory: there's my proof file. And when I cat out the data, you can see it still says Insight. If you want my Ansible playbooks, it's the very first link. If you want to read more about Kolla-Ansible, which you should, it's the second link. cloud.netapp.com has information on our Cloud Volumes ONTAP offering. netapp.io is everything we do in the open ecosystem: OpenStack, Kubernetes, Ansible configuration. And we have a Slack channel where you can come and ask any questions; you can get an invitation at netapp.io/slack. So I'm hoping you're able to see that containerization really is the future of OpenStack, and Ansible is what's driving this change to make it easier.
And NetApp, because of how easy our cloud offerings are and how easy our SnapMirror replication is, isn't just ready for this; we're ahead of the process, making these backup and restore options actually possible. Thank you for your time.