All righty, everybody hear me? Good? All right. Welcome, everybody, to the last session block of the day, which just so happens to be the last session block before happy hour. So hopefully we'll get through these 40 minutes and we can drink to surviving the first day of the summit. I'm glad you all could make it. This is Vagrant Up Your Rackspace Private Cloud. So who am I? From the slide here, you would guess that I'm the son of Ace Ventura. If you haven't been to our Rackspace booth yet, there is a character artist there, hopefully for the rest of the week. He did a fantastic job: he sits there, draws you on an iPad mini, and then posts it all over the internet for everybody to see. My name is James Thorne. I'm a sales engineer with Rackspace on the private cloud team. Some background about myself: I graduated from Texas State University in San Marcos, Texas. After that, I worked for the same university for about a year and two months as a Linux admin, and then I was ready to get out of the university bubble. Whether you're going to school or working for the university, you live in a bubble when you're at college, and I was ready to get out of it and see what the big bad world had to offer. So, ready to take all the knowledge I learned at Texas State on the road, I became a platform consultant at Red Hat. I was there for about another year and two months. I got to travel all over the country, saw some great cities, saw some not-so-great cities, met some very interesting companies, and met some very smart people. When I took that job I had the travel bug, and it was 100% travel; after that year and two months, that itch was scratched. I was done. I was ready to plant some roots and not travel everywhere. There was also that thing called OpenStack. What was it? I kept hearing about it, reading about it, and wasn't really sure what on earth it was.
So who better to join to fully understand it and learn about it than Rackspace? I joined as a sales engineer on the private cloud team, and here I am, nine months later, talking at the OpenStack Summit. This is my first big conference to attend, let alone speak at, so I'm very honored to be here and I appreciate you all coming. Before we get started: I'm going to do a live demo, but less of a live demo and more of a background demo, because if I actually did a live demo, we'd all be sitting here for 30 or 40 minutes just watching terminal text scroll by. Instead, I'm going to start the actual demo, put it in the background, and at the end of the session we'll come back and, hopefully, if everything worked, it'll be done and we'll have a demo Rackspace Private Cloud environment up and running, with roughly one command, to begin using for self-learning, testing, troubleshooting, whatever. So with that, I'm going to close out the presentation here real quick. Everything I talk about is going to be on my personal website: go to thornlabs.net, do a search for "vagrant", and it will be the second link, titled "Deploy Rackspace Private Cloud entirely within a Vagrantfile using VirtualBox or VMware Fusion". This entire post details how to set up Vagrant (or where to go to get Vagrant set up) and how to install VirtualBox or VMware Fusion; you can use either with Vagrant. VirtualBox has a much lower barrier to entry because it's free. With VMware Fusion, you have to purchase Fusion itself, which is about $79 I believe, and then to use it with Vagrant you also need to purchase a provider plugin from Vagrant, which is $89.
So in total (my math is probably completely wrong) it's about $140 to $150 to get started using Vagrant with VMware Fusion; a much higher barrier to entry. I used to use VMware Fusion and switched to VirtualBox for some reasons I'll touch on later, but VirtualBox works just as well. In addition to setting that up, I have Vagrant base boxes available, which are essentially premade images that Vagrant uses to create the virtual machines. There are all sorts of Vagrant base boxes available on the internet; I just happen to make my own, so you can use mine or other Ubuntu or CentOS base boxes. I'm going to skip down to the actual meat of the presentation, which is the Vagrantfile itself. You'll see here I have a handful of different Vagrantfiles for different operating systems, Ubuntu and CentOS, and for different types of environments, non-HA or HA. For this demo I'm just going to use the first one here: RPC v4.2.2 on Ubuntu Server with Neutron networking, non-HA. I'm going to copy that curl command, which points to my GitHub account where I keep all the Vagrantfiles, and when I paste it into the terminal it's going to write that file out named Vagrantfile (one word), and that's all Vagrant needs to get going. So with that copied, I'm going to open a terminal, and hopefully you all can see that. I would bring up the environment on my laptop, but I need a stable internet connection and a decent amount of bandwidth, so I'm actually going to log into my desktop back in Austin to do it. I'm going to go into a folder I've set aside for Vagrant environments, make a directory (we'll just call it rpc-v4.2.2), go in there, and paste that curl command. Now you'll see I have the Vagrantfile; a quick cat just to confirm there's stuff in it. I'm going to open tmux so that if I lose connectivity I can connect back in.
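The fetch step just described looks roughly like this as shell commands; the directory name is arbitrary and the GitHub raw URL below is a placeholder, not the speaker's actual repo path:

```shell
# Set aside a directory for this Vagrant environment
mkdir -p ~/vagrant/rpc-v4.2.2
cd ~/vagrant/rpc-v4.2.2

# Download the environment definition; -o writes it out named "Vagrantfile",
# the one file Vagrant looks for in the current directory.
# (Placeholder URL -- substitute the raw link from the blog post.)
curl -o Vagrantfile https://raw.githubusercontent.com/EXAMPLE/vagrantfiles/master/rpc-v4.2.2-ubuntu-neutron-nonha

# Quick sanity check that the file has content in it
cat Vagrantfile
```
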
tmux is similar to screen, if you've never heard of it. With that, I'm going to type vagrant up and it's going to start doing its thing. I'll leave that in the background and go back to the presentation, and hopefully at the end we can come back to it, it will say all done, and we can jump into the environment real quick, run some commands, and make sure it's working. That'll be the background demo. So with that, why did I create these? When I joined Rackspace about nine months ago, my OpenStack knowledge was very small. At the time I wasn't really sure: is it a hypervisor? What is it? I was very confused as to why it's all modular, why it's broken apart like this. Being a sales engineer, my job was going to be supporting this product and helping to sell it at a technical level, so I obviously needed to get up to speed very quickly. I could of course have used DevStack or RDO, but I work for Rackspace and we have our product that we sell, so I took our public-facing deployment docs, which detail how to set up the open source Chef server and then how we use Chef to deploy OpenStack. I took those documents and started reading through them, trying to understand how things worked. My configuration management knowledge was very minimal as well: I had never used Chef, and I had never used Puppet, SaltStack, or Ansible. Of course we use Chef, and Chef by itself has a decently high learning curve; there are other config management tools with a much lower barrier to entry. But as I learned Chef, those deployment docs made more sense and I started to understand them. Unfortunately, I didn't have a physical server environment to actually install Rackspace Private Cloud on to troubleshoot it, demo it, and understand it.
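Pulling the demo steps together, launching the build and checking on it later looks roughly like this (the tmux session name is arbitrary):

```shell
# Run the build inside tmux so a dropped SSH connection doesn't kill it
tmux new-session -s rpc

# Kick off the whole multi-VM build; Vagrant creates and provisions
# each VM defined in the Vagrantfile, in the order they are defined
vagrant up

# Detach with Ctrl-b d, then reattach later to check on progress
tmux attach -t rpc

# Show the state of each VM defined in the Vagrantfile
vagrant status
```
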
What I did have was my laptop, a fairly well-specced Retina MacBook Pro: 16 gigs of RAM, quad-core processor. That's enough to get a basic environment up; nowhere near what a real bare-metal environment would be, but enough to learn on, to begin to understand how everything works, troubleshoot things, do customer demos, whatever. I used VMware Fusion at the time, and I would go into Fusion and manually create each of the virtual machines I needed. You can install Rackspace Private Cloud, as well as vanilla OpenStack, all within one virtual machine, but when we do an actual production customer deployment we have controller nodes (possibly HA controller nodes), compute nodes, Cinder nodes, and a node dedicated as the Chef server. You could couple the Chef server with the controller nodes, but it makes things a bit easier to separate it out, so you have less complexity in your environment, and that's what I wanted. So every time I created a fresh demo environment in Fusion, I would create, by hand, a Chef server virtual machine, a controller node virtual machine, and a compute node virtual machine. That's very tedious: if you've ever used VMware Fusion or VirtualBox or any of the workstation hypervisors, you've got to go in there, name it, set the proper number of virtual CPUs, the RAM, storage, virtual NICs, all that, and then point it at an ISO to do a fresh install of Ubuntu or CentOS. It takes time. And as I did this, it became more and more tedious and took more and more of my time away from actually learning OpenStack, because that's what I wanted to do.
I wanted to go through our deployment docs, hit the OpenStack part, and begin to understand how this all worked and how it installed. I don't remember if I was reading a DevOps article or what, but I stumbled upon the term Vagrant. The term itself doesn't really tell you what the tool does, but I read that quick, clear, and concise summary: this is an easy way to create virtual machines from an image. I started reading up on it and saw how easy it was: there were already premade images out there; you could download one, install Vagrant, type vagrant up, and boom, you have a virtual machine you can begin doing stuff on. That was the light-bulb moment: oh my gosh, this is exactly what I'm looking for. I could now very quickly spin up a multi-node virtual machine environment using VirtualBox or Fusion and begin focusing on the OpenStack commands. So that's what I did. Within Vagrant there are two key pieces. First is the Vagrant base box, which is the image; it's been premade by somebody, yourself, whoever, and it's what Vagrant creates a virtual machine from. No longer do you have to point at an ISO and install fresh, so you don't have to wait for that whole build cycle; now it's a premade image that gets sucked in within a couple of minutes, maybe less, depending on how fast your workstation is. Vagrant then applies the necessary number of virtual CPUs, the RAM, the virtual NICs, the hostname, and any provisioning scripts you point it at; Vagrant can do post-install work with shell scripts, Ansible, Puppet, or Chef, and in the case of these Vagrantfiles it's all shell scripts, for reasons I'll detail shortly. With that in place, I had the different virtual machine definitions: one for the compute node, the controller node, the Chef server, possibly the Cinder node, and whatever else you may need if you need multiple compute nodes. So now I could use that to get an environment up quickly and begin focusing on the OpenStack commands. Great: saving lots of time, learning OpenStack, how we do it, why we do it this way, so on and so forth. Of course, I then hit the next point: I've automated my virtual machine creation, so why am I still doing these OpenStack commands by hand? I got to a point where I had learned all I was going to learn from our deployment docs; we did it one way, or maybe a couple of different ways, and I had figured out all the different ways we did that deployment and why, and I stuck with that. So I wanted to figure out how on earth I could automate the OpenStack install process, because there are particular parts of the install that require user intervention. For the most part, I started by taking all those manual steps and putting them into a shell script, because the majority of the commands you can just copy and paste into a command line; each does its thing and you go on to the next command in the process. I quickly realized there was a handful of things that needed user intervention. The first is the Chef environment file. Within Chef there's the concept of an environment, and you can have Chef environments for production, QA, dev, whatever you want. Within that environment file you can override default attributes set in the Chef cookbooks. We have our Rackspace Chef cookbooks that we use for private cloud, and they set default attributes, and I would use that Chef environment file to override the attributes specific to my environment. Every environment is going to be different: different subnets, different NICs designated for particular things within private cloud, or OpenStack in general, and that's what the environment file is for. Typically I would just copy and paste that Chef environment file from a text file into the knife environment edit command, but in this case I of course didn't want to be there to do that. Using the same knife environment command, you can use from file to point at a file, so I just took that Chef environment file, dumped it to a file in /tmp or wherever, pointed knife environment from file at it, and boom: there's your Chef environment. You can now bootstrap nodes within Chef, point them at that environment, and you're good to go. So that was the first manual step taken care of. The next one is the different SSH commands. Within these Vagrantfiles there's an order of operations to how the nodes come up. The first node that comes up is the controller node, created by Vagrant, then the compute node, and then optionally the Cinder node; the order those are created in doesn't really matter. But the Chef virtual machine does have an order of operations: it needs to come up last, because in this environment it's doing all the orchestration, if you will. It logs into each node in the environment, issues a bunch of commands, does pre-setup and post-setup, and runs its own Chef commands to get the environment set up. So it's important that it comes up last. With that out of the way, how does the Chef server log into each node in the environment? Of course we're going to use SSH keys; we don't want to deal with passwords, and in this environment we know what the passwords are. Everything's vagrant: the root user's password is vagrant and any other user's password is vagrant. It's a demo environment; you don't need secure passwords. So the first step is to create keys on the Chef server. But as you're probably aware, when you go to create that first SSH key, it's going to prompt you for a passphrase, and I didn't want to deal with that. Luckily, the ssh-keygen command has a command-line switch, -N (uppercase N), where you can specify a blank passphrase with empty double quotes, and boom, you have
your keys created, public and private, with no passphrase, so you won't be prompted for it later. Okay, so that's created; that's great. Next on the list is the Chef server connecting via SSH to each node in the environment. Because it's a fresh environment and that Chef server has never connected to any of those nodes, SSH is going to prompt you to accept the host key fingerprint for each node, and that's obviously user intervention: yes or no. So how do you get around that? One terribly insecure way is to turn off the StrictHostKeyChecking option in the SSH client config. You could do that, and it would work, but it's not a suggested method, because even though this is a demo environment, in the real world, if you connect to a server once, accept the fingerprint, and then later connect to what you thought was the same server and it prompts you again, there's probably a security issue there; it could be a man-in-the-middle attack, whatever. So not the best way to do it. The better, more programmatic way is to use the ssh-keyscan command: you run it with the IP address or hostname of each node, it echoes back the host key, and you can dump that into the known_hosts file of the user you're connecting from. And that's exactly what we do in this environment: a hosts file is laid down on every node that maps the static IPs I've set in the environment to short hostnames, and I use ssh-keyscan to hit each one of those and dump the host key into root's known_hosts file. That problem is solved. Great. So the last step is that we still don't have the SSH public key on each node in the environment, so the Chef server still can't log in. You can of course use the ssh-copy-id command to get the key over to each server, but since this is the first time you're connecting, it's going to prompt for a password. The two ways you can get around that are
to use the sshpass command, which would probably be cleaner; in my Vagrantfiles I'm using expect, where you just have to know the process and the exact prompts for expect to work for you. That's what I've used since I created these, and it works pretty well. At that point, all the SSH keys are in the environment and the Chef server can log in and begin bootstrapping the nodes. So with that, real quick, just to show you what that looks like (and this is going to be kind of a pain, let's see here): here's the Chef environment file I mentioned; that's a whole other discussion in and of itself. Moving past that, I just use the cat command, as mentioned earlier, to dump it to a file and then pull it in with knife environment from file, which you can see right here. Then there's the ssh-keygen command with -N and the empty double quotes. There are the ssh-keyscan commands; typically I use the short hostnames in the environment just because they're easier to read for newcomers, or anybody really, instead of having to memorize the different IP addresses, and you can see it's dumped into root's known_hosts file. I install expect, because the particular base boxes I'm using don't happen to have it, and you can see it sends the vagrant password to each node in the environment. So that's done for each node. Then, moving on, before we actually get into the Chef portion, there's some pre-setup for the Cinder node. You of course don't have to use a Cinder node, but typically in a Cinder environment you're going to set up a dedicated hard drive, RAID array, or partition, layer LVM on top of it, and create a volume group called cinder-volumes. In this case I can do the same thing with a file, which makes it
easier. So I'm just creating a 10-gig file, making a loop device out of it, making an LVM physical volume, and then creating an LVM volume group called cinder-volumes. For the sake of our Chef cookbooks, that's all they need to get going: when the Chef cookbooks run and get to the Cinder node, they will see cinder-volumes as a volume group and do everything needed to set up Cinder in the environment. And, this is one of the oddities of doing this in a local environment, especially with Vagrant: I put all those commands in rc.local on the Cinder node, because if you do a vagrant reload and those commands aren't in rc.local, it won't get set back up. In a typical production environment you wouldn't need that, because you'd be using a dedicated partition and LVM would be aware of all that and keep track of it. With that pre-setup out of the way, we now move into the actual Chef deployment. Looking at the top here, I have it broken up into three different nodes: the controller node, the compute node, and the Cinder node. First I point to the environment we created earlier; I just called it rpc-v4.2.2. There are five commands there. You could do all of this in one line, and I did that for a while, but I ran into bizarre issues where it would run through, the chef-client run would fail, it would run through again, and, because of the way Chef does indexing (which is what lets you do searches within Chef), it didn't know the node was there. So I have it in a continuous while loop. I don't expect it to fail, and I don't want to have to handle that; you could put in logic like "if it fails five times, stop everything", because this entire script is set to bail out on any error or nonzero exit. Doing it this way, step by step, alleviated that problem a bit. It's a little more wordy in the Vagrantfile, but for a newcomer it's just easier to read and understand. So after
the controller node is bootstrapped, I apply the different Chef roles, in this case single-controller and single-network-node. You could separate out a network node if you wanted to, but for the sake of RAM and resources on your workstation, just put it all in one. Then I sleep for 15 seconds to wait for the Chef server to catch up on its indexing, because the knife ssh command that comes next relies on that indexing (I'm searching for name:controller1). The entire environment is severely under-resourced: the Chef server, I think, is given a gig of RAM, which is not enough, and the controller node is given two gigs of RAM, which is very much not enough, so I give it time to catch up. In some cases that's still not enough, but with the continuous while loop it will eventually catch and just get going with the build. The controller node has to be done first, then you move on to the compute node, then the Cinder node, and then any other compute nodes you may need. Once that's all run through and done, and it may take anywhere from 20 minutes to an hour depending largely on your internet connection and how fast your gear is (from my laptop on a fairly fast connection it takes about 30 minutes), you have a basic OpenStack or private cloud environment to begin using. Now, some additional things that I've done. One thing I've come across with DevStack, or possibly RDO, when you do installs on your local workstation, is that when you bring up an instance you don't have connectivity out of that instance to the internet. So how do you do anything? Great, I can spin it up; what can I do with it? I can spin up another instance and they can talk to each other; okay, cool, but I want to go talk to my package manager or pull something down from GitHub, whatever. So the final steps allow you to have connectivity out from your instance to the
internet, and also connectivity back in. They let you set up floating IPs so you can connect to an instance from your workstation, without having to jump into the controller node and then, assuming the instance is in a software-defined network, jump into a network namespace just to be logged into that instance. You could of course log in from the console, but a lot of the cloud images are strictly SSH-key based, and from a console within Horizon you really can't push that key and log in, unless you have an admin password set up or something like that. Typically, in a production environment, these modifications wouldn't be here; I'm working around limitations in VirtualBox and Vagrant. So, similar to those pre-install steps for Cinder earlier, I go through (let's look at the controller node) and log into the controller node. On each node in the environment I have everything separated out onto different virtual NICs. eth0 on every node I just pretend doesn't exist; that's strictly for Vagrant to log into the node via SSH. VirtualBox has weird ways of doing networking, and I just leave that alone. eth1 is where I begin separating out the OpenStack services and APIs. eth2 is a dedicated network for Neutron GRE tunnels. And eth3 is my Neutron provider network, if you will. Think of whatever workstation you're bringing this environment up on as your router out to the internet: with VirtualBox or VMware Fusion, if you have all these different networks, it's going to create virtual adapters on your laptop with .1 addresses (40.1, 250.1, whatever). In this case I'm using 244 as the network, and the .1 address lives on the workstation I'm bringing the environment up on, which just so happens to be my desktop back in Austin. That's what I'm going to use as my Neutron provider network and for
floating IPs. So I log into the controller node, delete that IP off eth3, bring up an Open vSwitch bridge called br-eth3 (which was set up by our Chef cookbooks), and then add that IP address to the Open vSwitch bridge. You don't have to do that, but if you want connectivity on that network from within your controller node or compute node, you need that IP set; if you're just logging in from your workstation, you don't, as long as your workstation has an IP on that network. I then add the port eth3 into the Open vSwitch bridge, put the same commands into rc.local, and also sleep for a bit, so that if you do a reload of the Vagrant node it waits for the services to come up, makes sure they're up, then goes through and does all this, and then restarts a couple of OpenStack networking services to have the changes take effect. Similar to the Cinder pre-setup earlier, you don't strictly need that; again, it's only for when you reload the environment so it all comes back up properly. If you're like me and you typically bring up an environment, use it for maybe 30 minutes, and then destroy it, those aren't really necessary, but for completeness' sake they're nice to have, especially if you need to reload the Vagrant VM to forward ports. Right now, for example, if I wanted to access the Horizon dashboard from my desktop in Austin, I would need to forward particular ports on that desktop and then use SSH to get to those ports from my laptop; it becomes kind of a mess, but this lets you do that and bring the environment back up into a proper state. So those modifications are done on the controller node and the compute node. At that point, the last piece of the puzzle, for us to have instance connectivity out to the internet (there are different ways you can do this; this is just the way I decided to), is that I turned the Chef server
into a router, essentially, using IP forwarding and iptables. You can turn any Linux box into a router of sorts, and you could set aside another Vagrant virtual machine to do this if you wanted it on its own, but for the sake of RAM and CPU resources on the workstation I just coupled it with the Chef server; after it does its deployment, the Chef server isn't really working that hard anyway. So, as I mentioned: turn on IP forwarding, and forward any packets that come in from eth2 on the Chef server out eth0. Because of the way VirtualBox does networking, the only NIC on any of the virtual machines that has internet connectivity is eth0. If you've ever spun up multiple VirtualBox VMs, they all have the same IP address on that NIC (and they may all have the same MAC address), but they're isolated from each other; they can't talk to each other unless you add another NIC on the same network on each. It's kind of a pain to work around. What this allows us to do involves eth2 and that 244 network, the Neutron provider network: when I created that Neutron network within OpenStack, the gateway, instead of being .1 on 244 (which can't go anywhere, because that network has no connectivity out), points instead to the 244 address on the Chef server, which I think in this case is just .10. So when you bring up an instance on that provider network, its gateway traffic goes to the Chef server, and there are a couple of iptables rules there: a packet coming in eth2 gets masqueraded out eth0, goes out to the internet, and then comes back in properly. Now, it will only come back in properly if you set promiscuous mode within VirtualBox on the virtual adapter on your workstation. If you don't, that packet is going to go out, and I think the response is "destination host unreachable" or something like that.
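The Chef-server-as-router idea described above could be sketched as a provisioning fragment like the following; it has to run as root, the interface roles follow the talk (eth0 is the NATed VirtualBox NIC with internet access, eth2 is the 244 provider network on the Chef server), and the exact rules are an illustration, not the speaker's script:

```shell
# Enable packet forwarding for the running kernel
sysctl -w net.ipv4.ip_forward=1

# Masquerade anything leaving via eth0, so replies find their way back
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Allow instance traffic from the provider network (eth2) out through eth0,
# and allow the return traffic for established connections back in
iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

With these rules in place, instances using the Chef server's 244 address as their gateway can reach the internet through the workstation's NAT.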
And if you look up the actual definition of what that means, it's that the packet made it out to its destination, but the response can't come back in. So, with all those changes in place, that is the extent of the shell scripts within the Vagrantfile. Real quick, just to show you the three parts of this Vagrantfile: the part down at the bottom is what you need without a doubt; without it, you can't do anything. There's too much to show it all on screen, but the first piece at the top is just some Vagrant syntax. The next piece is the Vagrant base box that's going to be used; in this case it's an Ubuntu Server 12.04 box, and you could also use CentOS 6.5 if you wanted to. The next line turns off shared folders; I don't need them in this environment. Then you get into the actual virtual machine definitions. You have the controller node: you can see the hostname being set, and you can see it pointing to a particular shell script that exists at the top of the Vagrantfile. Then you have all the different virtual adapters I mentioned, and the parameters to set memory size and the number of virtual CPUs, broken up into VMware Fusion versus VirtualBox. And you can see here, for the VirtualBox one, right above the two end statements, that's the setting that turns on promiscuous mode for the virtual adapter. It has to be done here: you can't go onto your workstation, fire up a terminal, and set promiscuous mode on that virtual adapter there; it doesn't work, for whatever reason. But doing it here, with allow-all (there's also another value, allow-vms or something like that, but allow-all worked properly), it works. And if we continue scrolling through, you'll see the definitions for the compute node, then the Cinder node, and then the Chef server node, which doesn't need as many virtual adapters.
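A skeletal Vagrantfile along the lines just described might look like this; the box name, IP addresses, NIC numbering, and script path here are illustrative assumptions, not the speaker's exact file (Vagrantfiles are Ruby, so this fragment is shown in Ruby):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu-server-12.04"                       # Vagrant base box
  config.vm.synced_folder ".", "/vagrant", disabled: true     # shared folders off

  config.vm.define "controller1" do |controller|
    controller.vm.hostname = "controller1"
    controller.vm.provision "shell", path: "common.sh"        # drops the hosts file

    # eth1/eth2/eth3: OpenStack APIs, Neutron GRE tunnels, provider network
    controller.vm.network "private_network", ip: "192.168.236.10"
    controller.vm.network "private_network", ip: "192.168.240.10"
    controller.vm.network "private_network", ip: "192.168.244.10"

    controller.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--memory", "2048"]
      vb.customize ["modifyvm", :id, "--cpus", "1"]
      # Promiscuous mode on the provider-network adapter (NIC 4 here),
      # needed for floating-IP traffic to make it back into the VM
      vb.customize ["modifyvm", :id, "--nicpromisc4", "allow-all"]
    end
  end
end
```

The compute, Cinder, and Chef server nodes would get analogous `config.vm.define` blocks, with the Chef server defined last so it comes up after the nodes it orchestrates.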
And you'll notice it has references to shell scripts: the common script and the Chef script. To show you real quick what those look like, we jump back to the top of the Vagrantfile. Here's the common script, right here: it just drops a hosts file onto every node in the environment, so you can SSH into a node, or reference a node, via its short name or its IP address. And then the Chef script is what does all the deployment. Here is the curl command to pull down a shell script that installs the open source Chef server; it sets the executable bit on it, sets a URL environment variable for it to reference, and actually installs it. Then it changes into root's home directory, pulls down the Chef cookbooks GitHub repo, checks out the particular branch it needs, and gets all the submodules in place. And there's the upload of the cookbooks to the open source Chef server, the upload of the roles, and then the Chef environment file that it uses. At that point, that's everything that's needed within the Vagrantfile. Before I automated all this, it was just that bottom piece I was using, just those virtual machine definitions, and I was doing all the above stuff by hand, which obviously was extremely tedious. So with this in place, let's see how this build is doing. And of course, it doesn't work all the way. Which brings me to my next piece: the various problems I've encountered with these Vagrantfiles. These Vagrantfiles have a lot of external dependencies. I need to pull down the open source Chef server from Chef; I need to pull down chef-client from Chef, and that all comes from AWS, based on the URLs we have in here; in addition, I need to pull from a GitHub repo. All those different external things, for the most part, probably 85% of the time, work. I did this an hour ago and it worked within 30 minutes. Of course, I'm on stage and it doesn't work at all.
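The Chef-script steps just walked through could be approximated like this; the install-script URL, repository, and branch name are plausible placeholders for RPC v4.2.2, not confirmed paths from the talk:

```shell
# Pull down and run the open source Chef server install script
# (placeholder URL -- substitute the one from the actual Vagrantfile)
curl -O https://raw.githubusercontent.com/EXAMPLE/support-tools/master/install-chef-server.sh
chmod +x install-chef-server.sh
export CHEF_URL="https://chef:443"     # URL the Chef server will answer on
./install-chef-server.sh

# Fetch the Rackspace Private Cloud cookbooks and pin the release branch
cd /root
git clone https://github.com/rcbops/chef-cookbooks.git
cd chef-cookbooks
git checkout v4.2.2                    # illustrative branch name
git submodule init && git submodule update

# Push everything to the freshly installed Chef server
knife cookbook upload -a -o cookbooks
knife role from file roles/*.rb
```

After this, the knife environment from file step described earlier loads the environment, and the node bootstrapping can begin.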
And this is one of the main things I've run into: this is the open source Chef server install script, and it stalled out downloading the Chef server Debian package. I've encountered this before. Sometimes I would do a build late at night and it wouldn't work, or I would do a build on Sunday morning and it wouldn't work, typical times when you would expect maintenance to happen on all these external things. You could probably code around it. You could also pull down all those files, stick them somewhere that you know will work, and get around it that way. But the point of these Vagrantfiles is to spin up a fresh RPC environment every time, just the way we deploy it at Rackspace, because I use these for customer demos, and I use these to troubleshoot questions that I have or that come up on our community forums, whatever it may be. And when it works, you lay down the Vagrantfile, type vagrant up, go do something else, come back 30 minutes later, and the environment is there and I can begin doing whatever work I need to do. So I'm going to cancel out of that. One last bit here: depending on where this failed, you could either destroy the entire environment and just redo it, or in this case, because it failed at the Chef server, instead of destroying the entire environment I could just do vagrant destroy chef and then, once it does its thing, do vagrant up chef again. You could probably do that up to the point where it does any sort of configuration on the other nodes in the environment. If it's hit that spot, you could go in there and maybe manually back out the changes, but it would be easier to just vagrant destroy everything and then vagrant up again, and hopefully it works. We could do this again, and it could work in 30 minutes; we're, of course, not going to sit here and watch it. But that's all I have to talk about regarding why and how I made this Vagrantfile. So with that process failing me on stage, that is all I have to discuss there.
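The two recovery paths just described can be sketched as follows. The machine name chef is assumed to match the define block in the Vagrantfile:

```shell
# Option 1: the failure happened mid-configuration on several nodes,
# so start completely fresh
vagrant destroy -f
vagrant up

# Option 2: the failure was isolated to the chef server node,
# so rebuild just that one machine and leave the others running
vagrant destroy -f chef
vagrant up chef
```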
Everything I talked about will be on my website, thornlabs.net; do a search for vagrant and it should be the second link. Follow me on Twitter, and I'm J. Thorne on Freenode. Are there any questions? Yes. That could, of course, be done. Actually, some of our... oh, I'm sorry. He asked if I could do the same sort of thing using containers or Docker. You very well could. I believe some of our Rackspace developers are actually working on that for another sandbox-type environment, and they've utilized Docker, and I think in this case Linux containers. But what you can do with this, I mean, if it's a shell script, you can have it provision a Vagrant machine after the fact. You could probably spin up RDO with this; you could probably spin up DevStack with this, because all of those need some sort of base operating system or virtual machine to install onto. So, yeah. Oh, there's a microphone. I'll repeat it. Sorry, go ahead. [Partially inaudible question.] So he asked if the OpenStack APIs are the same as the Rackspace public cloud. Yeah, they are the same. [Audience member:] You used vagrant destroy chef. So, kind of two parts. One, I'm not familiar with that syntax, so could you explain it a little? And then number two, in my experience, if you do vagrant up and something stops, you can just start over again and it'll usually pick up where it left off. Is there some special reason that doesn't work in your case? It's hard to say, because I've taken those manual steps that I did before and just pasted them one for one into the shell script. I'm expecting each one of those to succeed, because if you're doing it manually, of course you go back and figure out what's wrong. In this case, because I want it as hands-off as possible, I'll probably just vagrant destroy it and vagrant up again. I'm not familiar with how it would pick up, actually, so I'd be interested to hear about your use case there.
[Audience member:] Well, often it was because I had a mistake in my script, so it would fail at step 12. Sure. And then I'd correct step 12, vagrant up again, and it just picks up where it left off. Sure, okay. So that's my experience. But also, I've seen vagrant destroy take out the whole thing, so why vagrant destroy chef? Does that take out a particular step? Correct, yeah, that takes out a particular virtual machine in the environment. In this case there were four nodes: the Chef server, or what I've designated as the Chef server, the controller node, the compute node, and the Cinder node. So vagrant destroy will get rid of all of them, vagrant destroy chef will get rid of just that one, vagrant destroy controller1 will get rid of the controller node, and so on and so forth. Okay, I've only used Vagrant on one machine, so you put like a master file over it? Yeah, within the Vagrantfile. Typically you just have the base box, and that's all you really need, plus of course the standard Vagrant syntax, but the different virtual machine definitions I have are what spin up a multi-node Vagrant environment. Okay, so was that all in the same... I'm sorry, I didn't follow. Oh, no worries. Was it all in the same Vagrantfile? Yes, everything is contained within that one Vagrantfile. Okay. Thanks, yeah, sure. And everything I showed you is up on my website and in my GitHub repo, so feel free to fork it, do whatever you want with it, make it better, send some pull requests, and kind of help me out too. Any other questions? Good, cool. All right, thank you, everybody. Go enjoy your happy hour. Thanks.
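To summarize the per-machine behavior discussed in that last exchange: in a multi-machine Vagrantfile, most vagrant subcommands accept a machine name. The node names below are assumed to match the four define blocks from the talk:

```shell
vagrant status                  # list every machine and its state
vagrant destroy -f chef         # remove only the chef server VM
vagrant destroy -f controller1  # remove only the controller VM
vagrant up compute1             # bring up just the compute node
vagrant ssh cinder1             # shell into the cinder node by name
vagrant destroy -f              # no name given: destroy every machine
```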