So, I guess as we're getting started, we can talk a little bit about who I am and my history. I'm Cody Bunch, Principal Architect, Private Cloud, with Rackspace. I don't necessarily know what that means on any given day. I spent the last five years in the VMware world, building out the VMware product at Rackspace and making it what it is today. Six or eight months back, I had the opportunity to jump wholesale into the OpenStack world, and my home lab sits entirely unused now because I'm doing OpenStack things. As part of the VMware world, though, I started doing these vBrownBag podcasts, which are techies in the community talking to other techies in the community about what we find interesting. A couple of weeks back, I started them up for OpenStack as well. So if you're interested in podcasting, or you have something to say but didn't get your session accepted here, come see me after the show and we'll totally get you on the podcast.

I guess it's time to start. This is totally not the hands-on with Quantum and Cinder session. It was, but, you know, conference internet, so: oops. What we will do is keep going through as if it were functional, and if you do manage a better internet connection than I did in the last 20 minutes, feel free to follow along. The prerequisites are a laptop and a little bit of RAM. The Vagrant scripts I have you using need a little over six gigs, give or take; not all at once, but you need at least that much to get it all started or things get a little weird. You're going to need Vagrant from vagrantup.com, and then VirtualBox, or VMware Fusion with the VMware Fusion provider. To get started, I put all of our scripts into a Git repo. I'll probably update that after this to take out the conference-specific things. I did some bad things for the networking, bad, bad things. It was totally going to work around the really bad Wi-Fi, except the really bad Wi-Fi was worse than I anticipated.
So once you pull that down, you cd into the directory and you vagrant up, and that kicks off a build. That build looks a little... yeah, there we go. That build looks a little bit like that. There are three nodes in the environment. The first one is a controller node. That controller runs Horizon, Keystone, Glance, the OpenStack Networking API bits, and the Open vSwitch pieces that are required. The second node that gets built is a compute node; it's generically just a Nova compute node with enough of the Quantum and Open vSwitch agents to make things work. The third node is... I call it iSCSI; it's the Cinder node. It's running Open iSCSI, TGT, the Cinder services, and so forth.

What you're looking at here in the brackets: the first number is the number of those nodes you want to build, and the second number is the last octet of the IP address for that node. So if you want to scale from one compute node to 30 compute nodes, you change that one to a 30 and watch your laptop go up in smoke. I wouldn't necessarily recommend doing it, but if you have a robust enough environment, say you've installed your favorite VirtualBox on a server or something, you can actually start to scale this out and see how it operates at size. You probably want to change the 201 to something that's not going to collide with the iSCSI box as well.

Do you want me to leave that up there for a minute? There we go. As I said while the room was filling up, the first rule of live demos is: don't do live demos. The second rule of live demos is: if it doesn't blow up in your face, you're not live-demoing hard enough. Everybody got this far? The Git repo may take a minute; it may be beyond slow. I'm going with the assumption that it's not going to work, but these slides will be available on SlideRocket, and the conference video will be posted, so you'll be able to follow along. And then the...
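The bracketed [count, last-octet] scheme described above can be sketched as a Vagrantfile loop. This is a hypothetical reconstruction, not the repo's actual file: the node names, the precise64 box, the script filenames, and the 172.16.0.x addressing are assumptions based on what's described in the talk.

```ruby
# Sketch of a multi-node Vagrantfile: for each node type, build `count`
# VMs starting at the given last octet of the IP address.
nodes = {
  'controller' => [1, 200],   # [count, last octet]
  'compute'    => [1, 201],   # change 1 -> 30 to scale out (and melt a laptop)
  'iscsi'      => [1, 202],
}

Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  nodes.each do |name, (count, octet)|
    count.times do |i|
      config.vm.define "#{name}#{i + 1}" do |node|
        node.vm.hostname = "#{name}#{i + 1}"
        node.vm.network :private_network, ip: "172.16.0.#{octet + i}"
        node.vm.provision :shell, path: "#{name}.sh"  # the per-node Bash script
      end
    end
  end
end
```

Scaling compute to 30 nodes is then just changing `[1, 201]` to `[30, 201]`, with the caveat from the talk about making sure the compute range doesn't collide with the iSCSI box's address.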
Instead of using Chef or Puppet or whatever to install OpenStack, we did this in Bash scripts so that it would be easy for you to follow when you got back to your office. You would know what I'm doing, where I'm doing it, and why, and I didn't necessarily make a DevOps tooling choice on your behalf. This half of the room may be Puppet, the other half may be Chef, and everybody in the back may use something else entirely. So Bash was generic enough to work for... we'll get there in a minute. Yes, they're there, and I can put a link up after the fact.

Is that okay for everybody in the back? That may be an eye exam for some of you; that's about as big as it goes and still fits it all on the screen. What's happening there is that the controller node is installing a few dependencies. It installs the Linux headers and build-essential packages, and that's because Open vSwitch compiles a kernel module for you. The second line there is actually installing the Open vSwitch and Open vSwitch datapath bits. The fifth line in is installing the actual Quantum bits: the Quantum server and the Quantum plugin for Open vSwitch. And then we finish out by installing all the various agents. The last section there is creating the database for Quantum to use. We're creating a MySQL database on the controller node with, like, you-probably-want-to-change-those-passwords kind of passwords, which totally threw me for a loop during the install. I was trying to set things up as quantum/openstack; I didn't even read my own script, where it's quantum/quantum. So, like, don't waste four days when you can just go back and read what you've done.

Oops. Yes. There's a... I can't really hear you. There's a microphone just next to you there. So, there's a hierarchy to this: the Vagrantfile calls individual script files for the nodes, and this is in the controller.sh file.
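The controller.sh steps just described might look something like the fragment below. Treat it as a sketch: the exact package names are assumptions based on Ubuntu 12.04 / Folsom-era repositories, and the passwords are deliberately the weak demo defaults the talk warns about.

```bash
# Dependencies: headers and build tools, because Open vSwitch
# compiles a kernel module during install.
apt-get install -y linux-headers-$(uname -r) build-essential

# Open vSwitch itself plus the datapath (DKMS) bits.
apt-get install -y openvswitch-switch openvswitch-datapath-dkms

# The Quantum server and its Open vSwitch plugin, then the agents.
apt-get install -y quantum-server quantum-plugin-openvswitch
apt-get install -y quantum-plugin-openvswitch-agent \
    quantum-dhcp-agent quantum-l3-agent

# Create the Quantum database. Change these passwords for anything
# that isn't a throwaway lab -- note it's quantum/quantum here, not
# quantum/openstack (the four-day lesson from the talk).
mysql -u root -popenstack -e "CREATE DATABASE quantum;"
mysql -u root -popenstack -e \
  "GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'quantum';"
```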
That's part of that Git bundle. And this is also just a high-level summary of the important bits for this session. There are other things that happen in there for Glance and Keystone and Horizon, but I didn't want to turn this into a how-do-I-install-OpenStack class. I figure you've all gotten at least that far. I hope. I totally hadn't gotten that far before this, by the way.

So, no, it's... that's the... if you comment that line out of... so the comment is that the proxy's not working. If you open up common.sh, c-o-m-m-o-n dot sh, and comment out the proxy line, and then restart Vagrant, it'll pick up and should start working. It'll just be really slow. Yes. Also, in controller.sh there's a bit where I wget the CirrOS image as it's building; you can probably comment that one out as well. Again, these are the really nasty things I did to make it all work on the conference Wi-Fi.

Also in that controller.sh file, what you're looking at here is the section of code that configures the Open vSwitch plugin. It just injects all of this into that file. What it's doing is setting up the SQL connection, setting up your integration bridges and your tunnel bridges, whether we want to use GRE for tunneling; this is where you're setting all of those Quantum options. I can't really... everyone's staring hard at their laptop, so I can't gauge if we're ready to move on or not.

So the question is: Open vSwitch versus Linux Bridge, is there a preference? Originally, when the session got submitted, it was to work with a bunch of the enterprise vendors' plugins as well. And so the preference is whatever works for you or for your environment. If you have a huge investment in Brocade or Arista or Cisco or whatever, work with their plugin if that's the route you want to go. If you're doing this in a lab, Open vSwitch worked for me. I'm not going to dictate what should work for you, but it was relatively simple to get through.
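The Open vSwitch plugin options described above live in the plugin's ini file. An illustrative fragment, with the controller standing in at 172.16.0.200 and the demo credentials from earlier (both assumptions for your environment):

```ini
# ovs_quantum_plugin.ini (sketch): SQL connection, bridges, GRE tunneling.
[DATABASE]
sql_connection = mysql://quantum:quantum@172.16.0.200/quantum

[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 172.16.0.200
enable_tunneling = True
```

Swapping in a vendor plugin (Cisco, Arista, and so on) means replacing this file and its options with that plugin's equivalents, per that vendor's documentation.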
There are a number of mostly decent guides out there; what I cobbled together here was, like, 15 different guides that almost worked. But Open vSwitch was a relatively simple piece to get going.

So the question is: is it better to install the network node on the controller or elsewhere? In the context and constraints of the lab environment, where my laptop can only handle a certain number of VMs, I installed it on the controller node. As you scale this up into a production world, for resource constraints, HA, and so forth, you're probably going to want to place that onto a different node. Again, it will depend largely on what your environment is. Is it "I'm getting my feet wet with OpenStack," or are you deploying 10,000 physical nodes to run 300 million instances? Your decisions will vary based on what your implementation details end up looking like.

So that is configuring the Open vSwitch Quantum plugin. Oh, wow. I can't slide very well today. The next bit there is the api-paste configuration. This is where we set Quantum up to talk to Keystone. This one also threw me for a couple of major loops. The middleware auth token bits: it took me three days to find the guide that had that in the appropriate place in the configuration. The auth host, where it's in brackets there as my IP, that is the IP of the controller node. Since we're doing this in a controlled environment, it was just easier to use that variable. You'll want to change that to wherever Keystone is running in your environment. You give it the tenant name, which for this is my service tenant, and then your admin user and password. So when you're setting up Keystone, and we'll get to that in a couple of steps here, you make that user an admin of this particular thing.

Really having a hard time. It's going to get even worse when we get deeper into the deck, right? The next configuration is the Layer 3 agent.
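The api-paste section being described might look like this. It's a sketch of a Folsom-era authtoken filter; the filter_factory path and the demo credentials are assumptions, and auth_host should point at wherever Keystone actually runs.

```ini
# api-paste.ini (sketch): the middleware auth token bits that took
# three days to find the right home for.
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 172.16.0.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = openstack
```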
And again, we're configuring that to talk to Keystone as well. What you've got there is the interface driver; that is where you change out your plugin. So if you're working with NEC switches or Arista or Cisco, or just a different Open vSwitch thing, that is where you change it. A lot of them at this point don't necessarily have a standardized installation procedure, so it will vary from vendor to vendor and plugin to plugin, as will the bits that go alongside this. And then again: what is the region, what is your service tenant, what are your admin users and passwords, and where is Keystone running? Yay, I didn't screw up the slides that time.

Metadata: again, configuring it for Keystone, and also telling it where to find the Nova metadata service. All right, straightforward so far. Let's do a quick show of hands: who's already installed and worked with Quantum? Okay. Show of hands, who's worked with Cinder? It's about an even split. So half of you are here for Quantum and will get really bored when the Cinder stuff starts, and the other half are really bored now and will get excited when we get to Cinder. We're almost there.

Last but not least is the quantum.conf file. This actually tells Quantum where to find Keystone; again, fairly straightforward. My IP is where Rabbit is running, where Keystone is running, what I'm binding to, and so forth. And because it's all running on the controller, it's all set to my IP. You'll want to change those for your environment, or to scale this outside of your laptop. And then the last little bits here... oh, hmm. I thought I got into actually setting up the Keystone users and whatnot; I'm sorry about that. In the controller.sh file, there's a ginormous Keystone section. Feel free to review it. It is borrowed almost wholesale from Kevin Jackson's OpenStack Cookbook.
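The L3 and metadata agent configs just walked through might look roughly like this; the driver path matches Folsom-era Quantum, and the IPs and credentials are the lab's assumed defaults.

```ini
# l3_agent.ini (sketch): the interface_driver line is what you swap
# out for a vendor plugin (NEC, Arista, Cisco, ...).
[DEFAULT]
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
auth_url = http://172.16.0.200:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = openstack

# metadata_agent.ini (sketch): same Keystone settings, plus where to
# find the Nova metadata service.
# auth_url = http://172.16.0.200:35357/v2.0
# nova_metadata_ip = 172.16.0.200
# nova_metadata_port = 8775
```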
He's got that out on Amazon, but you can totally pull it out of my Git repo as well. The last thing we're doing here is restarting all the Quantum services. We restart Open vSwitch, and we create a couple of bridge interfaces. And then, in theory, that all just kind of magically works. Give me just a moment; that coffee is terribly bad.

Okay, so: Cinder. In our VM environment, we've got three nodes: the compute node, the controller node, and the Cinder node. The configuration gets a little different for this than it did for Quantum, because we have to do something on each node to make it work. So this is going to flow from the controller outward.

On the controller, we create the Keystone service for Cinder and configure what that actually looks like. Once we create the service, we get the UUID of the volume service we've created, store it in an environment variable, set the public and admin URLs, and then the final command there actually creates the endpoint in Keystone. The reason 172.16.0.202 is there, instead of my IP like we saw prior, is because that is where the Cinder services will end up running. There's a little bit more that goes into getting Keystone going for this. You have to create the user, and once you have the user, you add the admin role in the service tenant to that user; that's pretty much what that's doing. The Cinder user ID line gets the UUID, stores it, and then uses it in the next line there.

Additionally, and we're still on the controller node, we set up a Cinder database. This is another one where you'll spend four days troubleshooting if you get your passwords wrong. Yeah. Too much caffeine and staring at the same eight lines for three days, right? Yeah.
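The Keystone steps for Cinder described above, sketched with the Folsom-era keystone CLI. The awk UUID extraction, the 172.16.0.202 endpoint address, and the `$SERVICE_TENANT_ID` / `$ADMIN_ROLE_ID` variables are assumptions standing in for values the real script would already have:

```bash
# Create the volume service and capture its UUID.
CINDER_SVC=$(keystone service-create --name volume --type volume \
  --description 'Cinder Volume Service' | awk '/ id / {print $4}')

# Create the endpoint, pointing at the Cinder/iSCSI node, not MY_IP.
keystone endpoint-create --region RegionOne --service_id $CINDER_SVC \
  --publicurl   'http://172.16.0.202:8776/v1/%(tenant_id)s' \
  --adminurl    'http://172.16.0.202:8776/v1/%(tenant_id)s' \
  --internalurl 'http://172.16.0.202:8776/v1/%(tenant_id)s'

# Create the cinder user, then add the admin role in the service tenant.
CINDER_USER=$(keystone user-create --name cinder --pass openstack \
  --tenant_id $SERVICE_TENANT_ID | awk '/ id / {print $4}')
keystone user-role-add --user_id $CINDER_USER \
  --role_id $ADMIN_ROLE_ID --tenant_id $SERVICE_TENANT_ID

# And the Cinder database -- the get-your-passwords-right one.
mysql -u root -popenstack -e "CREATE DATABASE cinder;"
mysql -u root -popenstack -e \
  "GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';"
```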
So just breathe in, breathe out. If it's not working, come back 20 minutes later and, like, oh, there we go. All right. I believe we're still on the... no, we're now on the iSCSI node. On the iSCSI node, we're going to install some dependencies. Again, we're installing the headers and build utilities because... where is it? One of the things in the next line for the Cinder bits actually requires them to build a kernel module; I don't remember which one exactly it was. So you install the various bits there to let the next apt-get line start working. The next one installs the Cinder API, the Cinder scheduler, and the Cinder volume service. It then installs Open iSCSI, the Cinder client, and TGT, which is how the Cinder rootwrap manages the iSCSI connections for you. And then we restart the Open iSCSI services.

Then there's some configuration there. What that's doing is, again, some terribad stuff for the networking. It's trading out the... we're listening on localhost... to wherever your controller is actually going to be. In this tiny environment, that was 172.16.0.200; in your environment, that may be different. It's trading out the service tenant variable for the actual service tenant we create in Keystone, and also the user and password there. So that's everything that's going on there.

All right, some additional configuration: cinder.conf. cinder.conf was a fun one. The first bits set you up for connecting to SQL and, you know, where's my rootwrap configuration and so forth. The iSCSI helper line there: that's where you tell Cinder to use that TGT package we installed. If you're doing this by hand after the fact, instead of using the Vagrantfile and scripts that we've built, and you have, what was it, the Linux iSCSI target installed?
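An illustrative cinder.conf along the lines described, with the controller assumed at 172.16.0.200 and the demo password; the exact option names are Folsom-era and may differ in your release:

```ini
# cinder.conf (sketch): SQL, rootwrap, the TGT helper, and Rabbit.
[DEFAULT]
sql_connection = mysql://cinder:openstack@172.16.0.200/cinder
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_group = cinder-volumes
rabbit_host = 172.16.0.200
```

The `volume_group` name here has to match the vgcreate that comes later, and `rabbit_host` has to point back at the controller, since Rabbit is not on the iSCSI node.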
This will totally break on you. You have to apt-get purge the Linux iSCSI target package to get TGT to function, or vice versa, and you have to change the configuration variables accordingly. Or you'll be up at 3 a.m. in the hotel room pounding your head on your laptop. What else did we change here? The rabbit host: because Rabbit is running on your controller, you change that out to where your controller is. So, we're on the iSCSI node, and you change that to point back to the controller.

All right, bunch of blank stares. It's, like, really intense. There we go, there's a question, just back and to the left there. Is a volume group necessary to be configured? Ah, yes. We get into actually making the volume group down the line here. Because we're using LVM, we tell Cinder where to find that LVM bit. Does that make sense? It may not be necessary for other plugins, but for the way we're doing it, that's what we had to set it up for. The volume group name has to match what your actual vgcreate was. And you'll see that in a... I actually think that might be... yeah, that's next.

So, last little bit: you sync the database, and you create a loopback file system. At least for this environment you do; if you've got this running somewhere real, you're probably not going to want to do this with loopback file systems. That would just be a bad idea. The way we're doing it doesn't persist between reboots. So if you vagrant up once, it'll work; if you restart the iSCSI VM, it may not work again. But Vagrant's really cool: you just vagrant destroy and vagrant up, and it builds the environment again. Yeah, for the loopback file system, we're creating it with the cinder-volumes file that we fleshed out before. We pvcreate, we vgcreate, and that's where the configuration from this slide becomes relevant. And then finally we restart all the Cinder-related services there.
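The db sync, loopback file, and LVM steps might be sketched as below. The 2G size and file path are assumptions; as the talk says, this is fine for a disposable Vagrant lab and a bad idea anywhere real, and it won't survive a reboot.

```bash
# Sync the Cinder database schema.
cinder-manage db sync

# Create a sparse backing file and turn it into a loop device.
dd if=/dev/zero of=/opt/cinder-volumes bs=1 count=0 seek=2G
LOOP=$(losetup --show -f /opt/cinder-volumes)

# LVM: the vgcreate name must match volume_group in cinder.conf.
pvcreate $LOOP
vgcreate cinder-volumes $LOOP

# Restart the Cinder-related services.
service cinder-api restart
service cinder-scheduler restart
service cinder-volume restart
service tgt restart
```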
Okay, so now we're moving out to the compute nodes. Yeah, I was in that kind of mood last night. I'm actually still in that kind of mood now that the Wi-Fi is a little angry at me, but whatever. All right, the compute nodes. What we just walked through were some of the highlights of iscsi.sh; this is now in compute.sh, and it runs on each compute node. This installs the Linux headers and build tools again, because the Quantum bits will need a kernel module. It then installs the Open vSwitch bits for you, creates your bridges, and installs the... wow. It installs the Quantum plugin and everything for you, as well as the Cinder client. I don't know that the Cinder client is necessarily useful on all your compute nodes; it was useful for me in troubleshooting this environment last night, though. So if you're using this environment again down the road, it may be useful to have there, but when you get to production, that's probably something you're not going to want to do.

Sorry, that's another eye-exam slide. There's a lot of stuff going on there, and it wasn't really easy to break into two slides. In nova.conf, there are two sections that are really important. On the Cinder side, it's critically important that you get the enabled APIs line correct. If you don't get that correct and it defaults to include the Nova volume API, you're going to spend hours hating yourself. So that line is important. The iSCSI helper, tgtadm, is also important. The rest sets up the actual driver you're using and so forth. And then the Quantum stuff: this tells it where the Quantum API is running, where to auth against and so forth, and then it does a whole lot of really fun stuff to get the OVS bridge driver and interface driver up and running for you. So you do have some Open vSwitch bits: we're installing the datapath and Open vSwitch, and there should be a line that actually installs the Open vSwitch agent. I may have that wrong.
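The two important nova.conf sections might look like this. It's a hedged Folsom-era sketch, with the controller assumed at 172.16.0.200; option names moved around in later releases, so check yours.

```ini
# nova.conf (sketch), the Cinder side: enabled_apis is the line that
# bites you if Nova's own volume API stays on.
enabled_apis=ec2,osapi_compute,metadata
volume_api_class=nova.volume.cinder.API
iscsi_helper=tgtadm

# The Quantum side: where the API is, what to auth against, and the
# OVS bridge/interface driver plumbing.
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://172.16.0.200:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=openstack
quantum_admin_auth_url=http://172.16.0.200:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
```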
So we do install Open vSwitch, and then here we configure nova.conf to actually use the Open vSwitch plugin. We'll leave that one up for a second, because that is a really dense slide. Excuse me while I choke down my terribad coffee.

I don't know off the top of my head if that is 100% necessary; it was in eight out of the 13 guides that I followed. So, actually, who are the Quantum experts in the room? Who's touched Quantum at least once before? Okay. He didn't do it with the firewall; it didn't work very well. I saw another hand over here. Come on, don't be shy, let's all help each other. Okay. So you need the firewall for the NAT redirect nonsense that this does. Is it okay to call that nonsense? Because there were a lot of words there, and that coffee is really bad. For the NAT redirect to get to the metadata server, you need the firewall driver to manage the iptables rules for you. Does that answer your question? Okay, moving along.

I'm just going to stand here and be quiet for a minute, because we have a whole other hour and I've maybe got one more slide. I can't believe you all let me talk that fast; somebody didn't give me the slow-down, you're-going-too-fast-for-the-audience gesture. There have got to be some more questions, maybe. Yeah, I guess they give us the hour and a half to actually vagrant up and then wait for it to work. Had I not had to abuse this into shape over the last couple of nights... there are a couple of... actually, see me after the fact, or check the SlideRocket or whatever for this, and I will put up the links that I followed.

Oh, and then the last little bit that threw me for a loop here was telling Cinder, or Quantum rather, where Rabbit is. It's like everything: if Rabbit is running on something that is not your localhost when you're configuring these things, you need to tell it where it is, or you're going to have a bad time. Yeah, so there we go. That was the if-we-get-time part, right?
So if we get time, we'll show this thing actually working. Unfortunately, I was getting a whole bunch of... like, that's what happened.

So, NFS versus iSCSI? That was a few questions in one. The first one is how you change from using the Linux iSCSI target to something else, be it an enterprise iSCSI or NAS device: you'll trade out your driver accordingly. On the Rackspace Knowledge Center, for the private cloud product, we have a white paper for both NetApp and EMC that describes how to do this with their storage appliances. If you're going to trade this out for something like FreeNAS and its NFS, I believe there's an NFS Cinder driver; I've not had the opportunity to work with it yet. But you will trade out... yeah, I'm looking for those couple of lines for you. You'll trade out the volume driver line here, on each compute node, and then in here you'll also trade out what the iSCSI helper is. And that is in my Keystone endpoint bits: in iscsi.sh, and I believe at the endpoint... there's the endpoint. That's where we specify the IP address of where the Cinder services and whatnot are running, as we create that endpoint.
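Trading the default LVM/iSCSI setup for the NFS driver mentioned above would be a cinder.conf change along these lines. This is a hedged sketch: the driver path shown is the Grizzly-era name (earlier releases used a different module path), and the shares file location is an assumption.

```ini
# cinder.conf (sketch): swap the volume driver for the NFS one and
# point it at a file listing your NFS exports (e.g. a FreeNAS share).
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = /var/lib/cinder/nfs
```

With a driver like this, as discussed in the talk, volumes become files inside the NFS mount on the hypervisor rather than LVM-backed iSCSI targets.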
So in this environment, that's the address where we're running iSCSI; in your environment, you'll trade that out for wherever that is running. I was just saying that with the NAS driver, you're effectively just making a local mount on the hypervisor, and then the volume creates are just files within that NAS mount that then get mapped to the VMs as they get created. Thank you very much. Yeah, so I'm by no means an expert, no means an expert, in any of this. I just cobbled it all together and got it working; I have worked with the EMC and the NAS drivers beforehand. The difficulty in a room like this is that we've already got a hard enough time getting the actual compute nodes and controller nodes to build; imagine trying to get 200 virtual EMCs to build and then configure those. We'd probably blow a whole day and a half actually just downloading that image.

So, okay, we totally skipped over this section. Once the environment comes up, if you get the environment to come up, you can access Horizon at that address, the 172.16.0.200, slash horizon. Your admin user and password are admin and openstack. Or, if you actually just want to log in the command-line way, vagrant ssh and then the name of the node will log you in as the vagrant user, and then you can sudo to root. There is a stack.rc file in there, so you source that stack.rc; it has all the environment variables you need to actually execute cinder list and so forth. To access the Cinder node individually, to troubleshoot or to see everything that we've done actually in action: vagrant ssh iscsi, and you can operate it the same way. Same goes for the compute node: vagrant ssh compute.

That's it. Like, I'm sorry that, well, the conference Wi-Fi was a little weird. I have another one of these already; is it on? I'll put the URL up here in just a moment. I think we have a question. That'd be awesome. Was that also your question?
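The access steps above, as a sketch; the node names follow the talk, and the stack.rc path is an assumption (Vagrant shares the project directory at /vagrant by default):

```bash
# Log in to the controller as the vagrant user, become root,
# and load the OpenStack credentials.
vagrant ssh controller
sudo -i
source /vagrant/stack.rc   # assumed path -- wherever stack.rc lives

# Now the CLI clients work:
cinder list                # talk to the Cinder API
quantum net-list           # and the Quantum API

# The other nodes work the same way:
vagrant ssh iscsi
vagrant ssh compute
```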
Just an opinion, actually, a question for your opinion: the use of Open vSwitch versus enterprise, you know, hardware networking in real production. So that is totally an it-depends question. There are so many varied facets of that: do you want a vendor you can call up and choke when it breaks, or do you have the engineering staff and skills in your organization to manage Open vSwitch at scale? And what does that scale mean for your environment? Does it mean 10 or 15 nodes, or does it mean 10,000 nodes? So you'll really want to adjust that decision based on what your requirements are. Sorry for the non-answer, but, like, it's complicated.

Okay, so the follow-up question there was: what is my experience with the enterprise bits? I haven't worked with the Cisco plugin directly. Mrs. Eggley, can I talk about who we're working with? Is that allowed? Okay, so we're working with a number of other enterprise vendors; some may or may not be in the room, and if they're hearing me say this, they'll probably find you after the fact. I work for the private cloud program at Rackspace, and actually for the technology alliance partner subset of that program, so I've worked with that many enterprise plugins so far, and they're all in varying states of readiness. Some are just converting their stuff from Nova network to OpenStack Networking, sorry, Quantum, drivers; others are writing from scratch. Some provide more features than others; some require weird and wonderful networking topologies to actually make them function. So your mileage may vary. Having worked with all of them, I can't necessarily say one is better than the other; they all have their own weird and wonderful quirks.

Any more questions? Just going to stare at me and make me hold this microphone for another hour? Oh, he's got one back there. Oh, the loopback device. Yeah, there is indeed. The reason I
didn't include that in this is that my thinking behind Vagrant environments is that they are inherently disposable. You're not going to use this Vagrant environment for a production build, and you probably also wouldn't use loopback devices to make iSCSI things in your production build. The idea behind Vagrant is you type vagrant up, walk away, get a terribad Americano, and half an hour later you come back to this working environment. So having that persist between reboots is not necessarily something you need to worry about. If you do need to reboot the iSCSI node that we built in the environment, you're going to want to use Vagrant to do that: you would vagrant destroy iscsi and vagrant up iscsi, that individual node, and it would rebuild it exactly the same way.

Speaking of which: if any of you managed to get the Git repo checked out, you probably just want to check it out again, like, 45 minutes to two hours from now, once I've had a chance to remove the terribad network configurations, so that you can actually play along at home, or on the hotel internet, or wherever. I can also be reached, I'm cody.bunch at rackspace.com, bunchc at gmail, if you have any questions after the fact. And then I owed somebody the SlideShare link; there we are. So: what's in the repo has some really bad things in it to make it work for the conference. If you check the repo out in a day's time, I will have removed all the terribad stuff, and the repo will work for you when you get back to your office and start playing around. It will probably even work in your hotel room this evening after the Rackspace party, although after the Rackspace party you may or may not want to do this. Thanks, everyone.

There's a question. So, not in this, but in the slide: it's another one that's going to depend on your environment. I used it here because it's easier for me to conceptualize the tunnels and the networks between the hosts and everything. That's... I wish I had a better
answer than that, I'm sorry. So thanks, everyone, for coming to my party. One more? I'm getting there; I want to let the people who are leaving leave, and then I'll get the URL up. Totally turned around as I called them out. So I'll watch the man on stage operate his web browser and see if we can get some background music turned back on. Can we get some Dethklok or something? Maybe some Maiden? Techno's not right for the my-demo-just-totally-failed-in-front-of-200-people mood. There you go. So yes, it'll be in the README in the repository. So in that Git repo, in about two hours' time, and on the SlideRocket, I will have removed the networking stuff so that this will work at home. There you go. Nope, you need a microphone; I'm totally going to walk you a microphone. Yeah, kill 12 seconds when we have an hour left.

So, I was just wondering if there's a quick way to get the precise64 box without having to go through the network. Right, what's going on? There was a lot of chaos behind me. The question was whether there's a quick way of getting the precise64 box without going through the Wi-Fi. Um, somebody's going to have to go through the Wi-Fi once, right? But I figured you probably already have it, or somebody does. So I've got the VMware box, and, like, I don't know that there's a Windows VMware provider yet, but I've got the box for VMware Fusion; Vagrant charges an extra license fee for the VMware Fusion thing, though. "I'm interested in the VirtualBox one." Oh, this fellow here in the stripey shirt in the front row has the box. The VirtualBox box. The VirtualBox box box box. There we go, there you go. So we'll just go completely ad hoc at this point. If you want to stay and work with it, we've got some time; we'll try to get the VirtualBox box passed around. Photobombing my own photo. I'm not even sure how to turn that one off; there does not appear to be a switch, right on the bottom, right?