Welcome, Tim Potter talking to us, OpenStack and non-developers. OK, thank you. Take it away. Sure, thanks. OK, thanks for coming along, everyone. I hope you've all been enjoying the conference. So my name is Tim, and I've been working at HP with and on OpenStack for nearly five years now, which is a scary amount of time. And today I'm going to be talking about building a personal cloud with OpenStack. So this talk is aimed at a slightly different audience than other OpenStack talks that you may have been to. And the title kind of gives it away: it's OpenStack for Non-Developers. So my big idea is that sometime last year, OpenStack hit an inflection point where it's no longer solely the domain of developers to be able to install and use it. So today I'm going to explain how someone who isn't an OpenStack developer, someone who hasn't got deep domain knowledge of how OpenStack works, someone with only basic sysadmin and networking knowledge, can install, maintain, and use a small OpenStack system on very modest hardware. Now this is actually super exciting. I find this super exciting because OpenStack is just a fantastic tool for getting things done. And it's actually pretty easy to run up a big bill on cloud services like Amazon or HP Cloud or whatever. And that can be OK if you're using it as part of a business or you have money coming in. But if you're at home and on a budget or whatever, then having a personal cloud that you can use for free is just a fantastic resource. So OpenStack is a hot new technology, which is kind of an understatement, really. I mean, there's a lot of hype about it in the tech press and the media, but also among general tech users. So I was at the dentist the other day, and you know what it's like, I mentioned something about cloud and cloud services and then had to get through 10 minutes of being barraged with questions about what cloud was and how it worked. And this is from my dentist.
So perhaps I'm showing my age a bit here, but all the energy and excitement around OpenStack really reminds me of Linux itself about 15 years ago. So it's something that's taken the IT world by storm. And it's kind of upset a lot of existing products and companies' plans. And lots of big companies are announcing that they're doing something with OpenStack. And everyone's joining in, contributing code and money, documentation and whatnot. And of course, everyone's talking about OpenStack at conferences. I counted at least eight talks this year that were directly about OpenStack. We're all here at LCA because we love open source and we like building things with it. We're all here because we love tinkering. And I'm going to say that this is one of the best ways to learn about something. So typically someone like you or I would download the latest version of something, stick it on a machine, see what it does, break it, reinstall it again, and then think about how we can use it in our lives at home and at work. And this is what I want to be able to show you how to do with OpenStack. So right now there are about a half dozen or so OpenStack distributions that you can download and start using for free. Hewlett-Packard has HP Helion, which I highly recommend. Rackspace have the Rackspace Private Cloud. And Mirantis has a product called Mirantis OpenStack, just to name a few. So these products are all great, but they operate at too high a level to do serious tinkering and messing around with. So most OpenStack distribution-style products are very much all-in-one downloads, and you kind of install that image directly onto a server. And I think skipping over that initial install phase cheats you out of an opportunity to learn what all the bits of OpenStack are, how they fit together, and how they break. So I think you're going to learn a lot more in a build versus a buy situation.
So there's a lot of piecemeal bits and pieces of information available on the internet about installing OpenStack, in the form of blog postings, answers on Q&A sites, tutorials, and documentation by vendors on getting various situations working. But bloggers and people like that usually only document something that they've done, some complicated situation that they've solved or something that they've got working. And they kind of cut and paste a bit of configuration file and then say what they did. So it's a pretty disjointed experience, just kind of coming in and trying to work out what OpenStack is and how to get it working. So what I believe is missing is some basic getting-started information for someone who just wants to kick the tires. So what about DevStack? If you've dipped your toes into this OpenStack thing already, you might have heard of DevStack. So DevStack is just a shell script that both documents and automates how to configure a complete OpenStack system. But it's geared more towards developers, and particularly the developers of OpenStack itself. So it really isn't suited at all to production deployments or long-term installs. My main problems with DevStack are that it doesn't always run to completion successfully, because it's under constant development itself and it's got to cope with a lot of different situations. And it doesn't survive a reboot. So if something happens to your server, your DevStack system doesn't come back up; you've got to run some scripts to get it going again. And it assumes you've already got quite a lot of OpenStack knowledge before you start. But the good news is you can install OpenStack at home. What I'm presenting in the next section is a straightforward technique for installing OpenStack at home on a spare PC. So you're not gonna need huge amounts of RAM or a name-brand rack-mounted server. Just a regular old desktop machine with a single network card will do.
So I developed this talk on a five-year-old HP workstation that I had lying around with only 12 gigs of RAM in it. So we're gonna be using vendor-supported packages from a major distribution, and standard distribution tools and repositories that are actively being developed and maintained. And at the end, you'll have a working OpenStack service that you can use as a springboard to get started in cloud computing and OpenStack. So there are so many benefits to virtualization and cloud computing that you're probably sick of hearing about them, so I'm not gonna talk about that. I started off writing this talk from the point of view of, hey, this is something cool, you might wanna try it out, it could be fun. And then it went to, wow, this is actually really, really handy. And really, more than handy, it's insanely useful and everyone should have one of these. So I won't go back to the buzzword bingo slide at the start, but having local access to a free method of virtualization and managing virtual machines is a game changer, just personally and for getting your own work done. You get all the regular benefits of cloud computing: it's cheaper, it's faster, it's more convenient. And having a personal cloud also gives you an extra freedom that you might not have considered before, and that's the freedom to experiment. You can do lots of things on a public cloud service, trying out new software and new operating systems. And if you do that on a public cloud such as Amazon or the HP public cloud, that's reasonably cheap, it's 30 cents an hour, something like that. But it's not really that cheap for personal mucking around. So I managed to rack up a $1,000 bill on the HP public cloud just writing this talk, just firing up stuff and forgetting about it, firing up 20 virtual machines and accidentally leaving a few of them running.
Luckily I've got a free gratis account on the HP cloud, but if you're at home and you accidentally blow 1,000 bucks, you know, that's not gonna be much fun. It'd have to be beans and rice for the next year or so. Okay, so what I'm presenting in the next section is the actual technique, a straightforward technique for installing OpenStack at home on a spare PC with very modest hardware and networking requirements. Okay, what exactly are we gonna do? We're gonna install the Icehouse release of OpenStack. We're gonna install it onto Ubuntu 12.04. We're gonna have it use a range of IP addresses on your home network. So the particular OpenStack release and operating system I picked here, I just picked because it worked more or less straight off the bat the first time I tried it. But other OSes and OpenStack versions are certainly doable if you wanna put the extra work in. But for the purposes of getting something up and running at home to try out, the exact details don't matter all that much. So the hardware requirements are quite modest. Number one, a spare PC with a reasonable amount of memory. As I mentioned before, I developed this talk on a machine with 12 gigs of RAM, but 8 should be usable as well. Number two, access to a console would be handy. If you end up messing up the networking, having a keyboard and a monitor you can plug in will be useful, or some out-of-band console access. So if you've spent much time messing with Linux, it's actually surprisingly easy to lock yourself out of something by messing with the networking. I've done it many times and I'm sure people in the audience have too. Okay, the networking requirements are similarly quite modest: a single NIC and an undisturbed range of IPs on your local home network. So by undisturbed I mean your DHCP server isn't going to start handing out addresses in that range.
And you can most likely change this quite easily in your home router; you just go into the web interface and tweak a few settings. So as I've described on the slide, my network at home is a pretty standard 192.168.1.0 class C network, with the gateway at .1, and I've twiddled the DHCP server to hand out addresses in the bottom half of the network, so up to and including .127, which means I've reserved the top half, .128 to .254, for OpenStack. And in the example here I'm saying the OpenStack host will live at .2. So having general knowledge about Linux networking is pretty handy. DNS and DHCP, most people have a basic idea how they work. ARP, the Address Resolution Protocol, is pretty important for understanding how things fit together. And of course, how IP routing in Linux works is useful to understand for setting up a server. So we're going to end up with a standard set of OpenStack pieces. Horizon, the web interface, which is actually really, really good compared to a lot of open source web interfaces; it looks quite good. Nova, the compute service, and the Glance image service. Neutron for networking, to allow your virtual machines to communicate at the layer two and layer three levels. Cinder for block storage, and Keystone kind of ties everything together for authentication. So even though we've left out a bunch of fairly big and important projects, we've still got the essence of OpenStack here: the ability to create virtual machines, authentication, persistent storage, networking, and a pretty interface to play with it. Okay, so we're going to install OpenStack using Chef and StackForge. So you might not be familiar with Chef. It's one of a growing set of automation tools that's basically used to install and configure software on servers. We're going to be using a set of cookbooks that are part of the StackForge project.
The StackForge project is a big set of tools and repositories used for building infrastructure, and it's actually what the OpenStack infrastructure team uses to install and run the infrastructure to develop OpenStack itself. So Chef is actually a pretty complicated piece of software and it can do a lot. But for the purposes of just setting up a single server to mess around with at home, you don't need to know very much about it. To get started, you just need to create a configuration file, run Chef, and then you can mostly just forget about it. If you're running a production system or you want to keep up with the very latest version of everything, then you need to think about it a bit more. But just for creating a single-node server, you don't need to know very much about Chef at all. So I'm going to go through a series of command lines here, but don't worry about writing them down or understanding them completely. I've got some pretty detailed walkthrough instructions on the wiki page for this project. It's on GitHub, the username is tpot, and the repo name is OS4ND, which stands for OpenStack for Non-Developers. Okay, so here's just a very quick overview of installing a base operating system for our server. Firstly, we install Ubuntu 12.04 using a DVD or whatever your favorite method of getting an OS onto a server is. Secondly, we add an extra software repository. The Ubuntu LTS releases, which is what we're using, aren't updated very frequently, usually only for security reasons or to fix bugs that cause data loss or data corruption. So here we're adding the Cloud Archive repository, which is a collection of OpenStack packages maintained by the Ubuntu cloud team. This repository is updated a lot more frequently than the LTS operating system repository. And then we install a slightly more modern kernel, a backport of the next LTS release's kernel, and reboot.
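The base OS steps just described might look roughly like this. This is a sketch under assumptions: the Cloud Archive shorthand and the exact backport kernel package name (`linux-generic-lts-saucy`) are from memory of the Icehouse-era Ubuntu docs and may differ on your system.

```shell
# Sketch of the base OS setup for Ubuntu 12.04 + Icehouse.
# Package names are assumptions; check the Ubuntu Cloud Archive docs.

# Add the Ubuntu Cloud Archive repository for the Icehouse release
sudo apt-get install -y python-software-properties ubuntu-cloud-keyring
sudo add-apt-repository -y cloud-archive:icehouse
sudo apt-get update

# Install a backported kernel from the next LTS release, then reboot
sudo apt-get install -y linux-generic-lts-saucy
sudo reboot
```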
Right, so we can install Chef by just downloading a gigantic deb file from the vendor and installing it using dpkg. Unfortunately, as is the case these days with a lot of large, complex pieces of software, they tend not to be packaged in Ubuntu or Debian; you have to use the vendor package. Secondly, we pull down an umbrella repository from StackForge, and this is just a top-level repository that has a bunch of links and metadata that point to other parts of StackForge. And then we run Chef's dependency management tool, called Berkshelf for some reason. And that just pulls down all the dependent cookbooks and installs them locally. Finally, we create a configuration file that describes our setup, and then run Chef to install and configure OpenStack. And this process only takes about 10 to 15 minutes; it depends mostly on the speed of your network. And that's actually it. Thanks, everyone. Once you've finished running Chef, you've got a very basic single-node, more or less fully working OpenStack server. Okay, here's an example of the configuration file that we would feed to Chef. For a single-node install, we don't really describe very much. I've left off a few things so it mostly fits on the slide, but they don't matter too much. But the essence of it is that it's JSON, and you say what you want and what you don't want on there and describe a few other components of what your system looks like. We've just got an IP address there that says which IP address we're gonna listen on. One thing to call out is that, even though this talk is called OpenStack for Non-Developers, we're turning on developer mode. But don't worry about that little inconsistency.
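Putting the steps just described together, the whole sequence might look something like the sketch below. The repository URL, role name (`allinone-compute`) and attribute names are from memory of the StackForge cookbooks and are assumptions; treat this as the shape of the process rather than exact commands.

```shell
# Rough sketch of the Chef install-and-converge workflow.
# Repo, role and attribute names are illustrative assumptions.

# 1. Install Chef from the vendor's giant .deb (downloaded separately)
#      sudo dpkg -i chef_*.deb

# 2. Pull down the StackForge umbrella repo and its cookbook dependencies
#      git clone https://github.com/stackforge/openstack-chef-repo.git
#      cd openstack-chef-repo && berks vendor ./cookbooks

# 3. Describe the node in a small JSON file...
cat > os4nd.json <<'EOF'
{
  "openstack": {
    "developer_mode": true,
    "endpoints": { "host": "192.168.1.2" }
  },
  "run_list": [ "role[allinone-compute]" ]
}
EOF

# ...check it parses, then converge with chef-solo
python3 -m json.tool < os4nd.json > /dev/null && echo "valid JSON"
#      chef-solo -j os4nd.json
```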
And developer mode in this case just says to Chef, don't try and create random passwords, just use well-known passwords, because configuring random passwords for all of OpenStack is a lot more work and I don't really wanna confuse everyone with that. So well-known passwords are fine for mucking around at home, but obviously if you wanna run this more seriously, you'll need to do a bit more. That's a bit beyond the scope of the talk, but I wanna put how to do that up on the wiki pretty soon. All right, so Chef, as I said, creates some well-known passwords and usernames, so we can start poking around with the system more or less immediately. The commands above set up the environment variables that are used by OpenStack's command line tools, and you can start messing around with the OpenStack command line tools here, or just head over to the web interface and start from there. Interestingly, if you run ps or any other kind of performance monitoring tool, it turns out that a base install of OpenStack with no virtual machines uses about two gigs of RAM. So if you've got an eight gig machine, that still leaves you six gigabytes for virtual machines, which is not bad just for mucking around. So one last thing before we can say that we're actually finished, and that's to hook up the networking. I'm gonna go into lots more detail about the networking in a few moments, but I'm just putting up some command lines here to describe how to connect eth0 of our PC to one of the virtual ethernet switches that OpenStack uses. And we're also creating a record for the slice of the external network that we're gonna be using on our home network. So if you remember back to the start of the talk, my home network was 192.168.1.0, I've reserved .128 up to .254 for use by OpenStack, and the gateway is at .1. And I'm turning off DHCP because we don't really wanna have two DHCP servers running on the same network; that usually ends in tears.
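The credentials and external-network hookup just described might look like the sketch below. The passwords are placeholders for whatever well-known values developer mode actually creates, and the neutron flags are Icehouse-era CLI syntax; all the specific values are assumptions based on the network plan from the talk.

```shell
# Hypothetical openrc for the well-known developer-mode credentials
export OS_USERNAME=admin
export OS_PASSWORD=admin        # placeholder; use whatever Chef created
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.2:5000/v2.0

# Plug the physical NIC into the external bridge
sudo ovs-vsctl add-port br-ex eth0

# Describe our reserved slice of the home network, with DHCP turned off
neutron net-create public --router:external=True
neutron subnet-create public 192.168.1.0/24 \
    --name public-subnet \
    --allocation-pool start=192.168.1.128,end=192.168.1.254 \
    --gateway 192.168.1.1 \
    --disable-dhcp
```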
So networking. Networking in OpenStack is probably the most complicated part of the system, and it's the piece that I had the most trouble understanding when I was trying to put this all together. It took me a really long time to get it all straight in my head, so hopefully I can save you all a bit of time and explain how it works for you now. Okay, Open vSwitch, you might have heard about it already. It's one of the major components of OpenStack's networking service, and it's an open source multi-layer virtual ethernet switch. So if you think of a physical switch with lots of cool security, monitoring and management features, but totally in software and open source, then that's basically what Open vSwitch is. After we've run Chef and configured our network as described previously, we end up with two virtual switches on our server. The first is called br-int, which is the switch for the OpenStack integration network. The 'int' is not internal, it's integration, which is confusing. And this is the switch that the virtual machines get plugged into, and also OpenStack's DHCP and some other internal services are all connected together on the integration bridge. And br-ex is the switch that represents our external network, and that is connected to our home network. So this is what the networking looks like after Chef has completed on our host. There are the two virtual network switches that I've just mentioned, br-int for integration and br-ex for external access. Now, unfortunately, these interfaces are confusing. Open vSwitch goes and creates some ports and interfaces to connect to these switches, but unfortunately the port, the interface and the switch are all given the same name as the switch itself.
So if you're talking about br-ex, then you have to kind of figure out which bit you're talking about. Usually it's obvious, but it's something to not get confused about. And also notice we've got the eth0 physical network interface, the only physical part of the system, just down there on the bottom left. Okay, let's look at what happens to the networking when some virtual machines are created. So when a virtual machine is created, it's given a virtual port on br-int and gets plugged in, and that appears as eth0 on the virtual machine. Notice how there's no connection between the br-int and br-ex bridges at the moment. So at this stage, virtual machines can only communicate with each other; they don't have access to anything outside the host. To allow virtual machines to communicate with the outside world, we need to link the two virtual switches together using a router, a virtual router, surprisingly. So the router operates at layer three, that is, the IP address level. And for the most part, the Open vSwitch bridges operate at layer two, the MAC address level. So if we add a virtual router between the br-int and br-ex bridges, then IP packets can flow between virtual machines and the outside world. And notice we also need to plug eth0 into br-ex, in a virtual sense, of course. That was one of the commands on the previous slide, telling Open vSwitch to plug the physical port into the virtual switch. Okay, so just to demonstrate how to hook up the networking, I'm gonna have a quick demo using a virtual machine. I didn't want to press my luck and try with native networking, so I've created a virtual machine on the laptop here. I'm just gonna give you a quick demo of how to do this initial networking setup. Right, so I'm just gonna create and configure an unprivileged user to do this.
In the same way that you don't do everything as root on your Linux box, it's not a good idea to do stuff in OpenStack as the admin user. So I'm just going to follow the instructions here. In the web interface, I'll create a project, or a tenant, called demo. I'm gonna create a user and then I'm gonna sign out. So let's see how this goes. That's not what I wanted. Is that what I wanted? No, I think I might have messed up my... that's annoying. Okay. Right, so I was just gonna log in as admin, zip on down to the identity panel, projects, create. Create a demo user. Sorry, a demo project. Okay. And I'm gonna create a demo user. Demo. Okay, and it's gonna be a member of the demo project, or tenant. Okay, user demo was successfully created. I'm gonna sign out. Okay, that was easy. All right, the next step. I've actually glossed over a little bit of exactly how the networking works in OpenStack. So in the previous slides, you might have got the impression that all VMs connect to br-int and so can all talk to each other. That's not the case. Some Linux networking tricks are used to isolate the different users from each other using private networks. So we need to create a private network for our demo user inside OpenStack, and I'm just gonna go through the instructions here and create a layer two and a layer three network with the address up there. So to do this, we log in as the demo user. I'm gonna go to network, and I'm gonna create a network. Notice we can see the public net here; that's the slice of our external home network. I'm gonna call it demo-net. And I'm gonna create a demo subnet, which has that 10.0 address. And I'm gonna turn on DHCP and give it some name servers. I'm just gonna point at Google's ones because it's easy. Okay, so that is creating a network for the demo user that only the demo account's virtual machines can talk on.
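The same project, user and private network that the demo creates through Horizon could be created from the command line, roughly as below. The `_member_` role name and the keystone/neutron flag spellings are Icehouse-era assumptions, as are the demo/demo credentials.

```shell
# Sketch of the demo tenant/user and private network, done via CLI
# instead of Horizon. Names, passwords and flags are assumptions.

# As admin: create the demo project and user
keystone tenant-create --name demo
keystone user-create --name demo --pass demo
keystone user-role-add --user demo --role _member_ --tenant demo

# As the demo user (after switching OS_USERNAME/OS_TENANT_NAME/etc.):
# a private layer-2 network plus a layer-3 subnet with DHCP and DNS
neutron net-create demo-net
neutron subnet-create demo-net 10.0.0.0/24 \
    --name demo-subnet \
    --enable-dhcp \
    --dns-nameserver 8.8.8.8
```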
Okay, now remember we need to link the integration network and the external network together with the router. So I'm gonna go through the instructions here to create a demo router. And okay, I'll do that right now, that's quite quick. So yes, still logged in as the demo user, I'm just gonna pop down to routers here. Create a router, demo-router. Okay, and now I'm gonna set one half of the router, the forwarding gateway, to point at the public net, because we want all packets coming into our gateway to go out to the public network, or the internet, or your home network. And we also need to hook up the other half of the router: I'm gonna add the demo network to the other side of the router. So now we've got two sides of the router, one on the internal side, one on the external side, and that's all nice and happy. Okay, I'm just gonna create a virtual machine and show that packets can go from the virtual machine out onto your home network, in this case the conference network, and then back again. Okay, I'm just gonna follow the instructions here. So I'll go to compute, instances. I'm gonna launch an instance. I'm gonna call it demo. I'm not gonna call it demo-router, thank you autocomplete. I'm gonna use the kind of testing virtual machine image that the Chef install gives you here. So I'm gonna run that, and I'm gonna connect the virtual machine up to our little private network and launch. Okay, now we make a sacrifice to the demo gods and hope this comes up. Every time I've tested it, it has. Excellent, okay, thank you. So I'm just gonna go to the console. So we should be seeing the standard Linux kernel boot-up messages here. And in a few seconds, as if by magic, we'll get a login prompt. How good is that? So this is the, oops, demo. Okay, I'm just gonna log in using the credentials that are up here.
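The router hookup and instance launch from the demo translate to CLI commands along these lines. The flavor and image names are assumptions (the testing image the Chef install provides will have its own name), and the syntax is Icehouse-era neutron/nova.

```shell
# Sketch of the demo-router hookup and test instance boot.
# Flavor and image names are illustrative assumptions.

# Gateway side points at the public net; interface side at demo-subnet
neutron router-create demo-router
neutron router-gateway-set demo-router public
neutron router-interface-add demo-router demo-subnet

# Boot a small test VM attached to the private network
NET_ID=$(neutron net-list | awk '/ demo-net / {print $2}')
nova boot --flavor m1.tiny --image cirros \
    --nic net-id=$NET_ID demo
```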
Yeah, it's handy, isn't it? More and more systems do that, I think. Okay. Okay, right. So this is the console of a virtual machine; it should be connected up and have an address in the 10 range on our private network. And let's see whether we can head out through our router and back again. Okay, well, we will just pretend that worked, okay? I won't tell anyone it didn't work if you don't. Oh yeah, right. Oh yeah, no, I had that. Okay. Oh yeah, no, I had to do this in VirtualBox before I started, but okay. Okay, so I know. Okay, right. 21683. Oh, okay, look, I'm not gonna have enough time to do all this, but this basically works, trust me. You can't connect from the conference network. You can send ping packets to Google's 8.8.8.8 name server, but you can't get requests back. So if I go back to my network configuration, I should have said to use the conference name servers, which are 130.something. So yeah, I didn't remember that I had to deviate from my script and install different name servers, but yeah, although you can ping, yeah, anyway. Yeah, you should, but I've only got another 10 minutes to get through the rest of this, so. Right, so I've already done this. I've logged into the console with the credentials. I have shown slash not shown that the networking works. And don't forget, VMs owned by the same tenant can see each other at the layer two level if they're on the same network, and virtual machines from different tenants are completely isolated and can't communicate with each other at all. Okay, this next demo should work because I'm not using any DNS names. I'm going to create a floating IP and attach it to the virtual machine.
So if we just go to instances... right, so what we're doing here is we're saying to OpenStack, please allocate me an IP address from my home network and assign it to a virtual machine. So the public net would normally be on 192.168, but the demo is a bit different because it's running locally. So it's giving me a floating IP, and I'll associate that with my demo virtual machine. Okay, so the demo virtual machine here has two IP addresses, one on the demo network and another one on the external network. And if I ping 10.0.2.133, that's me talking to the virtual machine using a floating IP address through the router. Okay, we'll just get back to the slides; watching the time, we've got to zip through the next bit. Okay, networking, I'm gonna go over it again. So this is what the networking looks like with some IP addresses attached. We've got two virtual machines, .3 and .4. If .3 wants to talk to .4, it'll do it in the same way that a physical machine would: .3 sends out an ARP request saying, hello, I'm .3, I'm looking for the MAC address of .4. And .4 will receive that broadcast request and say, hello, I'm .4, my MAC address is whatever. And then the two virtual machines can communicate with each other. For packets heading out, say you wanted to ping something, or download some software or whatever from your virtual machine, the process is quite similar. .3 knows that it needs to send packets out through the gateway at .1, so it would send out an ARP request and find the MAC address of .1. And then once the packet hits the router, some source NAT happens, and it rewrites the source address of the packet to be the address of the OpenStack host, which was .2 in our example. And the packet zips out to the internet and comes back again.
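The floating IP allocation and association done in the demo could equally be done from the command line, something like the sketch below. The example address comes from the pool reserved earlier in the talk; the exact flags are Icehouse-era assumptions.

```shell
# Sketch of allocating a floating IP from the public pool and
# attaching it to the demo instance. Address is illustrative.
neutron floatingip-create public

# nova can do the association in one step, given the instance name
# and the floating IP that was just allocated
nova floating-ip-associate demo 192.168.1.128
```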
For public IP addresses, the flow is pretty similar, except in this case, when the packet goes out through the router, in the source NAT stage the router changes the outgoing IP address to be the floating IP address, which in the example is .128, and then it goes out to the internet. Packets coming back in will be addressed to .128, so the router needs to act as a proxy ARP server. It knows that it needs to respond to ARP requests for itself, but also for .128 and any other IP addresses that it's responsible for. So on eth0, if someone asks for the layer two MAC address of .128, our .2 server will respond by proxy for it. And that's how we can send packets out and back again. And obviously the floating IPs allow incoming connections to your virtual machines, which is the whole point. So if you set up an OpenStack server at home, by using floating IPs you can have other users on your network connect to services on your virtual machines. So you might want to set up a Minecraft server or a web server or something that's accessible on your local network; that's what we'd use floating IPs for. All right, so we've got a basic OpenStack system. We can boot virtual machines, talk to the internet, and accept incoming connections using floating IP addresses. So I'm just gonna go over a few bits and pieces to help you get more out of the system. Okay, operating your cloud. Obviously these things need a little bit of maintenance and kind of day-to-day troubleshooting sometimes. So OpenStack is not a black box. Well, it might be a collection of black boxes, but really it's just a collection of REST interfaces. And these REST interfaces are in front of lots of Linux technologies that you might already have familiarity with. So for Nova, and this is a bit of an oversimplification, so sorry, Michael: Nova is just a REST interface in front of libvirt.
Keystone, you know, it's a REST interface that interacts with MySQL or some other backend that can perform authentication. Cinder, for just a local machine, we can use LVM2. Glance just allows us to read and write disk image files, fundamentally. And Neutron is a REST interface in front of, again a bit of an oversimplification, iptables, Open vSwitch, and a bunch of other Linux technologies that you probably already have heard a lot about. So, troubleshooting your little server. What's the first thing you do when you find something that's gone wrong? You look at the log files. What's the second thing you do? You poke around in the configuration file to see if it's not configured the way you thought it was. What's the third thing you do? You go search on Stack Overflow. But because the individual components of OpenStack sit in front of an underlying resource, you troubleshoot the underlying resource. Networking is probably the only really tricky thing to worry about. It uses a few technologies that I guess your average sysadmin might not have seen so much. IP network namespaces, something I hadn't actually heard about until I started messing with OpenStack: in much the same way that control groups and Docker allow process isolation, network namespaces allow network stack isolation. And Open vSwitch has a bunch of command line tools that you can use to poke around at what's going on. Okay, back to Chef. So at some stage you might want to mess with the configuration of your server, say change some part of Nova. So the very, very simplified process of doing this is: you work out what configuration file you want to change, say it's nova.conf. There'll be a template file somewhere in the cookbooks that you've downloaded called nova.conf.erb.
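A few of the poking-around commands hinted at above might look like this. The `qrouter-<uuid>` namespace name follows Neutron's convention, but the UUID is per-install, and the log path is a typical default rather than a guarantee.

```shell
# Troubleshooting sketch: inspect the underlying resources directly.
sudo ovs-vsctl show                          # bridges, ports, interfaces
sudo ip netns list                           # per-tenant network namespaces
sudo ip netns exec qrouter-<uuid> ip addr    # addresses inside the virtual router
sudo tail -f /var/log/nova/nova-compute.log  # typical nova log location
```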
You take a look at that file, work out what template variable you want to change, then add that to your configuration file and rerun Chef. And Chef will rewrite that nova.conf file and restart the right services so that things will start working again.

So for example: right now I'm running OpenStack inside a VirtualBox VM. So you can't run — well, apart from the fact that it's a Mac and it doesn't do KVM — if you're running on Linux, you can't run KVM in KVM, unless prior arrangements with management have been made. You can run QEMU in KVM, though. So here you're running QEMU in QEMU, which is a bit slow, but nonetheless a useful thing to do. It turns out that Nova has a little configuration parameter for that: you change the virt type from kvm to qemu. And by poking around in the template file, you can see that the attribute — the JSON variable you're looking for — is in a dictionary called openstack.compute.libvirt.virt_type. So we add that to our JSON file, rerun Chef, and everything's good. Okay, again, a bit of an oversimplification, but this is the general idea. So this is what this would look like. This is the configuration file I used to build a virtual OpenStack on VirtualBox. You probably can't remember the previous JSON file, but I've just added openstack.compute.libvirt.virt_type and set it to qemu.

Okay, next: virtual machine images. So when Chef installs OpenStack for you, it gives you one very small image that's basically only used for testing. So to make your whole little home OpenStack server more useful, you need to suck down and install some more images. There are a couple of places you can do that: Ubuntu has a site, and Red Hat has a site too that's got links to other images. So you just download a couple of hundred megabytes of disk image and then run glance image-create to import that. Or you can do that through the web interface as well. That's pretty easy.
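For reference, an attribute file along the lines of the one on the slide might look something like this. This is a sketch only — the exact attribute nesting depends on the version of the StackForge cookbooks you're using:

```json
{
  "openstack": {
    "compute": {
      "libvirt": {
        "virt_type": "qemu"
      }
    }
  }
}
```

Rerunning Chef with this attribute set regenerates nova.conf from the nova.conf.erb template with `virt_type=qemu` and restarts the Nova services.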
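Importing one of those downloaded images looks roughly like this, using the 2015-era glance v1 CLI. The image name and file name are assumptions — substitute whatever you downloaded:

```shell
# Import a downloaded cloud image into Glance so Nova can boot it.
# qcow2 is the usual format for the Ubuntu cloud images; "bare"
# means there's no outer container wrapping the disk file.
glance image-create --name "ubuntu-14.04-server" \
    --disk-format qcow2 --container-format bare \
    --is-public True \
    --file trusty-server-cloudimg-amd64-disk1.img
```

Once imported, the image shows up in `glance image-list` and in the boot dialog in the web interface.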
Okay, block storage. For a single-node OpenStack server we can use LVM. So if you've got an extra disk in your server, or if you had the foresight to partition it with LVM beforehand, you can configure Cinder, the block storage service, to use an LVM volume group. There are some commands on the slide there for how to create an LVM volume group called cinder-volumes, and that's kind of standard stuff if you've done LVM before. Using Cinder gives us persistent storage for our virtual machines, which we can do snapshots and backups with. You can actually use NFS if you enjoy working with NFS, but not many people do.

Okay, finally, multi-node setup. I haven't considered it in this presentation — I wouldn't think too many people would have the resources lying around at home to install a multi-node setup — but if you do, with some Chef tricks you kind of split the services into control-node services and compute-only services and install them on different nodes. The interesting thing to know is that multi-node networking is a bit more complicated: OpenStack creates a third virtual switch called br-tun and then creates a mesh between the compute nodes so they can all talk to each other.

Okay, thanks. So there's a lot of islands of information about OpenStack in the form of blog posts, Q&A sites, vendor tutorials and documentation, but there's not a huge amount on really basic getting-started stuff. So I hope in the previous 45 minutes you've seen how straightforward it is just to install a server and start using it. So if you're gonna remember only one thing from this talk, it's this: in 2015, you don't need to be a developer to install and use OpenStack effectively. And because you're here at LCA, you're a hacker and a maker and a tinkerer, and you learn things by installing it and trying it out and making mistakes.
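By the way, the cinder-volumes setup mentioned a moment ago is just a couple of standard LVM commands. A sketch — `/dev/sdb` is an assumed spare disk, and this needs root:

```shell
# Mark the spare disk as an LVM physical volume:
pvcreate /dev/sdb

# Create the volume group Cinder is configured to carve
# volumes out of (the name cinder-volumes is the usual default):
vgcreate cinder-volumes /dev/sdb

# Confirm the volume group exists:
vgs
```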
And once you're over that initial hump in the learning curve, you can move forward and start discovering more about how OpenStack works. So installing OpenStack isn't an end in itself, in the same way that installing Linux or Apache or MySQL isn't an end in itself. OpenStack is just a tool, and these tools you can use to solve problems and build solutions. So you might wanna install an OpenStack server at work for building and testing software. But whatever you do, realize that OpenStack can be used in the small as well as in the large, and it can let you cheaply and freely experiment with whatever the latest thing is. So thank you for listening, everybody. My email address is up there if you wanna say hi or have any questions, and don't forget the wiki link is also up there for a more detailed walkthrough. Thanks very much.

We have some time for some questions if you've got some.

The networking you just set up — is there a reason we're doing two bridges with a router instead of just connecting the VMs directly to br-ex?

I think that's just the way — you probably can configure OpenStack to use just one, but the way the StackForge cookbooks have implemented it is to have two bridges. So if you wanna move to a larger install and do some more tricky things, it's easier to have two bridges, and they're virtual, so they're effectively free.

Hi. I've got a few questions, hopefully some of them are very quick. Any particular reason why you're using Precise instead of Trusty for your server?

There was a problem with some interaction between Chef and Trusty that I could have spent days fixing. It may be fixed now, though.

And what about Swift? Was there a particular reason it couldn't be included, due to the constraints of the environment?

Oh, no, not so much the environment. It's just I didn't think that would be something that someone at home would necessarily want to use.
As a home user, I'm just assuming your use case would be running virtual machines and services and that kind of thing, so there's no reason why you can't do Swift as well.

So it could be deployed on that hardware?

Yeah, oh, definitely.

And last one: the proxy ARP in the Neutron gateway — is that all automated by the process of adding the floating IP?

Yeah, it is. There's this horrible, horrible set of iptables rules that gets set up on your behalf, which you probably don't want to look at.

And does all of that run on the host system, not in a separate networking router instance or something?

No, it's all on the host system, and it's all virtual inside Linux — the iptables rules and so on — so you don't need anything else.

Almost there. Long way away. Just a quick question about ARP security and network isolation. What if I'm running in an untrusted environment, and somebody changes the MAC address, or somebody changes the IP address — what protection is there? Thank you.

Yeah, that's kind of beyond the scope of what I'm considering here. I don't know the answer. I'd probably go and find someone else who knows more about Neutron and ask them the question, sorry.

We have time for one more question, I think.

With the console — that was a text-based console. Is the console graphical, or is it just text? If you have a Windows server, would it still give you a console?

Yeah, it's an embedded VNC client which talks to a VNC server in QEMU, so I'm not sure whether that counts as graphical or text — it's probably more graphical than text, but it's VNC.

Hi, would it be possible to have a quick look at the IP config of the host virtual machine that you were running that in? I'm just interested to see how those virtual interfaces surface themselves.

Ah, let me get rid of this guy. Let's see. Okay, of the virtual machine, okay. I'll see if I can get this done quickly, otherwise you might have to talk to me afterwards. The host.
Oh, the host, ah, right, sorry. I think the answer might be no. Ah, keyboard weirdness. Right, yeah. I understand some of this, and I can go through it with you, but I probably can't tell you in 30 seconds — it's a bit complicated, that's the answer. Come see me after.

Okay, thanks again, everyone. Thank you very much. Thanks.