on your own machines. That is not strictly necessary to take something out of the tutorial, but if you want to, you're certainly welcome to. You're going to need VirtualBox installed on your machines. If you haven't installed it yet, you might want to just follow along by listening, because I've heard that the Wi-Fi isn't exactly excellent, so unless you already have your VirtualBox binaries on your machines, downloading them now may be difficult. But that was in the information for the talk.

I do have five USB thumb drives here for you. For those of you who want to follow along, I'm just going to put one in every row, and then I would ask you to pass them down. Here we go. And just pass those down. I'm going to give you about 10 extra minutes so everyone can get this stuff installed, so we're going to get started at 10 minutes past the hour. And if you don't have those machines running by then, I would encourage you to just follow along with the tutorial and then duplicate it afterward in your office, or wherever. These images are also available on Dropbox, and I will make those available to you after the talk.

There are instructions for you on the screen here now. What you're going to have to do: there are two virtual appliance images on the USB drive. One is called puppet.ova and one openstack.ova. Of the Puppet one we only need one instance; of the OpenStack one we need three instances, and they're going to be named Alice, Bob, and Charlie. There's a little fixup-host script in there. You fire each machine up, you log in as root with a password of "openstack", and then you run the fixup-host script with the argument Alice, Bob, or Charlie. You reboot, and then you have those machines renamed, and they have the proper network configuration that we need for this tutorial. So that is going to be on loop here for the next 10 or 15 minutes, and if anything is unclear, please feel free to raise your hand and ask.
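The per-VM steps just described might look like this at the console. The script name and path are my best reconstruction from the spoken instructions, so treat them as assumptions:

```shell
# Log in at the VM console as root (password: openstack), then, on the
# machine that should become Alice:
/root/fixup-host Alice   # renames the host and rewrites its network config
reboot

# Repeat on the other two VMs with "Bob" and "Charlie" respectively.
```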
And again, if you choose to just listen and not follow along with the tutorial yourself, that is perfectly fine. You can do that; this is purely optional. If you do want to follow along, if you do want to build an OpenStack cloud in an hour, then you're certainly welcome to do so. And of course, as always, the slides will be available not only on the conference website, but the sources for these slides are also on GitHub, and I'll be happy to share the links at the end of the tutorial.

For those of you who may have brought your own USB thumb drives, such as the one that was given to you by the friendly folks from HP, it would be great if you could create a copy of my thumb drive and then pass those around as well, because that would speed up the process. Specifically, if we have some late arrivals after 11, it would be great if you could help those people out. Is that a USB 3 or 2? 3? Awesome.

By the way, if you happen to be on a Mac, these machines are known to work. If you are on a Windows box, it generally has to be a 64-bit box. And if you're on Windows, your VirtualBox host-only networks will be named slightly differently: they will not be named vboxnet0, 1, and 2, but will have some Windows-style names. That shouldn't hurt you, though.

There are USB sticks going around already. For those of you in the first two rows who were the first people to get the thumb drives: did those virtual machines come up OK for you? Were you able to start them? No? OK. You should be familiar with how to import a virtual appliance into VirtualBox. If you're not: in the VirtualBox File menu, there is an entry that says "Import virtual appliance", at least in English versions of VirtualBox. To import a virtual appliance, you select the OVA file that you've copied onto your machine, and then you give it a name. The Puppet one you can leave unchanged.
And for the OpenStack ones, it would be helpful if you named them Alice, Bob, and Charlie. There is a little checkbox that you can set that says "Reinitialize MAC address"; you want to do that for the OpenStack VMs. Once that is completed, you can fire them up. When you fire them up, they should come up with a console prompt, and then you can log in as root with the password "openstack". The one thing that you still need to do is run the little fixup-host script to set your host names and apply the proper network configuration.

Are you sure that device is working? And again, as a request: for those of you who have already downloaded these files onto your laptops and do have a USB stick with you, it would be great if you could help out your neighbor by copying the files onto your own USB drive and sharing it. That would be fantastic. The files are available for download online on Dropbox; I'll be happy to share that after the tutorial. Don't attempt to download them from Dropbox right now. That's not going to work on a conference Wi-Fi network. Not a blank.

OK, and for those of you who have just arrived: as I said earlier, we're going to give people about 10 extra minutes to get those virtual machines up and running on their desktops, so we are going to be starting the tutorial proper at 10 minutes past the hour. So if one of you has already set up everything and wants to grab a coffee or a water or a bio break or something, feel free to do so, because you're not missing anything.

Could I have a show of hands, please? Who of you currently has one of my USB drives? I just want to know where they are. Who has one of the five USB drives that I passed around? Oh, they disappeared that quickly. Come on, don't give me that. You have one? OK, so you no longer have it. Well, that was actually not the question; I wanted to know where they are now. I know where they were at some point, because they were in my pocket at some point.
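If you prefer the command line over the GUI, the same import can be done with VBoxManage. The appliance file names match what's on the drive, but the exact paths are assumptions on my part:

```shell
# Import the Puppet master appliance once, keeping its default name.
VBoxManage import puppet.ova

# Import the OpenStack appliance three times, renaming each instance.
VBoxManage import openstack.ova --vsys 0 --vmname Alice
VBoxManage import openstack.ova --vsys 0 --vmname Bob
VBoxManage import openstack.ova --vsys 0 --vmname Charlie
```

In the GUI, remember to tick the "Reinitialize MAC address" checkbox for the three OpenStack VMs.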
I don't have what? By the way, here's an offer. I'm willing to pledge dinner to any person who comes up with a meaningful way of distributing about a gigabyte and a half of files to a roomful of conference attendees in about 20 minutes. The rules of engagement are: all the equipment has to fit in cabin luggage, and it can't rely on conference Wi-Fi. So if you come up with a fancy idea of maybe building your own network, a BitTorrent swarm that is exclusive to the roomful of attendees, or something like that, please send me an email. And if that's actually a workable idea that I can use at a conference, I will buy you dinner.

I saw a hand. What's your suggestion? What is that? Well, that's a fantastic idea. Yeah, make a share on a laptop and then serve it via NFS or a web server. Believe it or not, I've actually tried that. And that's how you very quickly find out that the problem with conference Wi-Fi is actually not the internet uplink. Because you then have attendees saying, "This is a great idea, but I'm currently downloading at about 800 kilobits a second; it's going to take a while." So sorry, no dinner for you.

OK, so again, for later arrivals: there are USB thumb drives circulating for those people who actually want to follow along with the hands-on setup. If you don't want to follow along with the hands-on setup, that's perfectly fine; you're going to take just as much out of this tutorial that way. But if you do want to follow along, please make sure you get those machines set up within the next seven or eight minutes, because by that time we're going to be starting the tutorial proper. And again, if you've already copied the files off to your own desktop or laptop, and you have a USB thumb drive in your possession, such as the one given to you by the friendly folks from HP in your conference bag, it would be great if you could copy that stuff onto that thumb drive and share it with your neighbor.
That would be utterly fantastic. That is an option. The problem with that is: if it's an access point that is small enough to comfortably fit into hand luggage, it typically doesn't deal too well with about 120 or 200 clients either. There's really awesome Wi-Fi equipment out there that can handle like 3,000 clients at a time, but it requires checking in as hold luggage. I didn't say this was easy. If it were easy, I probably would have figured it out by now, but I'll be happy to crowdsource ideas on this one. Yes, you had a question? And that would be local to the room? Doesn't work, sorry. I can't use something that works remotely and relies on conference Wi-Fi. Sorry, the point was not having people rely on the conference Wi-Fi to connect somewhere, to download something. That's even worse; "I need interactive access to a remote cloud over a public Wi-Fi, a conference Wi-Fi" doesn't quite cut it either. That's the old "never underestimate the bandwidth of a station wagon full of tapes," right?

OK, right. So for Mac users in the room, we actually have those files on AirDrop now. It should show up somewhere. Could I have a show of hands, please, just so I can get an idea? How many people have actually gotten the virtual machines onto their desktops, like, copied the files over? Can I see a few hands there, please? OK, that makes me happy. Out of those, if you're having a problem starting those virtual machines, scream really loudly now. Great. I know these people.

Have you tried clicking into it? Are you sure that there's no window hidden somewhere that is waiting for input from you? I didn't say anything about Intel VT; I said something about 64-bit, that's all. On neither of those boxes? Well, I know that this stuff generally works, so the only thing that I can offer you is that it is very probably a local problem with your VirtualBox installation or with your setup. There are still some USB keys floating around. I'm sure.
I don't have any left here, because I've distributed them all. Yes? You need to create three instances of the OpenStack one. Three instances. No, I don't have any left; they're floating around. From this one, you need three. They need to be named Alice, Bob, and Charlie. Yes? Am I going to solve this thing? OK, OK, I'll try it. All right.

OK, so one final round of instructions here before we actually get started. Once you have retrieved a USB thumb drive from either me or one of the friendly faces around you, what you want to do is import those virtual appliances into VirtualBox: File, Import virtual appliance. In order to be able to actually do that and then get them running, you need three VirtualBox host-only networks; on Linux and Mac, those are going to be called vboxnet0, 1, and 2. You need to create one instance of the puppet.ova appliance and three instances of the openstack.ova appliance, named Alice, Bob, and Charlie, and you reinitialize their MAC addresses. You then fire them up, log in as root with the password "openstack", and run the fixup-host script in /root with the parameter Alice, Bob, or Charlie, depending on which virtual machine you're on. You reboot, and then you're ready to go.

One final time: copy everything over, start VirtualBox, create those three host-only networks, import one instance of puppet.ova, import three instances of openstack.ova, name them Alice, Bob, and Charlie, and tick the "Reinitialize MAC addresses" checkbox. Fire them up, log in as root with the password "openstack", and run the fixup-host script with Alice, Bob, or Charlie. Reboot them, and then you will be ready to go. Because that is exactly the state that my machines are in here on this machine that I'm using. OK, so I'm going to close that, and we're actually going to get started with the tutorial.
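Creating the three host-only networks can also be scripted; VBoxManage assigns the sequential vboxnet names itself. The IP range below is an assumption based on the 192.168.122.x addresses used later in this tutorial:

```shell
# Create vboxnet0, vboxnet1, and vboxnet2 (names are assigned
# sequentially on Linux and Mac).
for i in 0 1 2; do VBoxManage hostonlyif create; done

# Example: give the first host-only interface an address on the
# tutorial's management network (assumed range).
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.122.1 --netmask 255.255.255.0
```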
Before I actually dive into KickStack and the stuff that we're going to be talking about for the rest of this hour and a half, you might be wondering just who the heck I am. As we heard in the opening keynote on Monday, over two thirds of attendees are new to OpenStack Summits. Let me have a show of hands here as well: for whom of you is this not their first OpenStack Summit? That's about right. So again, that's about two thirds to three quarters new attendees. So while I'm a regular here, you might be wondering just who I am.

My name is Florian. I am one of the founders of, and a frequent instructor at, hastexo. We are a professional services company that focuses on OpenStack, on distributed storage, and on high availability. What we do is consulting services, implementation, troubleshooting, performance tuning, and a fair amount of training as well. The first link up here is, if you will, my corporate bio. The second one is a link to my Google+ page; I am reasonably active there and would be happy to connect with you and hear from you there. My personal email address is there as well, and it's easy to guess: florian at hastexo.com. I'm one of those strange holdouts who don't maintain a personal Twitter presence, but we do have a corporate Twitter account, and that is @hastexo. So if you want to tweet out your thoughts about this tutorial, we'll be more than grateful to read them. And another thing that I'd like to show you, and I'm going to put this link up again later on, is academy.hastexo.com. In case you're interested in our training program, please feel free to take a look at that as well.

All righty. For those of you who are reasonably new to OpenStack, here's a very quick overview of the OpenStack architecture. Again, a quick show of hands, please: who of you first touched OpenStack in the last six months or less? Who's really new to OpenStack? Nice. Cool, that's about one half. Awesome.
It's always great to have new stackers around. Let me give you a very, very brief overview of the OpenStack architecture that we're going to be building here, starting at the very bottom. In OpenStack we have a central identity service that we call Keystone. It is responsible not only for maintaining user accounts and tenants, but also for user authentication, authorization, and access control. Through the tenant concept, it is the thing that we can use to segment our entire physical cloud infrastructure into multiple segregated, walled-off areas that we can then use on a per-customer basis if we're a public cloud, on a per-department basis if we're a private cloud, or on any other sort of segmentation that we need. Keystone does that for us.

We then obviously have a compute layer, OpenStack Nova. That's the part that actually runs and maintains our virtual machines. We have our OpenStack image service, Glance, which maintains the images that we spin our compute guests out of. We can optionally store these images in OpenStack Object Storage, codenamed Swift, but we can also store them in a variety of other backend stores. Over on the left of the slide, we have our OpenStack network infrastructure, codenamed Neutron, formerly called Quantum. OpenStack networking ensures not only that individual virtual machines can talk to each other within the cloud infrastructure, but also that we have connectivity to and from the outside world, such as, for example, the public internet. We have OpenStack block storage, codenamed Cinder, which provides persistent storage to virtual machines, to Nova guests. And then we have a unifying dashboard that we can use as our all-important GUI to manage all of this. There are two things that are not included in this overview, which were new additions to OpenStack.
Those are the OpenStack orchestration layer, Heat, and the OpenStack metering and monitoring infrastructure, Ceilometer, which are just left out of here so as not to clutter the picture too much.

From this, it follows that we can introduce the concept of OpenStack node roles. OpenStack node roles are essentially logical, atomic, and composable classes of nodes, nodes meaning physical servers, in an OpenStack cloud. They are logical because an individual node role is typically not defined by the physical hardware that a specific node features, but rather by the services that it maintains. They're atomic in the sense that they are usually not broken down further; that being said, just like we can smash atoms, there may be ways and means to break these node roles down further, but generally speaking, they're atomic. And they're composable, which means that it is perfectly possible for a single node, a single physical server, to hold several of these node roles at a time. And that is exactly what we're going to be doing in this tutorial as well.

So let me walk through these really quickly. One node role is what we call the infrastructure node. Sorry about this. It's what we call the infrastructure node. An infrastructure node runs a relational database, typically MySQL, but it could also be Postgres, or theoretically any RDBMS that SQLAlchemy supports, and a message queue server, typically an AMQP server, typically RabbitMQ. There are other options as well; we can run Apache Qpid, we can run ZeroMQ, but by and large, and by default, it's usually RabbitMQ. That is also what we're going to be deploying in this tutorial today.

We have authentication nodes. Authentication nodes run the OpenStack Identity Service, Keystone, which provides authentication services and also, very importantly, a service catalog, that is to say, a queryable list of API endpoints in the OpenStack cloud.
API endpoints describe the services that are maintained by our API nodes. API nodes provide RESTful API endpoints to OpenStack services. As I'm sure most of you are well aware, all of the OpenStack services, Nova, Cinder, Glance, Keystone, Neutron, all of them, provide RESTful APIs, which means, essentially, that they're HTTP servers, and clients use HTTP methods to communicate with them, usually passing JSON objects in doing so. The API node is the node type that provides these. And it is entirely possible for us to have several API nodes, such that we can provide a certain amount of high availability and automatic scale-out.

Then we have controller nodes, which provide scheduling and registration services that are internal to OpenStack. Examples of controller services would be the Nova scheduler service, the Cinder scheduler service, or, for example, the Glance registry service. So those are backend services. The reason why we are looking at this node type separately from API nodes is that a controller node can live completely in the walled-off management network; it needs no connectivity to the outside world. Whereas in an OpenStack cloud, whether it is private or public, you typically do want to expose your APIs to the outside, so that people can interact with the cloud programmatically. Because of that, an API node always needs to live on, or at least be reachable from, the public network, or from whichever network your clients are sitting on. For the controller services, that is not the case; they only need to speak on the management network.

Furthermore, we have network nodes. Network nodes provide network connectivity within the cloud and to public networks.
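As an illustration of what "RESTful APIs passing JSON" means in practice, here is a sketch of a token request against the Keystone v2.0 API of this era. The host matches Alice's address used later in the tutorial, but the tenant name and password value are assumptions on my part:

```shell
# Request a token from Keystone by POSTing a JSON credentials object.
# 192.168.122.111 is Alice, our API node; 5000 is Keystone's public port.
curl -s -X POST http://192.168.122.111:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "openstack",
                "passwordCredentials": {"username": "admin",
                                        "password": "secret"}}}'
# The response is a JSON document containing a token ID and the
# service catalog (the queryable list of API endpoints).
```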
And again, this is a separate node type because it usually has different connectivity requirements: a network node typically needs to be connected to the public network, such that your virtual machines can actually be reached from the outside and can themselves reach the outside world. It's very, very uncommon to build an OpenStack cloud that is completely walled off in, say, a secure facility. There may be certain three-letter agencies in the United States and elsewhere that are known to run OpenStack, as we know from the last summit, where this may be completely walled off, and reporting on it might get you thrown in jail, or at least held up in airports for eight hours. But it is typically very common for network nodes to be connected to the public network.

And then we, of course, have compute nodes. Those are the workhorses. Those are the ones that host and run virtual machines, or, as we refer to them in OpenStack and Nova parlance, guests. So these are essentially our hypervisor nodes. Because of the way OpenStack is architected, these compute nodes need no connectivity to the outside world; they need connectivity to the tenant networks, the networks that we run our, for example, GRE or VLAN tunnels in.

We have storage nodes, which provide persistent block storage to guests. Storage nodes may either actually store our persistent data themselves, or may act essentially as proxies to one of the about a dozen and a half backends that OpenStack Cinder supports. Again, this is a separate node type, because it obviously has to listen on the management network so the internal API services can reach it, and it also has to listen on whichever network your storage infrastructure uses, which might be entirely separate from the rest of your cloud.

And then we have a dashboard node, or dashboard nodes, which provide a unified web-based user interface to cloud administrators and cloud users.
The dashboard node may well be the same node as your API node, so you might coalesce or compose those two roles onto the same node, or the same set of nodes if you're running them for HA or load-balancing scalability purposes. But it's sufficiently dissimilar from the API node to be considered a separate node type.

And then we have two new roles, owing to the fact that the Ceilometer and Heat projects just saw their first release drop. We have a metering node role, which collects metering data from a unified event stream; that is what Ceilometer does for us. And then we have the orchestration nodes, which run the orchestration engine, the Heat engine, for complex guest workloads. By the way, just as a reminder, all of these slides will be available later on, and they're also available on GitHub, so you might save your pictures that way. So again, the metering node and orchestration node are new additions in OpenStack as of the Havana release; prior to that, all of that was in incubation and experimental, et cetera.

Let's take a quick look at how this maps to our tutorial architecture. For those of you who have set up these virtual machines, you are currently going to be running four different virtual machines. One is called Alice. And again, as I said, it is very common for multiple node roles to be composed onto a single physical node: Alice is going to be our infrastructure node, our authentication node, our API node, our controller node, our storage node, and our dashboard node. And if we have time at the end of the tutorial, we're also going to make it our metering and our orchestration node. If we don't have time at the end of the tutorial, you will be able to do so on your own, and I can tell you it's actually going to be simple. Then we have Bob. Bob is our compute node; Bob's going to be the one that runs our actual Nova guests.
Obviously, this being a reference infrastructure, it's not necessarily anywhere near what you would see in production; for example, it's relatively uncommon to have a single compute node in your entire cloud. But in this case, to demonstrate the architecture, Bob is going to be our one compute node. Then we have Charlie, and Charlie is going to be our network node, the node that simulates our network gateway to the outside world. And then, because we want to deploy all of this with Puppet, we have a node named Puppet, which is going to act as our Puppet master.

For these three, for Alice, Bob, and Charlie, what you have on those boxes right now is a completely bare Ubuntu 12.04 image. The only things that have been changed from the original installation are that I put these little scripts in there for fixing up the IP addresses and the host names, and that they have Puppet 3 agents on them; Puppet is actually pre-installed. The Puppet box also runs a Puppet 3 Puppet master, and it has a set of Puppet modules pre-installed that are being developed on StackForge.

For those of you unfamiliar with StackForge: StackForge is a sort of sister project to OpenStack. It is a collection of sub-projects that are not considered part of OpenStack proper, but that have the ability to use and work with the entire OpenStack continuous integration and continuous deployment infrastructure. So anything that is managed on StackForge goes through the same Jenkins jobs, gate jobs, et cetera, that OpenStack itself goes through. And among the very, very many things that you can find on StackForge is a collection of Puppet modules for OpenStack. These modules were originally started by employees of Puppet Labs, but have since become a community effort, and there is a number of non-Puppet-Labs people who contribute to them, my team included.
And then there is another thing pre-installed on those boxes: something that we call KickStack. It is a really, really, really thin wrapper around the StackForge Puppet modules. The only thing that it is meant to do is simplify things and provide a slightly higher-level view for deploying node roles to individual nodes. If you want to take a look at that, you don't need to copy the URL off the slide; you can just snap a picture of this, and your phone will take you to it. All of that stuff is on GitHub.

The idea behind KickStack is to make OpenStack deployment easy for you using any Puppet ENC, an external node classifier. That can be the Puppet dashboard, which we're going to be using here; it could be Puppet Enterprise; it could be the Foreman; it could be an ENC that you write yourself that generates some YAML for you. That's the whole point: make everything configurable from a Puppet ENC. If you don't like ENCs, it's perfectly fine to do this in your Puppet manifests as well. Either way, if you go to this GitHub repo, there is a README in there, and there is documentation for how to use it with or without an ENC.

And that brings us to our Puppet dashboard. This is what the Puppet dashboard looks like right after starting. Depending on your individual configuration, you might be able to connect to the dashboard at the address you can see at the top, 192.168.122.100, port 3000. Or, if that does not work for you based on your VirtualBox configuration, there is a port forwarding entry in your puppet.ova, so you should be able to connect at localhost port 3000 if you don't have a local firewall keeping you from doing that. This is what the Puppet dashboard looks like at the very start: it is completely empty, except for this predefined group here named kickstack and all of these classes that we see down here.
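For context, a Puppet ENC is just an executable that Puppet calls with a node name and that prints a YAML document describing that node's classes. A minimal sketch of what an ENC might emit for Alice follows; the exact KickStack class names here are assumptions on my part, so check the repo's README for the real ones:

```yaml
# Hypothetical ENC output for the node "alice": a YAML document
# listing the classes (node roles) to apply to it.
classes:
  kickstack::infrastructure:
  kickstack::auth:
```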
And now we're going to start building our cloud. I should actually start these things. And then this looks much nicer. OK, so this is my Puppet host, and here are my Alice, Bob, and Charlie. Those are the four boxes that we have. You're, of course, welcome to connect to these boxes either using the VirtualBox consoles, or by SSH, if your SSH setup or your VirtualBox setup allows that. The IP addresses are these: Alice is 192.168.122.111, and then Bob and Charlie are .112 and .113.

The first thing that we're going to do on all of these boxes is quickly run our Puppet agent, and this will connect to the Puppet master. And yes, for those of you familiar with Puppet, you're now welcome to crucify me: the Puppet master is actually set to auto-sign SSL certificates. No, do not do that in production. But here, for this testing setup, that's perfectly fine. What this does is have our Puppet agents check in with the master for the first time. We do that on all of them, and then we switch back to our Puppet dashboard, where we should see those three nodes checked in in a moment. Right now there are two. And there is Charlie. So we've got all three nodes, Alice, Bob, and Charlie; they've checked into the Puppet dashboard, and we're happy.

What we now need to do first is add all three of these nodes that have just checked into the Puppet dashboard to the kickstack group. So over here: Group, kickstack, Edit, and then you add the nodes down here. There's a nice auto-completion feature that we can use for this purpose: Alice, Bob, Charlie. And there's one minor change that I need to make to my configuration here, not to yours.
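The check-in step on each VM is a single command. A sketch, assuming the master is reachable under the host name "puppet", which is the conventional default the agent looks for:

```shell
# On Alice, Bob, and Charlie: run the agent once in the foreground.
# --test enables one-shot verbose mode; the master auto-signs the agent's
# certificate in this tutorial setup (again: don't do that in production).
puppet agent --test
```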
So what you want to do is add just these three nodes to the kickstack group, hit Update, and then, if you view the group named kickstack, you should see the nodes listed at the bottom. So now we've told the dashboard that these nodes are part of that group, but let's actually start installing stuff. We're going to start with Alice. I'm now going to add a few KickStack classes to this node, starting with the infrastructure class, and I'm also going to add the auth class, and save the changes. So Alice has now become an infrastructure and an auth node. And now we go back to our node named Alice and run the Puppet agent again, and then it's going to do some fancy stuff for us.

Obviously, in a production setting, you wouldn't be invoking the Puppet agent manually all the time; instead, you would probably have it running as a system service. But what this thing is now doing for us is installing a few packages. It's going to add, in this case, the Ubuntu Cloud Archive; if this were a RHEL or CentOS client, it would instead add the RDO yum repositories. One of the nice things about the Puppet modules on StackForge is that they support three different distributions, namely Debian, Ubuntu, and, well, four: Debian, Ubuntu, RHEL, and Fedora. And we can use the same configurations on all of these platforms. They don't support SUSE, in case you ask; I guess this is largely due to the fact that SUSE themselves, for SUSE Cloud, seem to prefer Chef and Crowbar as their deployment and orchestration scheme.

You've got a question? Your Puppet node doesn't have a network? Hang on a second. If you don't have a network connection, please double-check whether you have your host-only networks configured and whether those are up. If you fail to do that, then as you bring up the machine, VirtualBox will give you a little warning; but if you just hit OK on that, it will still boot, and then you'll have no network connectivity.
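For reference, what the modules do behind the scenes on Ubuntu 12.04 is roughly equivalent to enabling the Cloud Archive by hand, which might look like this sketch (repository pocket names per the Havana-era convention):

```shell
# Enable the Ubuntu Cloud Archive for Havana on Ubuntu 12.04 (Precise).
apt-get install -y ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main" \
  > /etc/apt/sources.list.d/cloud-archive.list
apt-get update
```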
So if you go into your VirtualBox preferences, do make sure that you have three different host-only networks. If you are on Linux or on a Mac, those are going to be called vboxnet0, vboxnet1, and vboxnet2. If you don't have them, then those nodes are not going to have connectivity between them. Was that your question? Yes? Have you tried pinging Alice from Bob? No? The fixed IPs should work just fine, really. One thing that might help you, although it shouldn't be necessary, is to clear out the udev rules for your network interfaces and then just reboot. That is /etc/udev/rules.d/70-persistent-net.rules, I think, if I remember my Ubuntu foo correctly. And by the way, our Puppet node needs no connectivity whatsoever to the outside world; all the packages that we're going to be installing here are pre-cached on the Puppet node. It does not need to go out to the web. So the only thing that we need to make sure of is actual connectivity between those nodes.

Meanwhile, this thing has completed its first Puppet run, and I'm going to add a second one. What has happened here is that this thing has created, in a completely hands-off fashion, all of the required MySQL databases for all of the individual OpenStack services. It generates passwords for them, and it does not need to do any pre-population of those databases, because that is something that all of the OpenStack services do on installation with their various "manage db sync" commands, where they just take their database schema, which is defined in the ORM layer, and go from there.

So the question was: how do I add those nodes to the kickstack group? In the Puppet dashboard, and the same thing would be true for Puppet Enterprise, by the way, there is a list over here that says Group, and there is a kickstack group in there. If you click on that, then you're going to see a node list. By default, that's going to be empty.
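Those schema-population commands are the per-service "db sync" tools that the modules invoke for you. Run by hand on a Havana-era install, they would look roughly like this; the subcommand spellings vary slightly between projects, so treat this as a sketch:

```shell
# Populate each service's (empty) MySQL database from the schema
# defined in its ORM layer.
keystone-manage db_sync
glance-manage db_sync
nova-manage db sync
cinder-manage db sync
```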
And if you hit Edit, then here again at the bottom you have a node list, and you can add nodes here. Those are nodes that have to have previously checked in to Puppet. So again, on the left-hand side of your screen, under Groups, there is a kickstack group. As you hit Edit, you get to this screen, and that has a nodes field where you can add your individual nodes. And that catalog run is finished. Now, with our first two Puppet runs, what we've done is create all of these databases. As we can see here in /var/lib/mysql, there's a bunch of databases that have been created for Ceilometer, Cinder, Keystone, Nova, Neutron, and so on. We have also created a Keystone service. Not only have we created a Keystone service, but we've also created an admin user that we can use. And because this thing is being nice to us, it has also dumped the credentials that we need into an OpenStack RC file. We can now source this OpenStack RC file, and then we can, for example, do a keystone endpoint-list such that we get a list of currently configured Keystone endpoints. As of right now, that's going to be exactly one, namely the Keystone service itself. So that's that. It has created a Keystone service for us here, and it is already functional, as we can see from the endpoint list. We can also list our tenants here; those are the tenants. And it has created a few users for us, most notably the admin user, which we're going to be using to interact with the system. We're now going to continue by adding to our node Alice. Here's Alice. We're going to add two things: the API class, kickstack node api, and the dashboard class, kickstack node dashboard. And if we go back to Alice and do yet another Puppet run, then this thing is magically going to become an OpenStack API node.
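The Keystone sanity check just described looks roughly like this on Alice. The RC file's exact name and location are whatever kickstack generated on your node, so treat the path as an assumption:

```shell
# Load the generated admin credentials, then query Keystone.
source ~/openstackrc
keystone endpoint-list   # at this stage: exactly one entry, Keystone itself
keystone tenant-list     # the tenants that were created for us
keystone user-list       # only shows users in the current tenant
```

The service users live in the services tenant, which is why they do not show up in a plain user-list.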
We have fulfilled a prerequisite for this, which is the Keystone services tenant; that's the one that you see at the top here. The services tenant is the tenant that the individual OpenStack services use to authenticate with Keystone themselves. That is a prerequisite: we need a Keystone endpoint, we need a services tenant, and we need Keystone users in that services tenant for all of these services to use. Those users are not shown in keystone user-list, because by default that only lists users in their home tenant. So all of that has previously happened: we've got the users, we've got the databases, we've got a Keystone endpoint. The next thing we're going to do is install all of the individual OpenStack API packages, so that would be nova-api, glance-api, neutron-server, cinder-api, et cetera, and also the OpenStack dashboard. That's a reasonably complex configuration if you do it manually, with Apache and Django and the OpenStack dashboard itself, Horizon; the StackForge Puppet modules do all of that for us and make it very simple. One thing that I failed to mention earlier is that these modules on StackForge are also available on the Puppet Forge. The Puppet Forge is a collection of third-party and Puppet Labs-contributed Puppet modules that you can install and manage on your Puppet infrastructure with a simple puppet module install command, which makes things really simple. KickStack itself, by the way, is also up there. However, I do need to warn you that the package on the Puppet Forge has not yet been updated for OpenStack Havana, because of some unresolved Havana-related issues in the StackForge Puppet modules it depends on. So as this is chugging along here, as we can see, not only are the packages being installed, they're also being configured. One thing that just flew by was the Cinder configuration; now here's the Neutron configuration.
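Installing from the Puppet Forge is a one-liner per module. The module names below are illustrative of how the StackForge modules were published at the time; check the Forge for the current names and the Havana caveat just mentioned:

```shell
# Install OpenStack Puppet modules (and their dependencies)
# from the Puppet Forge onto the Puppet master.
puppet module install puppetlabs-keystone
puppet module install puppetlabs-nova
puppet module install puppetlabs-glance
```

Each command resolves and installs the module's dependency chain into the master's modulepath.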
Neutron in this setup is being configured in OVS mode, so it's using the Open vSwitch plugin with GRE tunneling. That, as we're going to see in a moment, is also going to be properly implemented on the compute node and on the network node. One thing that is important to understand, and we're going to see this in just a moment, is that the API services we're installing here are fundamentally independent from the services that implement what the APIs manage. As we're going to see in a little bit, we're going to be able to interact with these APIs even though there are currently no services at all implementing them. The idea here, obviously, is that this stuff is at all times pretty much completely decoupled. But it has the interesting side effect that you can have an OpenStack cloud that looks like it's fully functional but isn't, as we're going to see in just a little bit. So here's our Cinder API. All in all, this Puppet run should take about four minutes or so on this hardware. I'm running all of this here on one laptop; if you're running this on actual hardware, you can generally expect it to be quite a bit faster. For you guys in the third row, did those network connectivity issues resolve themselves? No? OK, I'll be happy to troubleshoot those after the tutorial. And you should be able to follow along here; the rest of what we're doing is essentially adding more and more of these classes, so it should be relatively simple for you to duplicate later on in the comfort of your home, your office, or your plane seat, which is what comes closest to home for me. I don't know about you guys, but for me, that happens to be the case. The steps are documented in the KickStack GitHub repo. They are documented both for running with an ENC, such as the Puppet Dashboard (it actually explains how to run it with the Puppet Dashboard), and for running without an external node classifier.
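Running without an ENC means assigning the same node roles in a site.pp manifest instead of the dashboard. A minimal sketch, assuming the kickstack class names follow the node-role naming used in this tutorial (the canonical list is in the KickStack README):

```puppet
# site.pp sketch: same role assignment as the Puppet Dashboard groups,
# expressed as plain node definitions. Class names are illustrative.
node 'alice' {
  include kickstack::node::infrastructure
  include kickstack::node::auth
  include kickstack::node::api
  include kickstack::node::dashboard
  include kickstack::node::controller
  include kickstack::node::storage
}
node 'bob'     { include kickstack::node::compute }
node 'charlie' { include kickstack::node::network }
```

Either way, each subsequent agent run on a node converges it toward its assigned roles.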
If what you prefer is just hacking a Puppet site.pp manifest, you are certainly free to do so; you do have that option, and it follows exactly what's on GitHub, so there's no specific magic here. OK, we're scheduling a refresh of the Apache service, so we should be seeing a dashboard relatively shortly. Here's another thing where I guess I can't pledge dinner, but I can at least pledge a beer: if you find out why my laptop has less I/O when there is an external monitor plugged into it, I'll buy you a beer. This is a very special case of demo effect. The I/O throughput on this thing is actually significantly slower when it is up on a lectern with an external monitor plugged in. I used to think it was an interrupt going crazy from the Wi-Fi, but killing Wi-Fi doesn't change a thing. Interesting stuff. By the way, this is really nice about the Puppet modules, and actually not just the Puppet modules but also the respective Ubuntu packages that we're using here: they will, for example, do the db syncs for you at the end of installation runs. That is to say, they will populate the Cinder database for you. Yes? So what happens is a release manager at Puppet Labs takes the modules from StackForge, builds a Puppet module, and uploads it to the Puppet Forge. That's what happens. So the stuff that is currently in Git on StackForge is actually in really, really good shape for Havana. So if you want to work with that, by all means, please do. And we're always really happy for contributions. So if you do find a bug, file it on Launchpad; if you can send a patch and upload it to Gerrit, that's wonderful. There are some really helpful Puppet developers working on this stuff, for example, a few people that I had the pleasure of working with when I started on this. Joe Topjian from Cybera was a really great help.
Dan Bode, who just launched his own company and is ex-Puppet Labs, was a really great help as well. So there's a bunch of people that you can talk to, who will talk to you and help you out if you're willing to contribute. And that's still doing its Ceilometer bits. Sure, any time? Yes. Sure. So there is, I don't know if that has ever gone out to all the attendees, but there was information that did go out to the speakers: there is a site on the OpenStack Summit website that we can upload slides to, and the stuff is being published, I think, at the end of every day. I've seen individual speakers share their links, but I haven't seen tweets go out to everyone about where to find the slides. Maybe that's a bit of a miscommunication; I'll be happy to take that up with the speaker manager. But that information is definitely available. All of the speakers for the summit have been encouraged to upload their slides or provide links to their slides where they're hosted online. So yeah, all of that material is definitely available. Well, I just shared it with you. And hang on one sec, let's see. There you go. I'll come back to that at the end. Yes? So currently, here's Alice. It currently has these four classes. OK. So in the interest of saving time, let me kick off my next Puppet run straight away while we start to interact with these services. We're going to add the rest of the node roles that we want on Alice next, and that is going to be controller and storage. And that's going to be it for Alice. OK. Let's go back here real quickly and kick off that Puppet run. And while we do that, lo and behold, here's our OpenStack dashboard. Just for the heck of it, let's quickly copy our OpenStack RC file from Alice. Here's the generated password that we got, and we're going to use that real quick, with the username admin. Of course it has cookies enabled. There we go.
So as you can see, even though we don't have any compute nodes, we don't have network nodes, we don't even have a Nova scheduler or a Glance registry service, nothing, our OpenStack dashboard is already fully functional. The reason for that is that all of our APIs are already fully functional. As we can see here, from Keystone... sorry, I need to source this, obviously. Here's my OpenStack RC. And actually, let's head over to Alice from here. That is, of course, busy right now, but that's all right. If I do a keystone endpoint-list, as you can see, all of these API services have been registered with Keystone, and all of them are already responding. So for example, if I do a nova list, it comes back with an empty list, because I don't have any guests currently defined or running, but the API service is already fully functional. The same thing is true for Cinder: I don't have any volumes defined or anything like that, so it comes back with an empty volume list. There we go. If I do a neutron net-list, it is going to talk to Neutron and come back with an empty list of networks. If I do a heat stack-list, it's going to talk to Heat and ask it about... whoops, that is probably currently restarting, but that's OK. And then we can also do a glance image-list. There we go, and it comes back with an empty list of images. All the while, this is chugging along in the background. Here we can see various Nova services being installed, and after that, we're going to see various Glance services, various Neutron services, et cetera. And that doesn't keep us from interacting with all this stuff already, which, architecturally, I think is kind of cool.
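The whole empty-API round trip just demonstrated fits in a handful of commands (RC file path assumed as before):

```shell
source ~/openstackrc
nova list          # empty: no guests defined yet
cinder list        # empty: no volumes yet
neutron net-list   # empty: no networks yet
glance image-list  # empty: no images yet
heat stack-list    # empty (briefly unavailable while heat restarts)
```

Every one of these returns successfully, even though no scheduler, compute, network, or storage service exists yet; only the API endpoints are up.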
Now, again, we're going to let this chug along here until we actually have a fully functional controller node, which is also going to act as our storage node. Once we're done with that, we can continue with our nodes named Bob and Charlie. They're only going to get one node role each, namely compute node and network node. And once that is complete, we can actually start running virtual machines on there. By the way, that was the reason why my Heat call didn't work earlier: Heat had just been stopped and then restarted. And that completed a lot quicker here. What we have now is that the controller node and the storage node are essentially completely installed. Among other things, it also has a cinder-volumes volume group that has been created for us in LVM. Again, in the interest of time, let me kick off the compute node installation on Bob before we go back to interacting with our services on Alice. So I'm now going to make Bob my compute node: kickstack node compute. Then we let Puppet do its thing. The first thing this does is fetch a few pieces of information from the node that has just been set up, such as where my Nova API is, and the few passwords it needs so it can connect. And of course, it also needs the Ubuntu Cloud Archive, because that's where it gets its packages from. And like I said, just in case you're wondering, the Puppet node is actually running a pre-populated apt-cacher-ng service, a caching proxy, in offline mode, so none of these boxes actually need to connect to the internet or to some Ubuntu APT archives to fetch this kind of stuff. OK. Now that we have a full-blown controller node, we can start interacting with our cloud, and we can do so either on the command line or using the OpenStack dashboard.
Just for a little example here, if we go into the tenant that we have just created, we can take a quick look at, for example, our network topology. As you would expect, our network topology is currently completely empty; there is nothing in there. Now, we could use the OpenStack dashboard to create a network topology, create some routers, and do all that. But because OpenStack is meant for quick and massive cloud deployment, we can also do that in a scripted fashion. As it happens, there is a script here on your node named Alice, which you can use once you've got your controller node installed; you don't need a network node at this point yet, just the controller node and the API node. If you source your OpenStack RC file, the one that we created earlier, and then run the little create_neutron_networks script, it will create a handy little network topology for us. There we go. There's a router, there's a network, and there is a gateway. And there is our network topology. Hang on a second, that should look slightly different. It should look like that, exactly. So we have an internal network that we're going to call the admin net. We have an external network that's going to be our simulated public internet. And then we have something in between that we call the provider router, which is a virtual router. All of this is, of course, tenant-specific, so this stuff exists for the tenant that we have just created. A completely different user in a completely different tenant would not see this network topology, and they would be able to create their own completely independently. There are a few other things that I want to be able to do when I ultimately launch my compute infrastructure. In the meantime, let's take a quick look at how Bob is doing. That's still chugging along, fair enough.
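Roughly, a script like create_neutron_networks boils down to a short sequence of Neutron CLI calls. The network names and CIDRs below are assumptions mirroring the topology shown in the dashboard, not the script's actual contents:

```shell
# Internal tenant network plus subnet.
neutron net-create admin-net
neutron subnet-create --name admin-subnet admin-net 10.5.5.0/24

# Simulated public internet, flagged as external.
neutron net-create ext-net --router:external=True
neutron subnet-create --name ext-subnet --disable-dhcp ext-net 192.168.122.0/24

# Virtual router connecting the two.
neutron router-create provider-router
neutron router-interface-add provider-router admin-subnet
neutron router-gateway-set provider-router ext-net
```

Because all of this goes through the tenant-scoped API, every tenant can build its own topology independently.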
And there are a few things that I normally do in a setup like this, and recall most of this is basically motivated by the training that we do. I like to add a flavor here. This is something that does exist in DevStack by default, but it doesn't exist in a straight-up OpenStack install. I call it m1.nano. It's a really tiny flavor: just one vCPU and 256 MB of RAM, and I'm not going to give it any non-ephemeral disk storage. I can create this flavor; there's my m1.nano flavor, and I'm going to start using it in a moment. Another thing that I want to be able to do is connect via SSH to the guests that I'm about to create. So what I want to add here is my own SSH key pair, and I can do that in the Access & Security tab under Key Pairs. What I'm going to use is just my standard SSH key pair; ssh-add -l, there we go, that's the SSH key that I want to import. Import key pair; I'm going to be really creative and name this florian. That's that; my key pair is imported. Let's just take a quick look at whether that key pair is, in fact, there. Here's the key pair list; there we go, that's the key pair that I've just created, with that fingerprint. Another thing that I want to do is, of course, upload an image into Glance that I can then use. You could use any image you like that works with OpenStack; for example, you might be using the Ubuntu cloud image or something else. What I tend to use is CirrOS. CirrOS is a super tiny image, only about 12 MB in size, that does know about cloud-init and can therefore run in an OpenStack infrastructure and be configured properly that way. And this also was on the USB drive, just so no one would actually have to download it from the internet. So I'm now going to create this image here. I'm going to call it cirros, and my image source is going to be an image file.
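The flavor and key-pair steps above have straightforward CLI equivalents; the flavor ID of 42 is an arbitrary choice, and the public key path assumes a standard RSA key:

```shell
# nova flavor-create <name> <id> <ram MB> <disk GB> <vcpus>
nova flavor-create m1.nano 42 256 0 1

# Import an existing public key under a name, then verify it.
nova keypair-add --pub-key ~/.ssh/id_rsa.pub florian
nova keypair-list
```

A disk size of 0 means the flavor uses no non-ephemeral root disk sizing, which is fine for a tiny image like CirrOS.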
And that is going to be... where are my cloud images? There's my cloud images, and that's my CirrOS. Upload that. That is a QCOW2 image, and we can go ahead and create it. This, by the way, is a very interesting pitfall: if you have a Glance API service running so the OpenStack dashboard can connect to it, but you have no Glance registry and no working store behind it, you can still upload an image through the dashboard. It will actually say, thank you, I've accepted this image for you, and then there will be no trace of it ever, anywhere. That's because it is the Glance registry service that actually stores the image metadata in the database, which is then retrieved via the API. So that's an interesting pitfall that people sometimes find surprising or frustrating. So let's see how Bob is doing. That's still going strong. The thing that it has to do here is, because we are on Ubuntu 12.04, which shipped with a kernel that did not include the Open vSwitch datapath module, that module has to ship as a DKMS module. Basically, that means every time you're installing this, it's actually building that module for you, and that is probably going to take a few more minutes. So let me try and do something here; let's see if this works any better if I just kill this thing. All right, there we are, I have got Wi-Fi. Why is that in use? You are off. That's like totally not cool. That doesn't like me. OK, well, it's progressing. And that is our Nova configuration. So let's go ahead and proceed with Charlie and actually complete the install part of the configuration, at least. There we go, here's Charlie. For Charlie, we're going to add the kickstack node network class. In the meantime, over here, our glance image-list... oops, glance image-list... should show the new image that we have. There we go.
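The same upload can be done on the command line; the CirrOS file name is an assumption based on the releases current at the time:

```shell
# Upload the local CirrOS QCOW2 file into Glance, visible to all tenants.
glance image-create --name cirros \
  --disk-format qcow2 --container-format bare \
  --is-public True --file cirros-0.3.1-x86_64-disk.img
```

Note that with no working registry or store behind the API, this command, like the dashboard, would appear to succeed while the image silently vanishes, which is exactly the pitfall described above.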
So that's the CirrOS image that we have just created. Another thing that we can of course already do is create a volume in Cinder, because ultimately we want to be able to provide persistent storage to our machines. So we're going to create a volume here real quick. We're going to name the volume test, and it's just going to be, say, one gig in size. It's not going to come off of a snapshot or anything; it's just a standard-issue volume. And that is our volume, now available. As we can see here, not only do we see the volume in the dashboard, we also see it in our LVM logical volume list, and of course, it is also being exported as an iSCSI target. So in this case, we're using the standard Cinder backend with LVM and iSCSI, and under the hood it does all the magic of creating this volume, creating a target, and actually exporting it on the iSCSI portal. So Bob is finally done installing the compute stuff, and now we're going to let Charlie become our network node, and then we can actually get going. One thing that this does for us on the compute node is create our Open vSwitch virtual switch. The configuration it sets up uses what it calls an integration bridge; this is where we put the individual VMs' interfaces, the VM ports. And it creates a tunnel bridge, which is what, through a GRE tunnel, creates network connectivity between compute nodes, and between the compute nodes and network nodes. That doesn't look very spectacular right now, because we only have one endpoint of the tunnel configured, but as we're going to see once Bob completes, we'll have the other end of the tunnel configured as well. For those of you unfamiliar with GRE tunnels in Neutron: they essentially build a full mesh of GRE tunnels between the individual compute nodes and the network node, and that acts like a gigantic virtual switch where, from anywhere in the cloud, I can plug a virtual machine in.
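A full mesh grows quadratically: with n tunnel endpoints (compute nodes plus the network node), every pair gets a point-to-point GRE tunnel. A quick sanity check of that arithmetic (plain Python, counting each tunnel once, not an OpenStack API):

```python
def gre_mesh_tunnels(endpoints: int) -> int:
    """Number of point-to-point GRE tunnels in a full mesh of endpoints."""
    return endpoints * (endpoints - 1) // 2

# Two endpoints (Bob and Charlie, as in this tutorial): a single tunnel.
print(gre_mesh_tunnels(2))   # 1
# A modest 10-node cloud already needs 45 tunnels.
print(gre_mesh_tunnels(10))  # 45
```

This quadratic growth, combined with per-tenant segmentation, is behind the scaling caveat that follows.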
What I should mention, for those of you who are planning to deploy this in production: do reconsider that. OVS with GRE tunnels does not scale particularly well. For example, we've seen issues with networks that were running upward of, say, 80 tenants or so. The issue is not really how many hosts or how many guests you have, but how many guests are running in how many tenants, because that is how the network segmentation works. There are, of course, other options available to you: for example, you might use the Arista backend, or you might use the Open vSwitch backend but with VLANs instead of GRE tunnels. There are several options there for you to choose from. OK, and here's Charlie, chugging along with Neutron. So let's take a look at how that's doing with Open vSwitch; it hasn't built the tunnel yet. Now, what this does for the network node is provide not only network connectivity within the virtual network but also to the outside world. To that end, it has a separate bridge, as we're going to see in a moment. On Bob, which is a compute node, it has the integration bridge and the tunnel bridge, and as we're going to see on Charlie in a second, there is also an external bridge. That's the bridge that maintains connectivity to the outside world, to the public network. That's a bunch of Neutron here. And again, this is doing the OVS install. In the interim, nothing has really changed in the OpenStack dashboard, except that we should see our... there we go, we have seen Bob pop up as a compute node. The dashboard gives us some info here about the type of hypervisor that we're using. Obviously, OpenStack Nova supports a multitude of backends by now. Arguably, the one that you will find the most in production is the one with libvirt and KVM. Xen is also very well supported, because, as is well known, the Rackspace open cloud mostly standardizes on that.
And there are other hypervisors that are supported, such as Hyper-V and ESX, and there is ongoing work with LXC containers and several others. Here's a list of all of the services that are currently exposed in the OpenStack dashboard; that is effectively the dashboard equivalent of keystone endpoint-list. What we can also see here are the various Nova services, the various compute services, that are running. As you can see, there is a handful of Nova services that are part of the controller node, such as nova-consoleauth, nova-conductor, nova-scheduler, and nova-cert. Then we have the actual nova-compute service that runs on our node named Bob, and we have network agents running on our node named Charlie. Does that mean that Charlie has actually done what it's supposed to do? Yes, it has, beautiful. We can now see that the ovs-vsctl output is suddenly becoming a little more interesting: it has created the tunnel for us, that's the gre-1 tunnel up here. So we now have a fully working virtual switching infrastructure that we can plug the system into. What this has also created for us is a set of IP network namespaces, which we're going to be using for logical network segmentation on this node. Now that we have gotten this far, let's go ahead and actually use the dashboard to fire up a virtual machine. We are going to launch an instance and call it cirros1, there we go. I want one of those, and I want to boot it off of the CirrOS image. There we go. Access-and-security-wise, I want to use my key pair, I want to put it into the admin network, and now I'm going to launch this thing. Now, what happens once that launches is, of course, that the image gets pulled over to the compute node we've been assigned to, an IP address is assigned, and then it is here and we can take a look at the console. So that machine is now booting, and we can also check out the log.
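Inspecting that state on the network node takes two commands; the exact bridge and namespace names depend on your deployment, so the comments describe what you would typically see:

```shell
# Show the OVS bridges: br-int (integration), br-tun (with the gre-1
# port once the far end exists), and br-ex (external) on the network node.
sudo ovs-vsctl show

# List the IP namespaces used for per-tenant segmentation: typically one
# qrouter-<uuid> per virtual router and one qdhcp-<uuid> per network.
sudo ip netns list
```

The same ovs-vsctl check on Bob shows only br-int and br-tun, since a compute node needs no external bridge.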
As I said, this is a minimal cloud image; pretty much the only fancy thing it is capable of doing is that it actually runs cloud-init, and as such it is capable of connecting to the Nova metadata API service. As you can see at the very bottom here, it just connected to a Nova service at the magic IP. There we go, and that is our completed CirrOS boot. What we can do to this instance now is quickly associate a floating IP from the external network: allocate it, associate it with our CirrOS box, and there's our machine. That thing actually runs on the cloud. That is our CirrOS host; I just connected to it using the SSH key pair that I previously defined. This is the IP configuration, 10.5.5.3 as we would expect; that is the IP address that we just received from Neutron. We can also do a quick cat /proc/partitions; that's just a single disk currently available. So let's go here, take our volume, and attach it to this instance. Say we attach it as /dev/vdb; that volume is now attached. Let's see if it really is. There's a vdb, so let's create an ext4 filesystem on it real quick, mount it, and now we do echo hello world | sudo tee /mnt/foobar. So our /mnt now has a file in there called foobar, and we can now unmount this thing, go back to our dashboard, and create a new instance called cirros2: boot from the same image, same thing, same thing, admin net, launch. Oops, must select an image, of course. There we go. And that's our next instance, and that was of course a little quicker because it didn't have to fetch the image again. Take a look, that should be in the same network. No, there it is, that was fast. Oh, that doesn't have an SSH server, not yet. There it is, OK. And let's see, how is /proc/partitions doing on that one? OK, there's only one. So let's do something about that.
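Inside the guest, the whole format-and-write exercise is just this; it assumes the volume showed up as /dev/vdb, as in the demo:

```shell
# Check that the attached volume is visible, then format and use it.
cat /proc/partitions
sudo mkfs.ext4 /dev/vdb
sudo mount /dev/vdb /mnt
echo hello world | sudo tee /mnt/foobar

# Unmount before detaching, so the volume can move cleanly to another guest.
sudo umount /mnt
```

Because the volume is backed by LVM and exported over iSCSI from the storage node, the data in /mnt/foobar survives the detach and follows the volume to the next instance.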
We're now going to detach that volume, so it is available again, and attach it to cirros2 instead, as /dev/vdb, and there's our foobar. That's the persistent volume, now on the other box. And we can now play with things like disassociating floating IPs and reassociating them with the other box, et cetera. So we can do pretty much everything that we want to do with an OpenStack cloud. So that was one hour and 20 minutes, with me doing lots and lots of talking, going completely from scratch, from bare machines, to a complete OpenStack cloud. I could go on, but we're unfortunately out of time. The metering and orchestration node, just in case you're curious and want to do that, is a simple matter of adding the kickstack node metering and/or kickstack node orchestration class to your node named Alice; so that would be here, and then another Puppet run, and you should be there. Just to wrap up: if you happen to like this talk, it would be great if you could let us know on Twitter; hastexo is the handle that you want to tweet at. That'd be great. For those of you interested in the slides, this is the link to the GitHub page where you can find the slides for this presentation. I hope that's large enough for everyone that the QR code actually registers properly; from what I've heard, this works really well with Google Goggles. And just in case you are interested in learning more about this, I would encourage you to take a look at our hastexo Academy schedule. If you're based in Europe, we have classes coming up in Munich. If you're based in India, we have classes coming up in Bangalore. If you're based in South America, in which case I would really not envy you for the trip that you made over here, we have classes coming up in São Paulo. So just in case. This one? Certainly, I will be happy to leave that on. I'm going to hang around for about another 20 minutes until I need to run, so by all means, if you have more questions, feel free to grab me.
This concludes my talks at the OpenStack Summit. So as of right now, I am a summit attendee, just like you guys are. I'm going to enjoy that massively. And I hope you're going to do the same thing. I wish you a great remaining day and a half at the OpenStack Summit. And it would be great if I could see some of you again either in Atlanta or in Paris next year. Thank you for coming and enjoy the rest of the day.