We're going to kick things off for the second half of the day with Thomas. He's going to tell us about installing OpenStack on Debian and give us a little demo of how to use it.

Thanks for having me. It's a pleasure to be here. So, who am I? I'm 30 years old. I'm the founder of GPLHost, which does hosting around the world. I'm a French guy living in China. And I'm the founder of the OpenStack packaging group on Alioth. I've been involved in packaging OpenStack since the Cactus release, which means since it was one year old. And I'm currently the most, if not the only, active maintainer of OpenStack in Debian. I'm also upstream for a few pieces of software, and I package other things in Debian like XCP, which is the only way to use Xen together with OpenStack. I used to do lots of PHP PEAR packages, but now we've got a much more active team. So GPLHost has many points of presence; that's the same slide I put up every time.

So here we go. OpenStack, as you may have heard, is big. It's a lot of packages, with OpenStack itself producing more than 100 binary packages, and it takes me a long time to deal with that. This is an ls of my working folder on my laptop, and I had to interact with all of the Debian packages that you see here, just to give you an overview. So it's not easy to package or to set up, because it has many moving parts. For each component, you have to go through a procedure which consists of registering the component with the authentication program, which is called Keystone, and setting up its database correctly, and all of this is both error-prone and takes too much time.

So when you set up an OpenStack cloud, your goal is to set up OpenStack Compute. But to have a successfully running OpenStack Compute system, you need to be able to give it some images. That's where Glance comes into play. With Glance, basically, you upload images, download images, modify them, and then you can give them to Nova so that it can start virtual machines.
And then you have Cinder, which does the block storage. So if you need permanent storage, a permanent partition, or to boot from a hard drive, you will do that through Cinder. Then there's Quantum, which does the networking. Each customer or user of the cloud can build their own network, their own LAN, which is going to be private and will not conflict with the networks of the other users of the cloud. Meaning I can use the 10.0.1.0/24 network while somebody else does as well. And then you have supplementary packages like Heat, which does auto-scaling, and Ceilometer, which does the metering. So you need to have all of these components set up. And on top of that, you use either the CLI or the Horizon web interface. Horizon is the OpenStack dashboard, and it's just another client.

And there's another component called Keystone that does the authentication. When you talk to any of the components here, you will do it through the Keystone authentication mechanism. Keystone is not only authentication; it's also a catalog of services. So Nova, Glance, Cinder, Quantum, Heat, and Ceilometer will all register their services with their IP address, port, and URL. You have to tell Keystone where they are.

When that's done, for the storage, you can either use Cinder doing block storage with LVM slices that it exports over iSCSI — that's not what I use — or Ceph. I use Ceph because I think it's better: it does the storage for everything in OpenStack, including object storage, image storage, and Cinder volumes. So you plug Ceph into Cinder, Glance, and Nova, and each of them has to be configured so that it can access your Ceph storage.

Once you've done that, each component needs to be able to talk to the others through a message queue broker. You have the choice between Qpid and RabbitMQ, but mostly everybody uses RabbitMQ.
So you set that up, and then in each and every component you say: here's my configuration, here's the IP of RabbitMQ, here's where it goes — and after that, magically, all the components can talk to each other. And every one of these components — Nova, Glance, Cinder, and so on — also needs to store things in a database to keep the state of your cloud.

So there are many wires, and for each of them you have actions to do so that it's configured correctly. Here's how it goes with Nova: you create the SQL database, grant access rights to it, edit the sql_connection directive in the configuration file, edit my_ip, define the RabbitMQ IP, login and password, then configure it so that authentication goes through Keystone. This is exactly what I drew on the schematic; that's what you have to do with Nova. Once everything is done, you invoke nova-manage db sync, and it magically populates the database with tables and content. Once you've done that with Nova, you have to do the same for Ceilometer, Cinder, Glance, Heat, and Quantum, and for packages like Keystone and Horizon you have a minimum of setup to do as well. And even the db sync was a bit of a mess: you have to do it differently for every package. At the end of the day, people who really deploy clouds use Puppet scripts, Chef, DevStack, or many other ways to do it so that they don't do it by hand.

So what I've tried to do with OpenStack is to make it easier for newcomers to use the packages using just debconf. The debconf templates use openstack-pkg-tools, which I wrote, to automate this whole process of registering one component with another. The reason I'm explaining all this is that even though you have the debconf questions, you still need to understand which components will be plugged in where, otherwise you won't understand what to answer. So let's see how it works for Nova, for example.
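To give an idea of what the packaging automates, the manual Nova steps described above look roughly like this. This is a sketch only: the database name, passwords, IPs, and the exact config directives are illustrative assumptions in the style of Grizzly-era nova.conf, not guaranteed values.

```shell
# Sketch of the manual Nova setup the talk describes (values illustrative).
# 1. Create the database and grant access rights:
mysql -u root -p <<'EOF'
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
EOF

# 2. Point Nova at the database, the message broker, and Keystone:
cat >> /etc/nova/nova.conf <<'EOF'
sql_connection = mysql://nova:NOVA_DBPASS@localhost/nova
my_ip = 192.168.0.10
rabbit_host = 192.168.0.5
rabbit_password = RABBIT_PASS
auth_strategy = keystone
EOF

# 3. Populate the database schema:
nova-manage db sync
```

Multiply this by every component in the diagram and it becomes clear why he moved it all into debconf.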
The first thing is using dbconfig-common to configure Nova to access MySQL. If you're used to Roundcube or phpMyAdmin, it's the same thing: do you need to access a database, configure it with dbconfig-common, which type of database (MySQL), the password. And once that is done, the postinst of Nova will do the db sync for you, so you don't have to think about it.

Each component has to register with the Keystone catalog, so I automated that as well. You tell it the IP of your Keystone server and the tenant name — in Keystone you have the concept of an admin tenant, which every OpenStack package will use to get in touch with the others, to get authentication tokens to talk to other components. So it asks for the admin tenant, the username, and the password for Keystone, and then it does the registration.

Every API — what we call an API program in OpenStack is the REST server you interact with to do things; for example, there's cinder-api to create block devices and quantum-server to request new networks or subnets — every one of these API packages has this endpoint configuration thing in debconf. So: register with the catalog? By default it's no, and if you reinstall the package it will still be no, so that you do it only once, even if you reinstall. Then: what's the Keystone IP, what's the Keystone auth token — the auth token of Keystone is a special one that you use to bootstrap things. To do the Keystone setup and register the services, you just use the auth token that you set up when you installed Keystone itself. And you answer with the region; the region is a bit of an availability thing — you can have, I don't know, Le Camp or Zurich, whatever. I've tried to set sensible priorities so that you have fewer questions at high priority.
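What the endpoint registration automates looks roughly like this when done by hand with the Keystone client of that era — the token, URLs, region name, and service ID here are all illustrative placeholders:

```shell
# Bootstrap with the admin token set at Keystone install time
# (the Grizzly-era keystone client reads these two variables).
export SERVICE_TOKEN=ADMIN_TOKEN
export SERVICE_ENDPOINT=http://192.168.0.2:35357/v2.0

# Register the service in the catalog...
keystone service-create --name nova --type compute \
  --description "OpenStack Compute"

# ...then its endpoint in a given region (SERVICE_ID taken from above).
keystone endpoint-create --region regionOne \
  --service-id SERVICE_ID \
  --publicurl   'http://192.168.0.2:8774/v2/%(tenant_id)s' \
  --adminurl    'http://192.168.0.2:8774/v2/%(tenant_id)s' \
  --internalurl 'http://192.168.0.2:8774/v2/%(tenant_id)s'
```

The debconf questions he describes collect exactly these pieces of information (Keystone IP, auth token, region) and run the equivalent of this for you.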
If you want to see all the questions and make sure it does what you want, you just set the priority to medium. I also tried to pre-fill the questions; whenever it was possible to detect things, like the volume group in Cinder, I did that. And at the end, there's a meta-package thing so that you can install everything at once.

One of the most common ways to set up OpenStack is to install one controller that controls all of your cloud, and then multiple compute nodes. So when you install the OpenStack compute node meta-package, it will set up nova-compute running with KVM and Open vSwitch. This includes quantum-plugin-openvswitch-agent, which has to run on both the proxy node and the compute nodes. It sets up all of that for you; you don't need to worry much about which packages to install. On the first slide, I showed you that there were many, many components, right? Cinder, Glance, API, registry, whatever — it goes a bit in every direction. That's why I created these meta-packages: to make it easier for you, so you won't need to select them one by one.

The result is that if you look at the official OpenStack documentation, just for installing something as simple as Glance to upload and use images, they have pages and pages, because you need to manually create the database, manually create the access rights, manually populate it. Under Debian, you just do apt-get install glance and it does it.

The other thing I like about using debconf is that it gives you an interface to test things, and it's proven. Andreas Beckmann sent some bugs to the BTS saying that my packages didn't upgrade from one version to another, and that covers the database-populating thing as well. And another thing is that you can always use the Debian frontend's non-interactive mode. So if you don't like having questions asked, you can still run it non-interactively.
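The meta-package workflow he describes can be sketched like this — the meta-package names are assumptions based on how he refers to them in the talk, so check the archive for the exact names on your release:

```shell
# On the controller (proxy) node: pull in all the API services at once
# (hypothetical meta-package name as used in the talk).
apt-get install openstack-proxy-node

# On each compute node: nova-compute with KVM, plus the
# Open vSwitch quantum agent, via one meta-package.
apt-get install openstack-compute-node

# Or the minimal example from the talk: just the image service,
# with debconf handling the database and Keystone registration.
apt-get install glance
```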
When I do my tests, I just use preseeding, and it installs the proxy node in one go without asking me any questions. Here's an example preseed; I won't spend long on it — I guess everybody in this room knows how to do preseeding. But the thing is, I maintain the preseed script in that repository, so you won't need to rewrite it from scratch. There's still some work that needs to be done there, but it's mostly working.

Another difference from Ubuntu — let me check the time, yeah — is that our release cycles are very different. Ubuntu follows the release cycle of OpenStack, which is every six months, while in Debian I have to take care of other aspects, like the NEW queue: it takes a long time to get new packages in. I do my work on top of Wheezy, because in production, at my company, which pays me to do these Debian packages, they use Wheezy, and I guess mostly everyone who wants to run OpenStack in production doesn't want to run sid or Jessie. So I maintain, in a side repository, Wheezy backports, including all the dependencies — hundreds of packages — and you are free to use it. My hope is that I will soon have space to store that within Debian; I hope I will have access to something like PPAs, if that happens in Debian.

Okay, so I had lots of requests and bug reports about not respecting some rules which are stronger in Debian, so I hope the packages we have in Debian are more DFSG-compliant. Also in Debian we have Heat, which does auto-scaling — you won't find it in Ubuntu — and there's ftp-cloudfs and sftp-cloudfs, which I also maintain. So if you want to contribute, you would be very welcome to do so, because even though I'm full-time on this, I hardly find enough time to do everything that is needed in one release cycle. So, welcome on IRC; I can help if you are doing setups. I'd be happy to.

Before we go to the Q&A, I want to show you that I've set up a small OpenStack cloud instance, on Debian, here at DebConf. It's in the server room there.
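A minimal sketch of the preseeding approach he describes — note that the debconf template keys below are invented for illustration; the real keys ship with his packages and preseed script:

```shell
# Write an illustrative preseed file (key names are hypothetical examples,
# in the style of debconf owner/template/type/value lines).
cat > openstack-preseed.cfg <<'EOF'
glance-api glance/register-endpoint boolean true
glance-api glance/keystone-ip string 192.168.0.2
glance-common glance/dbconfig-install boolean true
EOF

# Load the answers into the debconf database, then install with no
# questions asked (commands shown, not run here):
# debconf-set-selections openstack-preseed.cfg
# DEBIAN_FRONTEND=noninteractive apt-get install -y glance
```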
There are two compute nodes with four hard drives each. Two hard drives on each node have been used for Ceph, so we have redundancy for the storage. And currently I've set up only one compute node. And then you can try it.

Okay, so let me show you a bit how it works. Can you see the screen? If you can't read it or whatever, you can have a look at the video. The username is debconf13 with password supercode power, and you can use it to push images, install virtual machines, and see how it works. The way it works is that you do apt-get install of the OpenStack clients, and then you'll be able to use it. Then you can do nova list — it's a bit slow because the controller is on a virtual machine which doesn't have many resources — quantum net-list, et cetera.

So I can show you the web interface; I'll show you the admin one, it's more fun. Everything that you do in Horizon is compatible — it's done just like with the Python API. Everything you do in Horizon you can do in the shell, on the command line with the OpenStack clients, or you can use the Python modules, because every client is both a Python module and a command-line tool. When you use the console, it uses Spice. Okay, that's the demo effect.

Okay, do you guys have any questions? Go ahead. There are like 10 minutes remaining and we want to try it out. How much hardware do I need to try it — how much hardware is required to play with it? Okay, you can pretty much use OpenStack with a single computer. The only thing is that it's going to be hard for you to go from one node to three, and two doesn't really make sense because you won't have redundancy. So you can start with one or three and then add more; best would be to start with three. That's what I did here. Okay. So here I have a virtual machine running on Xen — it could be whatever — just a computer running the API packages: nova-api, glance-api, whatever.
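The client session from the demo, roughly, looks like this. The package name and the OS_* values are illustrative; the clients read credentials from the environment, so you'd substitute the endpoint and account he shows on screen:

```shell
# Install the OpenStack command-line clients (package name illustrative).
apt-get install openstack-clients

# Credentials the clients read from the environment (example values).
export OS_AUTH_URL=http://controller:5000/v2.0
export OS_TENANT_NAME=debconf13
export OS_USERNAME=debconf13
export OS_PASSWORD=secret

nova list          # running instances
quantum net-list   # virtual networks
glance image-list  # available images
```

As he notes, each of these clients is also importable as a Python module, so anything scripted here can equally be done from Python code.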
And then you can just set up two compute nodes, for example — it works with one as well — and then Ceph, which does the storage; it's best to run that with at least two as well, I guess. Okay, thank you. So you would need two gigabytes for running all the API programs, and then if you want to have some workload on the compute nodes, obviously, it depends what you are going to run on them. Okay. Is there any other question? No question. So, thank you, Daniel.

How do upgrades work? Can you upgrade one component at a time or do they have to go in lockstep?

Yeah, upgrading is a big problem, because every component is tied to Keystone. So I guess you could upgrade Keystone first and then upgrade the others next. The big problem you have is, let's say you have 10 compute nodes, okay? Then you upgrade one node. With my packages, it's going to upgrade the SQL schema automatically, and then the other nine remaining compute nodes will have the old code running against the new database, and that's a problem. So mostly you have to upgrade everything at the same time. Though upgrading doesn't mean that you turn off your running VMs; you can keep your VMs running and upgrade the whole cloud — it does work. The problem is at scale: when you have 1,000 nodes, you have to upgrade 1,000 nodes at the same time, so it may not be really practical, or you have a bit of downtime. What you can also do is use cells — there's a concept of cells. Let's say you have a data center with three rooms; then you can have a kind of tree and say, I upgrade this cell first, then this cell, then this cell. That would work better, I guess. All right, any other question?

Hi. What is your plan for the stable release of Debian? I mean, will you support OpenStack Essex for the whole life of Wheezy?

I'm trying to support OpenStack Essex as much as I can within the stable release. Sometimes it's easy, because patches are easy to backport; sometimes it's hard.
Currently I have a very difficult patch to backport to Essex; if I get no help from anyone, then probably I won't have time to do it. Me, as a Debian fan, I'd love to support stable as much as I can, though my customers and my company don't really need it, so it's a bit of a conflict. I say that because of how Ubuntu handles OpenStack: I believe that when Ubuntu say they do security support for Essex, they do it as badly as I do it for stable. They have no time, it's a different team, and I saw many security problems that were not addressed in Ubuntu either. So if you run an OpenStack cloud, my advice would be to use the current release and upgrade as it goes. If it's a private cloud, most of the time you're not affected by the CVEs, because most CVEs concern public infrastructures; they won't affect you if you're the only one with access to it. But if you have a public cloud, then you have to stay current with the latest stable release — that's my advice. So probably use my private repository for OpenStack on top of Wheezy, where I currently maintain Grizzly and soon Havana. If you use a private cloud, then stable is probably fine.

I packaged Hadoop for Debian, and my experience was that people much rather used the Debian packages from the company Cloudera, which had so many Hadoop contributors, and nobody was interested in the Debian packages. This demotivated me, of course. Have you had similar experiences, with people saying they'd take upstream OpenStack in some form instead of the Debian packages?

There's no way you can use things directly from upstream with OpenStack if you have a large installation; I've never heard of anyone doing that. You have big companies like Rackspace that have their own deployment system — I know that, because they have so many nodes, they can't even think about using FTP, so they deploy using BitTorrent. But I don't know — do you know how they deploy at HP? Can you reply to that? Hang on, microphone.
The current public cloud is actually still running Diablo. We currently have a second data center where we try to stay as close to upstream as possible. The deployment guys internally are putting a framework in place where they periodically pull from upstream, apply some HP-specific patches on top, and push it out.

All right, any other question? Because otherwise I have some — yeah, Thomas?

You already said that you're not using OpenStack for your company — that would have interested me. What's your motivation, then?

No, no, no. I'm not using OpenStack stable on Wheezy — I mean, I'm not using Debian stable with whatever OpenStack version is in it. In Debian stable there is Essex, and then after that there's Folsom and Grizzly. Each OpenStack release in Debian has a release name, and I use Grizzly, which is from last April.

So you have an offering in your company based on OpenStack?

So I maintain Grizzly currently in sid, because I can't maintain it in stable, as you know, and then I use my own backport of Grizzly to Wheezy. Does that make sense?

Yes, but I looked at the GPLHost homepage and I couldn't figure out what product is using OpenStack — what are you using OpenStack for?

Currently I use it only for a private cloud, because I have no billing software to plug in, so I can't do the billing. There's metering with Ceilometer, but metering only tells you how many resources a customer has been using, and then you have to kind of do the billing manually, right? Or use something like jBilling or one of these fancy applications where you can plug things in. But yeah, for a private cloud it works great.

Any other questions? Okay, so I will use the remaining two minutes to tell you that I also maintain... You've got about 15. What? You've got 15, don't you? It's 45 minutes, right? I've got 15 minutes remaining. So I went faster than I thought. Okay, it doesn't matter, so I can show more. So... Is it installed? No, it's not installed.
So I also maintain a script to build OpenStack images. It uses debootstrap to install into a chroot, and afterwards it produces a simple image that you can use in Glance. That's the final result — the image list. It's a very simple script: it uses parted to create the partition table, does the formatting, creates the... Okay, so basically you just call it like that, with the URL, ftp.ch.debian.org/debian — that's what you use for bootstrapping and what goes in your sources.list — and -r for the release.

So, because you had the demo with non-free services this morning, let me explain how to use the free software services we have at DebConf to create an image and then start using it with OpenStack. That way, even if you don't set up OpenStack at your company or at home, at least you can try it. I heard that HP can provide some free accounts as well — maybe you can talk about it? I don't know. I'm sorry, it's the second time I've done that to you.

Well, that's my plan, but the higher-ups don't necessarily agree with me yet. So yes, the plan is to provide some limited free accounts to the different communities, Debian as well. But we still... I mean, our billing, our sign-up system, requires credit cards, and you guys usually don't like to put in your personal credit card. So I'm working with product management to get that resolved.

So my advice, if you need to use a cloud, is to use something that is interoperable, like either HP Cloud or Rackspace Cloud. And then afterwards, if it becomes large, it can become relevant for your company to set up your own private cloud on top of OpenStack, and in that case it's easy to migrate from a public provider to your own. I'm sorry, it's a very, very slow network, so the debootstrap is not doing anything. Sorry? Ah, really? It doesn't matter — let's say it's finished. Okay.
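The image-build steps he describes — parted for the partition table, a debootstrap into the mounted image, then conversion for Glance — can be sketched roughly like this. This is a simplified hypothetical outline, not his actual script, which also handles the bootloader, serial console, cleanup, and more; run as root.

```shell
#!/bin/sh
# Rough sketch of building a Debian cloud image (illustrative only).
set -e
MIRROR=http://ftp.ch.debian.org/debian
RELEASE=wheezy
IMG=debian-${RELEASE}.raw

# Create a sparse 2 GiB disk image with one primary partition.
dd if=/dev/zero of=$IMG bs=1M count=0 seek=2048
parted -s $IMG mklabel msdos mkpart primary ext3 1MiB 100%

# Attach it as a loop device, format, and debootstrap into the chroot.
LOOP=$(losetup --show -f -P $IMG)
mkfs.ext3 ${LOOP}p1
mkdir -p /mnt/img && mount ${LOOP}p1 /mnt/img
debootstrap $RELEASE /mnt/img $MIRROR
umount /mnt/img && losetup -d $LOOP

# Convert to qcow2, the format uploaded to Glance in the demo.
qemu-img convert -O qcow2 $IMG debian-${RELEASE}.qcow2
```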
So I'm going to SSH again to the same machine. It's going to produce this, okay, and then you do glance image-create with the file, which is in qcow2 format. And once you've done image-create like that, it appears here, in glance image-list.

So once you have the ID here — okay, maybe I should first explain that. You have the concept of virtual networks in OpenStack. What you see here is a graph that is maintained by Horizon dynamically, depending on what you've created. The network here in green is the real network that we've been using at DebConf; you may recognize the IPs. So when you create a new user, you either create that external network again to map the real IPs, or you do it once and say that that network is publicly shared. And once you've done that, you add another LAN, like here, with the IPs that you want. Because I'm admin, I can see both: that one is my own network as an admin, and that one is the one I've set up for the debconf13 account. And both will have a different router. Every time you have a new user, it will set up a new virtual router and connect to it. So your virtual machines won't have a public IP address directly; they will only have IPs on the LAN, given by the DHCP. What you do is ask for one of the IPs on the external network to be forwarded through NAT to one of your virtual machines, and from there, on one virtual machine, you can set up HAProxy, for example, to reach your other virtual machines.

On the command line, it goes like this: quantum net-list — you see, I have my three networks — and if I do quantum subnet-list, it shows me their IP addresses, okay? Here you can see my external network and the two LANs that I've created. Then nova boot. With a simple script like this one, I list the images, I list the nets.
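The image-upload and network-listing commands from this part of the demo look roughly like this in Grizzly-era client syntax; the image name and file are examples:

```shell
# Upload a qcow2 image to Glance (name and file are examples).
glance image-create --name debian-wheezy \
  --disk-format qcow2 --container-format bare \
  --file debian-wheezy.qcow2

# Check that it appeared, and note its ID for nova boot later.
glance image-list

# Inspect the virtual networks and their subnets (IP ranges).
quantum net-list
quantum subnet-list
```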
And then I create it with nova boot, giving a flavor, an image — the ID of the image — and one NIC. If I don't put --nic net-id=..., then by default it will attach the virtual machine to the external network and to the LAN at the same time, so it will have two virtual network ports. And that's the basics of how to use OpenStack with the clients. So I'll give you the credentials again if you want to try. Any questions? We've got like 13 minutes to go. I think we had many questions already. But then, that's it. Thank you.
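The boot command he describes can be sketched like this — the instance name, flavor, and the IMAGE_ID/NET_ID values are placeholders you'd fill in from glance image-list and quantum net-list:

```shell
# Boot a VM attached to one specific network (IDs are placeholders).
nova boot my-instance \
  --flavor m1.small \
  --image IMAGE_ID \
  --nic net-id=NET_ID

# Without --nic net-id=..., the instance gets a port on both the
# external network and the LAN, as mentioned in the talk.
nova list   # watch the instance status
```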