Hello! Welcome to this presentation of the OpenStack Cluster Installer, aka OCI. Who am I? I'm Thomas. I've been packaging OpenStack since 2011, which is more or less when the project started. I work at Infomaniak, which is Switzerland's biggest hosting provider, and I'm also a contributor to OpenStack upstream.

How did it all start? I first investigated solutions to install bare metal, like MAAS, Foreman, Cobbler and Ironic, and I decided they weren't adapted to what I needed. So I started my own project. OCI uses these technologies: a DHCP server, Apache, PHP, MySQL, a TFTP server, and Puppet. OCI is made of 50% PHP, 25% shell script and 25% Puppet manifests. Everything is in Debian. When I say everything, I really mean it; this includes the Puppet manifests as well.

The way the bare metal installer works is that your servers are booted over PXE into a Debian live system. In it, hardware discovery starts and sends all the information to the PHP/MySQL server. Then a scripted Debian installation, which does not use the Debian installer, is started. The server reboots, the first-boot scripts are hooked in, and then Puppet runs.

Instead of explaining to you for hours how it works, I decided it was better to just show you. So let's start with a bare VM, created with this type of command, and let's install OCI. All of OCI is available through extrepo, including all of the packages, including the ones for Puppet. Here's an example of how to install it for the OpenStack Ussuri release. extrepo is a bit like a PPA, except that it's not about Ubuntu: it's about third-party repositories for Debian.

So what does this package repository contain? It contains an OpenStack release; there is one Debian repository per release. It contains exclusively packages which are at least available in Debian unstable, most of the time available from testing, and they are backported to the current Debian stable. Everything can be rebuilt from source using these repositories, so you can tweak absolutely everything if you wish to. It's self-contained, and you don't need any other artifact to install OpenStack using OCI. Everything is packaged, including all the puppet-openstack modules from upstream.

Let's install OCI on that new VM. It's as simple as apt-get install openstack-cluster-installer, answering a few debconf questions, and you're done. As you may see, the video recorded with Impress is not optimal, but that should be all right; you'll still be able to see. Just wait for the DB to populate. And that's it.

So what's in there? I haven't shown you the dependencies; they were pre-installed because otherwise it takes too much time to display. With OCI, you get as dependencies Apache plus PHP, MySQL, a puppet master, tftpd-hpa, and 73 Puppet packages, out of which 30 are puppet-openstack modules, plus some PHP, Puppet and shell scripts from OCI itself. OCI uses puppet-openstack upstream as a base, then adds its own classes in parallel and a dynamic, PHP/MySQL-driven ENC (external node classifier).

OCI can deploy many OpenStack services in a highly available fashion using SSL, including SSL re-encryption within the cluster itself. Everything is scriptable and uses a REST API to send the commands to OCI. Configuration of OCI is made through a simple configuration file, an INI file. There are a few important bits to address there, like the DB and repository addresses, and the trusted networks that may do hardware reporting. And there are a few things that you may want to enable depending on your hardware, like racadm or MegaCli if you have this type of hardware.
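To make the installation steps above concrete, here is a minimal sketch of the commands, assuming the extrepo repository is named after the Ussuri release; the exact repository name is an assumption, so check the OCI documentation for your release:

    # Enable the third-party Debian repository carrying the OpenStack release
    # (repository name assumed here), then refresh the package index.
    apt-get install extrepo
    extrepo enable openstack_ussuri
    apt-get update
    # Installing OCI pulls in Apache + PHP, MySQL, the puppet master, tftpd-hpa
    # and the puppet modules as dependencies, and asks a few debconf questions.
    apt-get install openstack-cluster-installer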
Before you start deploying, you need to run the script from OCI that generates its own CA, so that you can do SSL re-encryption. Then the DB and the live image must be generated. That's done through a single command, openstack-cluster-installer-build-live-image. It uses Debian live-build internally, which is a very powerful tool. It's possible to customize the image through /etc/openstack-cluster-installer/live-image-additions: you can drop any files there and they will appear in your image. That's very convenient if you want to diagnose problems like hardware issues, if you want to do firmware upgrades, and so on. You also need to configure the DHCP server to serve the subnet ranges that you wish to use. And now we are ready to PXE boot the servers. Yay!

So let's see. The server boots over PXE with pxelinux. It fetches the RAM disk and the kernel over HTTP, then boots on them. Then the server gets an IP address from the DHCP server and fetches the SquashFS image over HTTP; that's the progress bar that you just saw here. Once it's there, it does the hardware discovery, and the hardware nodes pop up in the OCI interface. The hardware discovery script reports a bunch of things like the chassis serial numbers, NIC speeds, BIOS and IPMI firmware versions, product names, amount of RAM, and block devices.

Here's how it looks from the OCI client side. What you are seeing is a bunch of VMs that I use for my development. They are popping up one by one as the hardware discovery script reports to the central OCI server. As you may see, the machine with serial C4 has two hard drives while the first three have only one, and so on. Just wait a little bit and more servers will pop up.

Once a hardware node is booted and reports to OCI, we can do ocicli machine-show with its serial number. Then we can see a bunch of things like the install status, the hostname, the hardware specs in a more detailed fashion than in the machine list, the IPMI configuration, the networking, and role-specific values that we can tweak depending on what we want to do with the hardware node.
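As a hedged recap of the preparation steps just described, the commands below sketch the CA generation, the live image build, and inspecting a discovered node. The script names and the serial number are taken from the talk or assumed, so verify them against what the openstack-cluster-installer package actually ships:

    # Generate OCI's own root CA, used for SSL re-encryption inside the cluster
    # (script name assumed; check the package's documentation).
    oci-root-ca-gen
    # Build the Debian live image the servers will PXE boot; this uses
    # live-build under the hood and picks up any extra files dropped in
    # /etc/openstack-cluster-installer/live-image-additions.
    openstack-cluster-installer-build-live-image
    # Once a server has PXE booted and reported in, inspect it by serial number.
    ocicli machine-show C4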
So if I go to the next page, you can see IPMI addresses being assigned to the new servers. The ones where the detected IPMI address field is filled in are already set. The ones where it's still zero didn't get it yet, because it needs another run of the hardware discovery script so that it can report.

Now that IPMI is set up, let's add machines to the cluster. I've got 19 VMs to play with. Let's add the first machine, C1, as a controller of the cluster. Then of course we're going to set up three of them: C1, C2, C3. And then, suspense... let's add Ceph monitor nodes: C4, C5, C6. Let's add network nodes now; two of them will be enough: C7, C8. Of course, when it's real hardware, you just type the real serial number of your server. Now we add three Ceph OSD nodes, and then some Cinder volume machines, two of them. Finally, some compute nodes. Everything in ocicli has bash completion, so the serial numbers complete themselves. And let's see: now we have all of our servers, with hostnames and pre-calculated IP addresses already assigned.

The next step is either to do ocicli machine-install-os with either the hostname or the serial number of the machine, or simply ocicli cluster-install z. It then goes through each machine one by one in the correct order, first installing the OS, then waiting for Puppet to run before going to the next role. Everything is scheduled in a sensible order so that it's reasonably optimized, while still following a fixed schedule.

So this is nice already, but it's still not enough. When you have many machines, you may wish to go further in automation. Manual setup is too much work: it's prone to error, it takes too much of your administrators' time, and it's very repetitive and boring. So let's go the extra mile and do a hands-free setup.

The first thing is that you need to describe your physical network. Over here you can see a file indicating switch names, where they are, in which data center, which row, which rack, and the name of the location that we've already set up with ocicli location-create. We can also set a compute aggregate for that rack. On top you see product names with how many rack units each hardware model takes.

Then we have hardware profiles. You can define as many hardware profiles as you like. A hardware profile is not bound to a row, meaning that you can have multiple hardware profiles for a single row. What happens is that OCI filters hardware with what you write here. So you have a product name here; I wrote only a single one, but you can write many types of product names. The PowerEdge R640 here is from Dell; you can add HP, Lenovo, Gigabyte, any type. You just define how much memory and how many hard drives. When you do that, you can enable the setup of RAID profiles, so that whenever a machine enters the live system, before the operating system is installed, the RAID profile is applied. Here we have a compute node with a system disk, a bit less than 300 GB, and a much bigger RAID 1 for /var/lib/nova/instances.

You can also see that there's a machine-set entry. machine-set holds options you would type to an ocicli machine-set command. Here we say that even if there is Ceph available in the cluster, we don't want that compute node to use it for /var/lib/nova/instances. You also see that we are setting the CPU mode and CPU model, which of course matches the hardware product name that you see above, meaning that with this type of hardware, I want a specific CPU model.
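Before moving on, here is a condensed recap of the cluster-building commands from the demo above. The subcommand spellings, argument order and role names reflect my reading of what is shown on screen and should be checked against ocicli's built-in help; the names (cluster z, domain infomaniak.ch, location zone1, serials C1 and so on) mirror the demo:

    # Create the cluster, a Swift region, a location and the networks.
    ocicli cluster-create z infomaniak.ch
    ocicli swift-region-create z zone1        # needed even for compute-only clusters
    ocicli location-create zone1 ...          # location bound to the Swift region
    ocicli network-create z pub1 ...          # public network holding the API address
    ocicli network-create z mgmt1 ...         # management network
    ocicli network-create z vmnet1 ...        # VXLAN traffic between nodes
    # Add machines by serial number and role, then install everything in order.
    ocicli machine-add C1 z controller zone1
    ocicli machine-add C4 z cephmon zone1
    ocicli machine-add C7 z network zone1
    ocicli cluster-install z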
And because of what we set up earlier, the compute aggregates and the location names, we can say that we want that node to be in the compute aggregates that we defined in auto-racking.json. There's also an Intel compute aggregate there because, I don't know, maybe I also have a bunch of AMD machines and I want them to be in another compute aggregate.

So what is the process of a hands-free setup? You plug your server into the rack and start it up. It PXE boots, then OCI sets up the IPMI configuration, like IPs, passwords and so on. Then the firmware upgrade happens: you may want to upgrade your BIOS, for example, and the server may reboot after the BIOS upgrade; the IPMI may be upgraded too. Then the RAID profile is applied. LLDP information is processed and transformed into racking information. The server is then added to the cluster depending on its role and its hardware profile. If you have enabled the DNS plugin, OCI can call that script and register your machine in your DNS. The operating system is then installed, then the machine reboots. Once it has finished rebooting, a password for root can be set up and optionally saved into a vault using, again, a plugin script. Then Puppet is run once by a first-boot script. Once Puppet has been successfully applied, OCI can call a monitoring script that will register your machine in your monitoring system. And finally, the Puppet agent is started permanently.

That's about it for the full overview of what OCI does, though I have a bit more to tell you. OCI can do a bit more advanced networking, like BGP to the host when you don't have compute machines in the cluster. For example, you could set up a very large Swift cluster using BGP to the host; that scales wonderfully. When you have compute machines in the rack, they do need L2 connectivity within the rack. For that, OCI can use a BGP-routed network using the patch for Neutron that you see over there.

In the future, I wish to add more services to OCI, like Designate, Ironic, Magnum, Manila, Trove, Watcher. The list goes on and on and on, as you may know. Thanks for watching. Thanks to Infomaniak for sponsoring my work on OCI and giving me the opportunity to present this solution to you during the virtual summit. Now it is time for Q&A. Thank you for watching.