Hi everyone. Yay, we're all so excited to be here. Woo! Wow, you guys had some caffeine. That's good. Welcome back to the Ansible day if you were here before, or welcome in general if you weren't. Julia is going to talk about one of my favorite things on Earth. I think it's awesome. So she's going to talk about Bifrost, and we're all going to learn lots of stuff and be totally enlightened, so take it away.

Good morning. I'm first going to apologize because I don't have any of my notes available to read from, so this is going to be a little rougher than normal. The talk is titled Using Ansible and Ironic to Deploy Bare Metal CI. I'm an Ironic contributor, and I also work as a developer advocate at IBM. And the microphone is just a little too loud, if we can turn that down. Maybe? Okay, I think that's better.

So in a world of virtualization and containers, why even talk about bare metal? Well, something has to deploy the operating systems that you build everything else on. And you may have requirements around memory, production-like environments, or high-performance computing. You may have hypervisor hosts that you have to deploy, like TripleO does, or container hosts, which you might see tomorrow morning. Or you may have regulatory compliance needs that require that someone be able to put their hands on the server the data lives on, which is an arcane concept, but we still have to support it in the new way of doing things.

So how does one get started? How does one actually wrap their head around the ability to control hardware? Well, first you have to get Ironic, OpenStack's answer to deploying bare metal. And besides, it's a cute bear with drumsticks. Get it? Bare metal? I know, it's a bad joke.

Bifrost specifically is a subproject of Ironic that installs Ironic for standalone usage and provides tooling for rapid deployment of systems. What makes Bifrost special? Other services are not required. You don't need Nova. You don't need Glance. You could probably bolt them on yourself, but you don't need them to actually use the playbooks to deploy machines. Keystone is supported, not on by default, but it is there and you can use it, and there's Neutron support, which is a whole lot more work than I'd ever thought. Bifrost is largely based on Ansible and uses the os_ironic and os_ironic_node modules in Ansible to deploy machines in an environment; there's a small sketch of what one of those module calls can look like just below.

So who here has heard of InfraCloud? I see a few hands; more should rise up. InfraCloud is deployed with Bifrost, which allows the InfraCloud folks to rapidly wipe a machine, rebuild it using their processes and tooling, and move on in life.

What about OPNFV? Raise your hand if you've heard of that project. Excellent. A few fewer people, but I'm sure more hands will go up over the next few summits. OPNFV runs test environments called pods that are built on bare metal with the networking in mind to simulate production environments. They operate production-like bare metal CI testing, which is unlike what most environments do. Most environments just do testing in virtual machines and say that's enough, we don't need to go much further. OPNFV is actually testing in fully certified, racked, and cabled environments, so they need tooling like this to support deployment at the time the job runs, so they have the latest code deployed to those machines. As such, they also supply Bifrost with third-party CI; that way we don't break them, or at least we try not to.
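Since those modules just came up, here is a minimal sketch, not from the talk, of what a single enrollment task with the os_ironic module can look like against a standalone Ironic with no Keystone. The endpoint, driver, UUID, MAC, and IPMI credentials are all made-up placeholders, and the exact auth parameters vary between module versions, so check the module documentation for the release you're on.

- name: "Enroll a node into a standalone Ironic"
  os_ironic:
    auth_type: None                         # assumption: standalone Bifrost install, no Keystone; spelling varies by version
    ironic_url: "http://localhost:6385/"    # hypothetical Ironic API endpoint
    driver: "agent_ipmitool"
    uuid: "00000000-0000-0000-0000-000000000001"
    nics:
      - mac: "52:54:00:aa:bb:cc"
    properties:
      cpus: "4"
      ram: "8192"
      disk_size: "100"
      cpu_arch: "x86_64"
    driver_info:
      power:
        ipmi_address: "192.168.122.10"
        ipmi_username: "admin"
        ipmi_password: "secret"
  delegate_to: localhost

In practice you rarely hand-write these values; the dynamic inventory described next feeds them in for you.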
So how do you get started? First, you need a bare metal drummer bear, our unofficial mascot. And you need lots of computers, lots of hardware that is remotely manageable, somewhere to install Ironic, and all the other little things you need to support network connectivity in the environment. You also need network connectivity such that you can manage the machines, and the machines can boot and reach the node where Ironic is installed by Bifrost.

And then one will wonder: and then what? You'll need inventory data, most likely, to tell Bifrost's playbooks what to actually do with the hardware. In this example it is written in YAML. We have a name, we have variables that are picked up by the playbooks, and then we have variables and data structures that are picked up by the Ansible modules themselves when they're executed and shipped off to the Ironic API so certain actions can take place. There's a rough sketch of one of those inventory entries a little further on.

So if we have one inventory with a bunch of data in it that's generated and supplied to us, we can actually perform all of the actions on the bare metal nodes, from enrollment to deployment, with just one job being triggered. Setting your inventory data source, enrolling your nodes, and then deploying to them is three steps: you set an environment variable that tells the tooling where to find the file, and then you have two playbooks using a dynamic inventory that reads the file contents and parses them out in a way that Ansible is able to understand: oh, these are my machines that I need to run these playbooks and these actions upon.

Or, if you want, you can just take the modules and write your own playbooks: one playbook to enroll a node and another to trigger the deployment of a node. There's a sketch of a hand-rolled deployment playbook further on as well.

Now I'm sure someone's wondering: what if I don't have details about my hardware, to be able to deploy to my hardware? Raise your hand if you've ever been told you have hardware but you have no idea how to access it. Oh, it felt like half the room raised their hands, okay. Bifrost supports hardware discovery through the ironic-inspector tool set, so you're able to gain enough insight to begin configuring your environment without actually knowing the hardware you have, as long as you have working networks and the hosts can network boot. And the neat thing is, remember those two ansible-playbook commands I showed you earlier? Well, there is a setting you can pass in that's just "ironic", and the intermediate library, Shade, will go and retrieve the inventory of hosts directly from Ironic. So for everything Ironic knows about, you can then write playbooks and have logic and steps driven around all that data you receive. Ideas percolating? Excellent.

So what about my CI workflow? How do I tie this in? That's not a question I can answer. What I can do is point you back at these commands and remind you that you have playbooks and you can combine them, so it could be one action to feed the data in and execute upon it. We have a good example of doing this: the test-bifrost.yaml file that we actually use to test Bifrost in virtual machines and against virtual machines, where we deploy them, then go log into them and make sure we can act upon them. So just add the commands to your job runner. There's no concept of scheduling; you have to explicitly state what hardware you want to deploy to, and in most cases that's fine. If you need scheduling across a pool, then maybe Nova is what you need.
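Since the inventory slide isn't reproduced in this transcript, here is a rough sketch of a single host entry in that YAML inventory, modeled on the format the Bifrost documentation describes; every address, credential, and UUID below is made up. The file is pointed to with the BIFROST_INVENTORY_SOURCE environment variable and read by Bifrost's bifrost_inventory.py dynamic inventory; setting that same variable to just "ironic" is the trick mentioned above that makes Shade pull the host list straight from Ironic instead.

testhost1:
  name: "testhost1"
  uuid: "00000000-0000-0000-0000-000000000001"
  driver: "agent_ipmitool"
  driver_info:
    power:
      ipmi_address: "192.168.122.10"      # BMC address the playbooks will manage
      ipmi_username: "admin"
      ipmi_password: "secret"
  nics:
    - mac: "52:54:00:aa:bb:cc"            # MAC used to network boot the node
  properties:
    cpus: "4"
    ram: "8192"
    disk_size: "100"
    cpu_arch: "x86_64"
  ipv4_address: "192.168.100.20"          # address the deployed host should come up on
  instance_info:
    image_source: "http://example.com/images/deployment_image.qcow2"

With a file like that in place, the flow is the three steps described above: export the inventory source, run Bifrost's enroll-dynamic.yaml playbook, then run its deploy-dynamic.yaml playbook against the same dynamic inventory.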
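And since the "write your own playbooks" slides aren't captured here either, this is a hedged sketch of what a hand-rolled deployment playbook built on the os_ironic_node module could look like. The image URL, checksum, and UUID are placeholders, the auth settings again assume a standalone Ironic without Keystone, and parameter spellings differ between module versions, so treat it as a starting point rather than the exact playbook from the slide.

---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: "Deploy an image to an already-enrolled node"
      os_ironic_node:
        auth_type: None                        # assumption: no Keystone in this environment
        ironic_url: "http://localhost:6385/"   # hypothetical Ironic API endpoint
        uuid: "00000000-0000-0000-0000-000000000001"
        state: present
        deploy: true
        instance_info:
          image_source: "http://example.com/images/deployment_image.qcow2"
          image_checksum: "d41d8cd98f00b204e9800998ecf8427e"   # placeholder checksum
          image_disk_format: "qcow2"
        wait: true
        timeout: 1800

The matching enrollment playbook is essentially the os_ironic task sketched earlier, wrapped in the same hosts and tasks boilerplate.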
There are caveats to doing this, though. The Ansible modules' networking support was written in the Kilo time frame. Since then, Ironic has gained support for port groups and various port options that are not yet supported in the modules. I would expect that support to appear by the end of the year, but that's been my hope for the last six months.

And when you're deploying programmatically with this sort of tooling, you want to have some step that checks that your machine is in the state you expect before proceeding to the next step. A good example: if you're just checking that SSH is alive, your deployment ramdisk might get the same IP and might also respond to SSH, so all of a sudden your playbook starts running against the deployment ramdisk and not the end machine that's being deployed by that ramdisk. Which could be very bad. (There's a small sketch of a safer check after the Q&A below.)

Any questions? Seems like everyone's still eating, so.

[Audience question, inaudible.] I think that would be something that will appear in the next six months. And there are a few different issues that we need to work through, specifically the configuration workflow within switches, because every network operator tends to configure things a little differently, and each switch manufacturer has various defaults. For example, on the switch I have at home, I have to explicitly tag the VLAN as a tagged VLAN on the port every single time for the trunk interface; on other switches I don't have to do that.

[Audience question, inaudible.] Oh, well, the whole intent of this is to get an operating system down that you can then turn around and configure beyond that. So OPNFV's whole idea right now is to use Bifrost to stand up the base OS and then use OpenStack-Ansible to deploy everything else on top for their OpenStack deployments. Does that answer your question? Excellent.

Any more questions? We like questions. What's your question? Can you say your question one more time? I'm just trying to understand what the question is. Turn it on. Maybe if I get closer. Okay, so what's the question? Okay, the reason it hasn't been updated is that the intermediate library, Shade, has to be updated and then the modules have to be updated, and both tasks take a couple of months to actually execute so that you have that support. I would hope that it's not a total change; I hope it wouldn't be a breaking change. We'd probably just have to write another module to help facilitate the workflow that comes from having port groups and all these other options now available for ports, instead of just "you create a port." For the Neutron integration, locally you'll actually use Open vSwitch, just so you know, if that's another question you had.

Okay, any more questions? Thank you.
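To make the caveat about checking machine state concrete, here is a minimal sketch, not from the talk, of the kind of gate you could put between deployment and any follow-on configuration. It assumes the openstack client on the Bifrost host can reach your standalone Ironic, however you've configured auth for it, and node_uuid and node_ip are hypothetical variables supplied by your inventory.

- name: "Wait until Ironic reports the node as active"
  command: "openstack baremetal node show {{ node_uuid }} -f value -c provision_state"
  register: provision_state
  until: provision_state.stdout == "active"
  retries: 60
  delay: 30
  changed_when: false
  delegate_to: localhost

- name: "Only then wait for SSH on the deployed host"
  wait_for:
    host: "{{ node_ip }}"
    port: 22
    timeout: 600
  delegate_to: localhost

Gating on Ironic's provision state first avoids the trap described above, where the deployment ramdisk answers on the same IP and the same SSH port as the machine you actually deployed.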