Okay, so the talk is titled Getting Started with Ironic Bifrost, and Bifrost is aptly named for this exact purpose. The purpose of Bifrost was to be a bridge into Ironic; the name comes from Norse mythology. We wanted to give people a great way of getting into Ironic and using the power of Ansible at the same time. So we wanted to make something very easy and very customizable. We wanted to provide something to enforce and drive the stand-alone use case, and we also wanted to improve Ansible support in OpenStack. This was a time when anything that had the word OpenStack in it was viewed as impossible to install without every other service, and we had really lacking Ansible support, right when Ansible was taking off as a huge thing in automating systems and software. This was before we had OpenStack-Ansible as a project. It was a long time ago. So to provide some backstory, because backstory is always interesting and also important context: we started in 2015 as a collaboration between some Ironic contributors and some TripleO contributors specifically. We started with a proof of concept for the stand-alone use case, and a lot of this really started as an idea that kicked around in 2014 in the TripleO community: how do we drive Ironic directly? How do we remove Nova from the picture? Combine that with one of the major use cases where people have to go spend long hours in data centers: deploying lots of machines. What if someone were able to run a playbook and do it? Or maybe a couple of playbooks, depending on their process. They don't really need all these other services: they don't necessarily need Swift, they don't need Cinder, they really don't even need Keystone. So Bifrost was born.
And in March 2015, which seems like a long time ago and also not very long ago at all, we announced the creation of this project, with the goal of contributing to and working with other teams. Going back through it, I realized we've found bugs in Ironic, Shade, diskimage-builder, Glean, os-client-config, and Ansible itself, and our code is now in the Ansible Collections for OpenStack, which is a testament to the effort we've put in. We evolved from a model where we had no support for authentication at all to actually supporting authentication, which was less of a hurdle than we thought it would be. And even last year, we realized some people still don't want Keystone; they don't need the complexity. So we added HTTP basic auth support to Bifrost, so that one can still have a little authentication, especially if they're walking into a data center with their laptop and running these services on it; they might not want someone to do something awful while it's plugged into that trusted network. At the same time we added HTTP basic auth, we also added TLS support. So you can now have a TLS-enabled Ironic with Bifrost on your laptop, in your data center, deploying your farm of machines. So this is where it starts getting into how all this works. Installing Bifrost is actually incredibly simple, and I really have to thank Dmitry, who is on the call, for creating the Bifrost CLI. In essence, you clone Bifrost, you run the install command using the Bifrost CLI, and you set your authentication environment variable, the standard mechanism used to select a clouds.yaml configuration for OpenStack clients and services. At that point you should be able to run baremetal commands, or openstack baremetal if you prefer, and create bare metal nodes. Once the installer is done, you should have a fully functional Ironic deployment.
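To make that concrete, the install flow just described looks roughly like this. This is a sketch based on the Bifrost documentation; the clone URL and the "bifrost" cloud name are the upstream defaults, so check them against your local setup:

```shell
# Clone Bifrost and run the installer via the Bifrost CLI
git clone https://opendev.org/openstack/bifrost
cd bifrost
./bifrost-cli install

# The installer writes a clouds.yaml entry; select it so the
# OpenStack clients know how to reach and authenticate to Ironic
export OS_CLOUD=bifrost

# Verify the deployment by listing bare metal nodes (empty at first)
baremetal node list
```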
So you're installed, and you can actually do things. Is it just taking over the world at that point? The important thing is, once you've created nodes in Bifrost, in order to leverage Bifrost to deploy a node, you have to set instance_info and you have to deploy the node. Here I have to point to two links in our user documentation that walk through how to do this in a step-by-step example series. But Bifrost is also written with playbooks, and the use of playbooks, in mind. So you can use the playbooks directly; in fact, the Bifrost CLI uses the playbooks. You have the ability to customize your settings and run the install playbook yourself, just like anyone else. And the big secret is that even our CI in Bifrost is a single playbook. It has all the actions, all the commands, all the steps wired in, so it just goes through them one at a time, and if a step fails, the job fails. Which is actually pretty elegant if you think about it. You can learn more about the actual installation steps using the playbooks in the Bifrost documentation at docs.openstack.org/bifrost/latest/install. But here's where the power of Bifrost really comes into play. Bifrost uses what's called an Ansible dynamic inventory plugin. The way it works is we provide it a YAML or JSON file that defines machines, parameters, and settings, which get passed into the playbook. Unfortunately, there's one downside: you have to set an environment variable. This is a design limitation in Ansible, where the only way to pass data into an inventory plugin is through environment variables. Generally most people feed in inventories as static files in one of the supported formats. In our case, we have an extra little feature in our inventory, which I'll talk about in a minute. What Bifrost does is take all the data you provide it and do things like generate configuration drives and inject SSH keys.
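For context, an Ansible dynamic inventory plugin is just a program that emits JSON describing hosts and their variables. The shape Bifrost's plugin produces is roughly like the following; this is a simplified illustration with placeholder values, and the real output carries the full node data:

```json
{
  "baremetal": {
    "hosts": ["node1"]
  },
  "_meta": {
    "hostvars": {
      "node1": {
        "driver": "ipmi",
        "ipv4_address": "192.168.122.10",
        "provision_state": "available",
        "maintenance": false
      }
    }
  }
}
```

The playbooks then target the group (here `baremetal`) and use the per-host variables to drive enrollment, configuration drive generation, and deployment.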
You can actually override things like which image to deploy. So you could say: I have this YAML file, I want these images, I need them separate; go run the playbook, and done. It'll take a little time, which is to be expected; it takes time to move bytes over a network. Now, the inventory file: this is an example, and the parameters mostly mirror Ironic's API. The differences are nics, and under driver_info we have this power parameter. In reality, if memory serves, driver_info and things like instance_info can just be supplied directly. You can also see here we have an IPv6 address. This is the address we want the machine assigned if we were to deploy it, and it's used in the configuration drive generation, because we inject basic networking configuration so that you can go from bare metal machine, to operating system, to "my playbooks can SSH in, execute additional commands, and perform the next setup steps." So in short, I guess the way I'd put it is: magic happens with the inventories. As an example, you go into the playbooks folder, set your environment variable to point at a baremetal.json or baremetal.yaml file, run the enrollment step, and it will add all the machines in that file to Ironic. If you then want to deploy all those machines, you can take that same inventory and run the deploy playbook against it, and it will deploy an operating system onto all of them. Things like which operating system or image to use, or whether images need to be built, are all parameters or settings the user can provide when running the playbooks. Bifrost has a lot of logic in it to handle things like "oh, you need to build a disk image? Okay, cool." These are cases that came up and are logical to support.
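Along the lines of the example being described, a minimal inventory entry and the enroll-then-deploy run might look like this. The field names follow the Bifrost user documentation; all addresses, credentials, and the file path here are placeholders:

```yaml
# baremetal.yml -- one top-level entry per machine
node1:
  name: node1
  driver: ipmi
  driver_info:
    power:
      ipmi_address: 192.168.122.10
      ipmi_username: admin
      ipmi_password: secret
  nics:
    - mac: "52:54:00:aa:bb:cc"
  properties:
    cpu_arch: x86_64
    cpus: 4
    ram: 8192
    disk_size: 100
  ipv4_address: 192.168.122.50
```

```shell
# From the playbooks folder: point the dynamic inventory at the
# file, enroll the machines into Ironic, then deploy them all
cd playbooks
export BIFROST_INVENTORY_SOURCE=/path/to/baremetal.yml
ansible-playbook -i inventory/bifrost_inventory.py enroll-dynamic.yaml
ansible-playbook -i inventory/bifrost_inventory.py deploy-dynamic.yaml
```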
Another neat feature is you can say: I need to redeploy all these machines. That comes in very handy if you're running a lab environment and you've already defined everything; you have playbooks to set up your lab, but the machines aren't in use anymore, and you just want to redeploy everything. That's a possibility as well. But here's the secret with inventories that I mentioned, or hinted at, earlier: you can set the inventory data source to be Ironic itself. When you do this, the inventory plugin queries Ironic, gets all the data about the nodes, and populates parameters based upon the instance information of each machine. The basic parameters available via the Ironic API become parameters you can do logic on. So you can say: I want to deploy all of my available nodes. And really, nothing at this point prevents anyone from taking the playbook, modifying it, and adding additional steps or additional logic; it's all extensible and it's all based upon Ansible. A quick example of our use of this magic inventory feature of setting the source to ironic: you can see in this example that we collect the local facts, and what that's doing is taking all the local information, building all the parameters, and saving them in memory; it's an Ansible thing. The next step runs against all bare metal machines, where it creates a configuration drive if the machine is available and not in a maintenance state. And then it deploys the machines if they're available and not in maintenance. I think I may never have actually used this one, but the idea is there, and it's pretty easy to extend. So, do we have any questions?

Thanks a lot, Julia. Are there any questions? One question I have, if no one else has one: is Bifrost mostly intended to be used for a one-off deployment of a data center or some nodes, or is it also intended to be used to manage a node throughout its life cycle? You see what I mean? Rather than: okay, I have a thousand nodes,
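The Ironic-as-inventory-source trick just described is only a change of environment variable; redeploying everything already enrolled could then be sketched as follows (playbook and file names as they appear in the Bifrost tree, so verify against your checked-out version):

```shell
# Pull the node list and parameters straight from Ironic instead
# of a static file, then redeploy every machine it reports
export BIFROST_INVENTORY_SOURCE=ironic
ansible-playbook -i inventory/bifrost_inventory.py redeploy-dynamic.yaml
```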
I want to deploy them, and Bifrost is the tool I use because it's easy; I can create the playbooks and install it once. But once I need to manage this over a longer time, can I still use Bifrost, or do I need to move to something else in that case?

Here's kind of the beauty of how this works: the playbooks, the plugins, the modules can all be used against a full-blown Ironic deployment. Or you can keep the Bifrost deployment running. That's part of the reason why we added Keystone support, and why we added authentication and TLS support: so that you can leave the deployment in place. What we found, and we found this in telecoms specifically, is that they would use Bifrost to do basic hardware setup and validation, like making sure the machine environment is exactly the way they needed it, and then redeploy with some other installation tool on top of that cluster. And then they would turn off or disconnect the Bifrost deployment, or in the case of a laptop, disconnect from the network, walk away, and go back to the hotel. So these are all possibilities, and we've extended Bifrost a couple of times to enable some of these things. It's not impossible, and we encourage people to use the playbooks and the modules against Ironic itself. The core of the functionality is really Ansible modules, and those are in the OpenStack collection at this point.

Thanks. Any other questions?

Not a question really, but I think it's useful to know... Speak up, I can barely hear you. Okay, let me try. Do you hear me now? Yeah, it's better. Okay, just a useful addition; maybe I didn't hear you mention it. Bifrost has a development mode, so it can be used for testing with VMs instead of DevStack, and I have been doing that for maybe a year already.
Yeah, I was about to ask about this as well, or mention it, because I missed that in your presentation too. What we usually, well, kind of encouraged newcomers joining our team to do, let's say, was to use DevStack and DevStack's Ironic, and when I mentioned this on the channel a couple of months ago, everyone was just like: what? Just use Bifrost, it's so much easier. And everyone is using that now instead. So it's maybe also a good point for newcomers, like Dmitry said, to play around: create this on VMs, you know, patch the code.

That is true. We have, I believe it's install --testenv, and honestly, for years I only fired up DevStack if I absolutely needed to, and the only reason I'm currently using DevStack is because I'm doing cross-service work at the moment. If I'm not doing cross-service work, I'm developing against Bifrost.

Right, and to bring it up: I forget, Dmitry, it's two commands or three, something like that, right? I mean, it's very, very easy, one command. There is one more command to create the testing VMs. Yeah, you run bifrost-cli testenv, and then you add the --testenv flag to install, just as Julia mentioned. Right, I did this a couple of weeks ago; it's pretty quick and easy to set up and get started. I forgot the VM creation was a separate step.

I have a question: if one needed to add an additional service, let's say you wanted to use Swift, for example, would that be easily accommodated with a Bifrost setup, or are you just better off going to DevStack? So if you needed to do it for purposes of cross-service development, I would recommend DevStack. If you needed it for just data storage, there are some alternatives. You could also write a playbook to install Swift; we don't have one out of the gate.
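The two-or-three-command development flow being recalled here is, as the Bifrost CLI documents it, roughly:

```shell
# Create local test VMs that stand in for bare metal nodes...
./bifrost-cli testenv

# ...then install Bifrost wired up against that test environment
./bifrost-cli install --testenv
```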
The only reason we really added Keystone was that we had a lot of interest in operators running Bifrost for long periods of time, and in having the ability to orchestrate switch configuration. Some operators want to just be able to take their laptop into a data center they just built, plug it in, have the entire thing defined as a YAML file, and ultimately be able to configure all the switches and all the networking, then just detach it and walk away. That didn't go anywhere, or I should say it worked well as a demo, and we didn't really have the time to continue pursuing Neutron support.

Thank you. Are there any more questions for Julia? No? Okay, in that case, excellent; thank you everyone, and thanks a lot, Julia.