Hi, good afternoon, everyone, and welcome to "Bifrost: I want to deploy". My name is Miguel Mejado and this is my colleague Dmitry Tantsur. We are both principal software engineers at Red Hat. Some of you may know me from before. It's been a long time since I've been at a face-to-face OpenStack summit, so I'm really glad to be back here in Berlin. Basically, I want to tell you a little bit about what Bifrost is, what Ironic is, and how we are using those to deploy a lab. I'll let Dmitry go ahead first with the Ironic and Bifrost part, and then I've got a demo; I hope it works. I hope you will learn something and be able to contribute back. So, thank you, Dmitry.

Thank you. Okay, let's start with a short introduction to Ironic. I actually hope that many of you already know what it is, but let's recap anyway. Ironic is a bare metal provisioning service under the OpenStack umbrella. Ironic can be used as a bare metal back-end for Nova, providing you the same API for virtual machines and for bare metal machines. There is a project to use Ironic inside Kubernetes, called Metal3; we did a presentation earlier today about that. And it can run stand-alone, for example with Bifrost. Ironic is a mature project, more than eight years in production; public clouds, research institutions, small labs and many other folks use Ironic. There were awesome talks from CERN about how they use Ironic; unfortunately they were earlier, but check out the recordings.

Just so we're on the same level, I'll repeat some concepts from Ironic so that you understand what we're talking about later on. A node is a representation of a physical machine: you use the Ironic API to create nodes and run actions on nodes, which correspond to actions on the physical machines. Ports are network interfaces.
Port groups are a representation of bonding. We also mention a thing called a config drive, which is a Nova concept: it's essentially a way to provide information to your instance. In our case, we write it to a separate partition on the machine.

Ironic consists of many components. The central one is a RESTful API, in the usual OpenStack spirit. We have conductor instances which handle the nodes. And we have an in-ramdisk agent, which you boot on a node to perform actions; it's called ironic-python-agent because it's for Ironic and written in Python. There are also a lot of extra components, such as ironic-inspector, which is a hardware introspection service.

Everything Ironic does, it does through drivers. There is a whole family of different drivers, organized as hardware types and hardware interfaces; I won't go into detail about that, you can find a lot of information online already. We support a huge variety of technologies: standard generic ones like IPMI, the old-school hardware management protocol; Redfish, the next big thing: HTTP, JSON, everything you like; and limited support for SNMP, for power actions. And we have quite a few vendor drivers: just to name a few, Dell, HPE, Fujitsu, Huawei and Lenovo have specific drivers in Ironic that offer more actions than the standard ones provide. We also have a thing called ironic-staging-drivers, which is a repository for unsupported drivers, and Bifrost actually supports installing those too.

Let's talk about stand-alone Ironic a bit. Ironic can be used with no OpenStack services, or only some of them: you can, for example, plug Neutron, Glance, Cinder, Swift or Nova in and out. Over time we have developed some specific features for the stand-alone case. This includes HTTP basic authentication, support for using static or external DHCP, and support for fetching images not from Glance but from HTTP, HTTPS and file locations.
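As a sketch of that stand-alone workflow, the session below uses the `baremetal` CLI from python-ironicclient to enroll a Redfish machine and deploy an image served over plain HTTP. All addresses, credentials, image locations and the system ID are placeholders, not values from the talk:

```console
$ export OS_CLOUD=bifrost
$ baremetal node create --name node-0 --driver redfish \
      --driver-info redfish_address=https://192.0.2.10 \
      --driver-info redfish_system_id=/redfish/v1/Systems/1 \
      --driver-info redfish_username=admin \
      --driver-info redfish_password=secret
$ baremetal node set node-0 \
      --instance-info image_source=http://192.0.2.1/images/my-image.qcow2 \
      --instance-info image_checksum=http://192.0.2.1/images/CHECKSUMS
$ baremetal node manage node-0 --wait
$ baremetal node provide node-0 --wait
$ baremetal node deploy node-0 --wait
```

Note that `image_source` here is a plain HTTP URL rather than a Glance image, which is exactly the stand-alone feature described above.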
And some others. With this short introduction, I'm going to talk about Bifrost. The name comes, obviously, from Norse mythology, although I know we pronounce it the wrong way; I think it's actually said differently. But who knows.

So Bifrost is a set of Ansible playbooks to first install Ironic itself and, optionally, some additional components like ironic-inspector, Keystone or the staging drivers. It's fully powered by Ansible, and it's also a set of playbooks to use Ironic. So it's all in one: you install, you use; you'll see how easy it is when we show the demo. Since recently we also have a small CLI that makes it easier to run the Ansible playbooks in the most common cases. We prepared some good defaults for you, in case you don't want to figure out all the hundreds of options we have.

Operating system support currently includes CentOS Stream 9. We just deprecated Stream 8 because of Python 3.6, which OpenStack no longer supports; so if you need Stream 8, use Bifrost Yoga. There's also Ubuntu LTS, currently Focal (we are working on the next one, whose name I forgot), and Debian stable, whatever the latest stable is. The idea is that you literally run a playbook and you go from zero to a data center with, ideally, one playbook.

A bit of technical detail on how it's done, especially compared to a normal OpenStack installation. We use a combined API-plus-conductor process: in OpenStack you usually separate the API service from the workers (conductors, engines); we didn't do that here, it's just one Ironic process. There is no RabbitMQ; we already figured out in the Metal3 talks that people don't like RabbitMQ, so instead we use JSON RPC. In the case of Bifrost, we use NGINX for several tasks: for serving iPXE itself, for virtual media (which is the next big thing with Redfish), for serving instance images, and for TLS termination. Then we have dnsmasq, which we also use for two purposes: for DHCP and for TFTP.
And we don't use it for DNS, which is a bit ironic, isn't it? We use static DHCP, and we rely on being able to use the BMC to set the boot device to the local disk, so that the machine doesn't boot into the ramdisk again.

As I said, we support some additional services. ironic-inspector is part of the Ironic ecosystem, so obviously we support that, but we also support Keystone, in case you want a bit more than HTTP basic authentication, or you want to use a service catalog. We support ironic-prometheus-exporter: we don't support installing Prometheus itself, but we do support the exporter, which can feed metrics from the BMCs of your hardware to your Prometheus.

On top of what Ironic can do natively through its API, we have playbooks that add some extra features. We integrate with diskimage-builder for building your instance images: you provide some parameters to Ansible and the image is built for you. That applies to both the IPA ramdisk image and instance images. Of course, if you don't want to build your own IPA image, we publish prebuilt ones, and Bifrost can use those. We automate config drive building: it can detect your local SSH public key and embed it automatically, and you can also provide static networking parameters, which Bifrost will pick up and use. This is done through the inventory; we'll show you the inventory later. And the most exotic feature: we can also do optional DHCP allocations in dnsmasq. When your inventory has MAC addresses and desired IP addresses, we can configure dnsmasq so that those MAC addresses get exactly those IP addresses.

Now, the sad things. IPv6 support is absent. It should be easy: IPv6 works in Ironic, we just need to do something about Bifrost, because there's a lot of IPv4-only code everywhere. We're kind of multi-node ready, but there are no playbooks to install Bifrost on several nodes; by nodes here I mean the nodes on which Bifrost is installed, not Ironic nodes, sorry for the confusion.
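To make the static-DHCP idea concrete, here is a minimal sketch of the kind of dnsmasq host mapping that gets generated from the inventory. The file path and addresses are made up for illustration; this is not Bifrost's actual template, just the `dhcp-hostsfile` format dnsmasq understands:

```shell
# Sketch: map known MAC addresses to fixed IPs, the way the optional
# DHCP-allocation feature configures dnsmasq. Illustrative values only.
hostsfile=/tmp/bifrost-demo.hostsfile
cat > "$hostsfile" <<'EOF'
52:54:00:aa:bb:01,192.168.122.101
52:54:00:aa:bb:02,192.168.122.102
EOF
# dnsmasq would load this with a config line such as:
#   dhcp-hostsfile=/tmp/bifrost-demo.hostsfile
echo "entries: $(wc -l < "$hostsfile")"
```

With that in place, a node whose NIC has MAC 52:54:00:aa:bb:01 will always lease 192.168.122.101.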
Bifrost is a single controller of things. And we would love to have optional support for OpenStack projects, or maybe non-OpenStack projects, for example Neutron, but we don't have that: it installs Ironic only, so far. If you want to contribute, come talk to us. And on this positive note, I'll pass the mic back.

So I want to give you a little bit of context. Basically, I'm a networking guy; most of my background is in networking, so why the hell did I start with Ironic? Well, mainly because I'm lazy. I was working with some people from a lab and they told me: you've got your servers, you want to deploy them, so there you go, go install an OS on every one of those by hand. Well, I tried to do that. It felt miserable, because there are a lot of options: UEFI, BIOS, no, you've got to go back again and change that specific setting. At some point I just gave up and decided to give Ironic a try, which I had known from OpenStack for a long time. But again, I'm lazy: I really didn't want to install a full-fledged OpenStack, I didn't even want a full-fledged Ironic with all the services running. So I decided to look for something simpler, which is what I'm going to show you.

As Dmitry was saying before, with Bifrost you can do things in two ways, and I'm going to show you both. The first option, if you're lazy (you may even be lazier than me): just use bifrost-cli. You don't have to know anything; you basically just launch that command and it will do most of the things for you.

The caveat, again: when I first started this, I thought it was working on CentOS Stream 8, and then I found that pip didn't install my libraries, and I found myself in a living hell of debugging pip and figuring out what was going on. Again, that was because Python 3.6 support had been deprecated.
So save yourself that hell and just go with CentOS Stream 9 or later; anything newer should work too, although it's probably not officially supported. Python 3.6-plus, let's say.

Regarding the drivers, bifrost-cli will use IPMI and Redfish as the defaults. If you want to install some fancier stuff, such as iDRAC, you can do so, but it will require some minimal intervention, let's say.

So what does this bifrost-cli command do? It installs the Python dependencies and then installs the Ansible collection. In case you don't know what an Ansible collection is, it's basically a set of roles packaged together. One of the things we could improve at some point in Bifrost is to allow the usage of Ansible execution environments, which would probably simplify the whole dependency story a lot. An execution environment is basically a container image with all the dependencies built in, so you'd be able to trigger it from wherever you want.

Again, let's say you just want to test bifrost-cli but you don't even have a bare metal server. You don't need one. You can use a tool called sushy-tools, which basically runs a virtual BMC for you, and you can run bifrost-cli in a way that creates two test nodes for you. You just need a couple of gigabytes of RAM; this is not Kubernetes, so no 100-plus gigabytes. I have a laptop with maybe 8 gigabytes of RAM, which I guess is an industry standard, so I really hope any contributor has at least 8 gigabytes of RAM in their laptop.

So, recapping. Say you know nothing; you just want to run Bifrost because you think it's cool, and you want to test Ironic on your system. How would you do this? You just have to know a little bit of Git and clone the repo. Well, I may be assuming that everybody here knows what "export OS_CLOUD" means, but that might not be the case.
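For reference, this is roughly how sushy-tools can be run on its own; Bifrost's test environment wires this up for you, so treat the exact package names and flags below as assumptions to check against the sushy-tools documentation:

```console
$ pip install sushy-tools libvirt-python
$ sushy-emulator --interface 127.0.0.1 --port 8000 --libvirt-uri qemu:///system
```

Once it's up, the emulator exposes a Redfish API (e.g. /redfish/v1/Systems) that presents your local libvirt VMs as if they were bare metal machines with real BMCs.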
Basically, it means that you are using the OpenStack client in this case. You have a file, which Bifrost will generate for you later on, called clouds.yaml: a YAML file containing the configuration for your cloud. There's no OpenStack here, but you still need this. You're also supposed to source the virtual environment, assuming you're using one.

Okay, there we go: bifrost-cli install. Here we are going over the more complex route; I'm going to show you a simple run in the demo, assuming it works. You can specify the network interface and the DHCP pool that dnsmasq is going to use.

Now let's assume you want to do something a little bit fancier. In the end, this is all Ansible, JSON, inventories. You have one of those JSON inventory files, which you can modify in any way you want, and you just pass it to Bifrost. You can choose the driver: here I'm showing IPMI, but you could use Redfish, iDRAC, or other, fancier drivers. You could even configure RAID if you want to, or do whatever else the driver you're using supports.

This basically works in two (or two and a half) steps. Once you've got all of this, you need to enroll your nodes: that means giving Bifrost access to control them, so it can then deploy them. And if something goes terribly wrong, you can un-enroll them. There are some caveats around maintenance mode and so on, which I think could be simplified, but maybe you won't even need that, because you may be able to just redeploy.
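Here's a minimal example of such a JSON inventory, written out and sanity-checked. The field layout loosely follows Bifrost's documented baremetal.json shape, but names and nesting can vary between releases, and every value below is a placeholder:

```shell
# Write a one-node Bifrost-style JSON inventory (placeholder values).
cat > /tmp/baremetal.json <<'EOF'
{
  "node-0": {
    "name": "node-0",
    "driver": "ipmi",
    "driver_info": {
      "ipmi_address": "192.168.122.1",
      "ipmi_username": "admin",
      "ipmi_password": "secret"
    },
    "nics": [
      {"mac": "52:54:00:aa:bb:01"}
    ],
    "properties": {
      "cpus": "4",
      "ram": "8192",
      "disk_size": "40",
      "cpu_arch": "x86_64"
    },
    "ipv4_address": "192.168.122.101"
  }
}
EOF
# Validate that the file is well-formed JSON before feeding it to the playbooks.
python3 -m json.tool /tmp/baremetal.json > /dev/null && echo "inventory OK"
```

For a Redfish node you would swap the ipmi_* keys in driver_info for the corresponding redfish_* ones.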
So that's the first step, enroll. Then deploy: don't mind the parameters being passed to the Ansible playbook, because you don't really need them, but at some point the nodes move to the deploying state, then you basically wait a little bit until they finish and call back, and when they call back you can see them as active. By the way, there's also a parameter you can pass to bifrost-cli, which is --wait if I recall correctly, that just waits for them to be completely deployed.

That said, this might not be enough for you, because you run your own playbooks and you want to customize your environment: you don't just want to deploy a system, you want to do things with it. So again, here comes Ansible. Bifrost installs Ansible for you, so you don't need to do anything fancy; it also uses ansible-galaxy to install every needed collection, and it configures Ansible for you. So don't think about ansible-base or whatever, it's all done for you. But the power of this is that if you already have a couple of playbooks that you use on your own systems, you can just reuse them, or even make your own Ansible collections and deploy with those. And trust me, that's much, much easier, and (I'm lazy) time-effective for deploying your servers.

How would you run these things? This is pretty much the same as what we discussed before with bifrost-cli, but running ansible-playbook directly. First you export the variable pointing at the file where you're storing your configuration, the JSON inventory; then you run enroll and deploy. And as I said before, if something really goes wrong and you want to debug (or maybe you don't want to debug, you just want to retry), you just relaunch redeploy-dynamic. Demo time.
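Run directly with ansible-playbook, that flow looks roughly like this. The playbook names, the dynamic inventory script and the BIFROST_INVENTORY_SOURCE variable are real Bifrost conventions; the venv and file paths are assumptions for this sketch:

```console
$ . /opt/stack/bifrost/bin/activate          # the virtualenv Bifrost creates
$ export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.json
$ cd bifrost/playbooks
$ ansible-playbook -i inventory/bifrost_inventory.py enroll-dynamic.yaml
$ ansible-playbook -i inventory/bifrost_inventory.py deploy-dynamic.yaml
$ # if a deployment goes wrong and you just want to retry:
$ ansible-playbook -i inventory/bifrost_inventory.py redeploy-dynamic.yaml
```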
So, okay, let's do a simple demo, because I've talked a lot, but I think it's worth showing something; otherwise you may think that I'm basically fooling you. Let's see. Let me know if the font is big enough for you. Okay, I'm going to start from here. This is a CentOS Stream 9; again, don't use 8, it's not supported, it will fail miserably and you'll be terribly frustrated. Don't.

This is super fast; thank you, folks, for the network here at the conference. So we are basically going to enable the EPEL and EPEL Next repos. Let's check whether they're installed; okay, they are. Now we're basically going to do a clone of Bifrost. Go over there. Okay, so again, say we'd like to create a minimal setup for testing: let's say you want to join us, you want to be a developer, you can use this to create a quick setup and test things out. It will define two VMs using sushy-tools and install those, and later on it will install the prerequisites, then Bifrost, and then Ironic itself, so you'll have the API and the other components running.

Okay, so the test environment is now set up; let's run the install. You see, this is pretty much foolproof, because it tells you every step you need to follow; if I could do it, trust me, you can do it yourself too. Okay, let's check that Ironic is running; it is. And again, this is a little bit different from OpenStack: if you were using a full-fledged OpenStack, you would see many more components. That's not the case with Bifrost, because we are basically trying to minimize the workload here; it's kind of a slimmed-down version of Ironic.
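Condensed, the demo up to this point is roughly these commands, as run on a CentOS Stream 9 box. Exact flags and package names are assumptions that may differ between Bifrost releases:

```console
$ sudo dnf install -y epel-release epel-next-release git
$ git clone https://opendev.org/openstack/bifrost
$ cd bifrost
$ ./bifrost-cli testenv            # defines two libvirt VMs behind sushy-tools
$ ./bifrost-cli install --testenv  # installs Ironic and points it at them
```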
Okay, so we've now got two, let's say, fake bare metal machines. They are not running yet, but the definitions are there, and we've got sushy-tools running too, so we are basically going to hook into the BMC. Activate this, export OS_CLOUD. As we were saying before, this comes built in with two different drivers, which are IPMI and Redfish. If you want to add any additional drivers, you can pass options to the Ansible playbook to enable them, even the staging ones.

Someone from the audience asks whether the drivers in staging are experimental or supported. They're considered experimental, yeah. Okay, so maybe don't try those.

So let's see what an inventory looks like. It's pretty much what I was showing you before. You could use host groups, or specific properties, but in the end you are basically defining the machines: the configuration for IPMI, passwords, SSH keys and so on, and you can add a lot of specific configuration should you want to. So let's enroll these things. I'm not using --wait here; if I were using that option, Bifrost would hold on and wait until the whole deployment completed or failed (hopefully not failed, because we are super-duper developers). That's not the case here, so let's see what's going on. You see, now we've got two bare metal machines available in Ironic, so you can just go ahead and deploy an OS onto them. As Dmitry mentioned before, we assume some defaults for that, but you can use any images you'd like, and you can modify the defaults as well. We are just doing something simple here, so let's kick off the deployment. Let's see what's going on. Okay... oh, they are deploying already.
Blazing fast. At some point this should just finish. It is doing something: IPA, the Ironic Python Agent, is running there and kicking off actions, and Ironic is basically waiting for IPA to call back so the nodes go active. And there, they are active. So, well, that was the quick demo we wanted to show you.

But as Dmitry was saying, there's a lot of work to do here, things you could contribute to if you'd like. One thing I would really like to see: these playbooks were written before Ansible execution environments were even designed, so it would be really cool to have an execution environment that you could just run, containerized (Docker, Podman, whatever you want to use), to drive all of this. So if you are using Bifrost in your environment and you've already got something like that done, we're really looking forward to you contributing it back. And again, dependencies: I hit the dependency issue with Python 3.6 here. That shouldn't happen with Ansible execution environments, because the Python version would be effectively hard-coded.

In a session earlier today, Dmitry and Iury, who is basically the PTL of Ironic (so you can blame him for everything), were discussing Python packaging. Even though I love Python, I'm also a Go developer, and I have to say I was totally happy, lazy as I am, when I discovered that I could just run "go mod vendor" and forget about dependencies: pinning minimum versions, exact versions and so forth. That's a pretty complex system that could largely be avoided if you just use your own execution environment. I think that would be really cool for newcomers, because they wouldn't have to care about any of that.

That said, I guess that was it. Questions so far? Could you pass the mic there, please?
Is there currently a way to have the nodes auto-discovered by just PXE booting, and then enrolled in Ironic, without providing an inventory?

Sorry, what did you say? So, the question was about auto-discovery through PXE booting. Yes, there is such a way: it's a mode of ironic-inspector, and there's an option to enable it in Bifrost; it's not on by default, but it works. The usual problem is knowing the BMC credentials. Ironic will enroll the node for you and populate the introspection data, but to actually manage it, you need those credentials. Some vendors have well-known defaults; in that case, we can write things called introspection rules, a mini-language inside ironic-inspector that defines rules to execute on freshly introspected nodes. You can say: if this node is newly discovered and the vendor is Dell, set the credentials to root/calvin, for example. If there isn't an easy solution to the credentials problem, it will be harder. But yes, it is supported. Thank you.

The other thing, then, because I come from the MAAS, Canonical world: there, when the nodes just PXE boot, they get discovered, and then, as you say, they just run ipmitool and set the username and password for the MAAS service. You can do something similar, I suppose, with Ironic?

So the question is about configuring BMC credentials automatically, right? I think that's what MAAS is doing. We had that, and we dropped it. There are a lot of interesting issues around it, starting with the fact that it only works with IPMI; with Redfish you cannot reliably do that, it works only sometimes. Also, something we're trying to avoid in Ironic is being primarily responsible for your BMC credentials; that's not a role we really wanted to take. We could probably integrate with Barbican, but we're not quite there. And there were also issues with reliability: if it breaks midway, your hardware is left in a weird state.
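As an illustration of such an introspection rule, the following writes and validates one. The rule format is ironic-inspector's; the vendor string and the root/calvin credentials are the well-known Dell defaults mentioned above, while the file path and exact field paths are assumptions to verify against the inspector documentation:

```shell
# An ironic-inspector introspection rule: on newly discovered Dell nodes,
# set the well-known default BMC credentials. Illustrative values only.
cat > /tmp/dell-defaults-rule.json <<'EOF'
[
  {
    "description": "Default IPMI credentials for discovered Dell nodes",
    "conditions": [
      {"op": "eq",
       "field": "data://inventory.system_vendor.manufacturer",
       "value": "Dell Inc."}
    ],
    "actions": [
      {"action": "set-attribute",
       "path": "driver_info/ipmi_username", "value": "root"},
      {"action": "set-attribute",
       "path": "driver_info/ipmi_password", "value": "calvin"}
    ]
  }
]
EOF
# Validate the rule file before importing it into ironic-inspector.
python3 -m json.tool /tmp/dell-defaults-rule.json > /dev/null && echo "rule OK"
```

A rule file like this would then be imported into ironic-inspector (for instance with the introspection rule import command of the CLI) so it runs against every freshly discovered node.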
So we made a conscious decision not to do that. Thank you. But you can write a plugin for the Ironic Python Agent to do it; if you're really curious, I can explain how to do that later on. Thank you.

Can I use something else than libvirt, like OpenStack VMs? For the servers, when you created the two VMs in libvirt, can I plug my OpenStack credentials in somewhere?

Well, I created the VMs with libvirt, and I'm using sushy-tools, which is a tool that creates a fake virtual BMC; normally this is meant to be pointed at real bare metal servers. I assume it should somehow work, but I didn't test creating VMs with Nova and attaching sushy-tools to them, to be honest. Give it a try.

So if I wanted to use an OpenStack cloud for running my fake bare metal, I could just use sushy-tools directly?

So, sushy is a library and sushy-tools is an emulator. sushy-tools is what provides the fake Redfish BMCs here, backed by libvirt VMs, but you can change the backend to be OpenStack: sushy-tools has an OpenStack backend, so it supports using Nova instead of libvirt. Bifrost doesn't expose that and probably doesn't configure it by default, but with some hackery you can do it. Thanks. But please keep in mind that's only for testing; Bifrost is for bare metal.

Is there a way to upgrade the version of Ironic with Bifrost or not? If you want to upgrade a major version of an existing deployment of Ironic installed using Bifrost, can you do it with the tool?

Yeah, pretty much: in the spirit of Ansible, every time you run install, it installs the latest version. So if you have an existing installation, for example installed using Yoga, you pull a newer Bifrost and run install again, and it will update everything. Okay, that's perfect, thank you.
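So the upgrade path is simply re-running the install against a newer checkout, along these lines (illustrative commands, not an exact recipe from the talk):

```console
$ cd bifrost
$ git pull                  # or check out the release branch you want
$ ./bifrost-cli install     # re-running install converges to the new version
```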
We even do upgrade testing in the CI, and we even have some workarounds in our playbooks for upgrade issues. So that's a supported case, yes.

Sorry, one thing that I think wasn't mentioned: this runs on Zuul, in the CI, so in the end every Ironic patch really runs Bifrost in the CI. So I think it's pretty much bulletproof.

Any more questions? Okay, then thanks all for attending, and enjoy the rest of the event.