My name is Julia Kreger. I work on the Ironic project. I am not the PTL, I'm going to stress that right now, but I'm here to give the project update. We have the former PTL in the room also, so if we have any questions... actually, two former PTLs. You snuck in. So we can get questions answered if you have them, and feel free to stop me at any point in time and ask. A quick overview, suggested by the foundation: what does Ironic do? It's the bare metal service that we can use to deploy bare metal servers: the set of supporting tools and libraries that facilitate deployment to physical machines, as opposed to virtual machines. And we have a cute little bear as a logo, with drumsticks. Well, pixie boots, kind of; everyone's calling him Pixie Boots.

A little bit of background on the project. It was started during the Havana cycle. During this last cycle we had 183 contributors to the project, up from 175 contributors the previous cycle; the cycles before that had 131, 121, and 100 contributors. So Ironic continues to grow from a contributor standpoint. From the user survey adoption numbers, among deployed clouds, 9% are running Ironic in production and 13% are testing it. Essentially, this tells us that a good portion of the user base is interested in using Ironic for something, most likely deploying bare metal servers.

In Pike, what we are going to be seeing is Redfish support, which landed last week. It's basic support for powering on and setting the boot device; as time goes on, that will evolve. We have a link to the documentation that's been posted. The other major feature that you'll start to see from this cycle is called driver composition, which is the ability to define the specific interfaces you wish to use for your hardware.
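The "power on and set the boot device" support mentioned above maps to two operations in the DMTF Redfish REST protocol. As a rough illustration only (this is not Ironic's driver code; the paths follow the Redfish schema, and the helper names and system ID are made up for the sketch):

```python
import json

# Sketch of the two Redfish operations the basic support covers: a power
# reset action and a one-time boot-device override. Helper names and the
# system ID are illustrative, not taken from Ironic's implementation.
REDFISH_ROOT = "/redfish/v1"

def reset_action(system_id, reset_type):
    """Return (path, body) for a ComputerSystem.Reset action, e.g. "On"."""
    path = "%s/Systems/%s/Actions/ComputerSystem.Reset" % (REDFISH_ROOT, system_id)
    return path, json.dumps({"ResetType": reset_type})

def boot_override(system_id, target, enabled="Once"):
    """Return (path, body) to PATCH a one-time boot source override, e.g. "Pxe"."""
    path = "%s/Systems/%s" % (REDFISH_ROOT, system_id)
    body = {"Boot": {"BootSourceOverrideTarget": target,
                     "BootSourceOverrideEnabled": enabled}}
    return path, json.dumps(body)
```

In practice Ironic talks to the BMC's Redfish endpoint over HTTPS; the sketch only shows the request payloads those two calls carry.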
Traditionally, we had a driver that would represent, say, iSCSI deployment, PXE booting, and some particular power interface. Now we just have hardware types: a hardware type plus a number of selectable interfaces, since some of these hardware types support multiple interfaces for power control. In that case, you might have a policy that you cannot run IPMI, but you could probably run Redfish, so that might be useful for some operators in the room. We can also now attach VIF interfaces to running machines, that is, after they're deployed. This seems like a minor change, but it is actually fairly major, in that it allows you to use additional network interfaces on a node while it's already online. [Inaudible audience question about the VIF interface.] So the network interface actually performs the port attachment through Neutron; it's calling Neutron to achieve the attachment. Did everyone get that? OK, moving on.

The additional features for Pike that we're hoping to get out the door right now are rolling upgrades and boot from volume. Rolling upgrades will support zero-downtime upgrades; that work is presently in flight and expected to land this cycle. Boot from volume is a little more complex, since it's integrating Cinder into the life cycle of a machine, and it is a little more at risk of landing this cycle. It might not land; some of it has already landed, but it's a work in progress.

Here we have an example of driver composition, and this is the biggest change a user will notice. You can see we now have a boot interface, a deploy interface, a console interface, a driver (which is actually the hardware type in this case), an inspection interface, management, network, and power interfaces, a RAID interface, as well as a vendor interface. Essentially, this allows you to build a custom node out of varying drivers.
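On the conductor side, driver composition is expressed by enabling hardware types and the interfaces nodes may select from. A sketch of what that looks like in ironic.conf (option names as introduced around Pike; the values are illustrative, not a recommendation):

```ini
[DEFAULT]
# Hardware types this conductor will register (replacing classic drivers)
enabled_hardware_types = ipmi,redfish

# Interfaces a node may select; the node's hardware type must support them
enabled_power_interfaces = ipmitool,redfish
enabled_management_interfaces = ipmitool,redfish
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = iscsi,direct
```

A node then picks one interface of each kind from whatever its hardware type supports and the operator has enabled.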
So if you don't want to use the IPMI driver, or you know you have to use the IPMI driver and not the iLO power driver, then you can swap it out. It's up to the operator. Any questions?

For VIF interfaces, this is an example of attaching a VIF. You can see that we have a port assigned to a node. With this command we've listed the VIFs; that's the UUID for the VIF, here's the UUID for the port, and the port status is down. In this screenshot we're attaching the VIF, and down here you can see that the same port, the same port ID, is now active. Previously, Ironic would tell Neutron to remove the network connection and then tell it to put the network connection back in place. We use this in our cleaning logic and deployment logic, so that as a node moves through its life cycle the ports are removed and plugged back in. Now a user can do it too.

Moving forward, we have themes. Right now we have a major focus on resiliency, manageability, and modularity, as well as interoperability. The user interface is more of a minor focus; that's mainly a resource issue at this time. For Queens, we're not 100% sure what we're going to see. We're hoping the major focus will be interoperability, manageability, and user experience, but it's a little too early to tell. In terms of enhancements for Queens, most likely we will see routed network support, and physical network awareness, which routed networks need (I should have listed these in reverse, sorry). One allows us to know how the network is architected or interconnected; the other allows us to understand and facilitate network connectivity for deploying nodes in that environment. And then we're hoping to get rescue mode picked back up for Queens; it unfortunately fell victim to the OSIC impact. For Rocky, we haven't talked about it yet. If anyone has any suggestions, we're absolutely here to listen.
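The VIF attach flow shown in the screenshots earlier corresponds roughly to the following OpenStack client session (node name and port UUID are placeholders; exact output columns vary by client version):

```console
# List VIFs currently attached to a node (placeholder names throughout)
$ openstack baremetal node vif list node-0

# Attach a Neutron port (the VIF) to the node; Ironic calls Neutron to
# perform the actual attachment
$ openstack baremetal node vif attach node-0 <neutron-port-uuid>

# The corresponding Neutron port should now move from DOWN to ACTIVE
$ openstack port show <neutron-port-uuid> -c status
```

Detaching works the same way with `openstack baremetal node vif detach`, which is what the cleaning and deployment logic effectively does internally.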
And we do need your help. Ironic is an operator-centric project. Reviews from operators are just as vital to us as reviews from developers; we gain insight from your operational experience, and that helps us make better decisions and plan more efficiently. The loss of OSIC has hurt us, like many other projects, and right now we don't know the full impact; we probably won't know the full impact for a couple of months. But we are re-prioritizing so that we can attempt to get more done, even though we've lost some people in the process. Another thing we could use help with is third-party CI. And it is honestly amazing what you can see happen when resources are donated to developers who don't otherwise have them. So if anyone wants to volunteer some lab time to an Ironic developer, that would help us quite a bit. Any questions?

The question was what kind of lab time is needed, because the microphone probably didn't pick it up. I think it would vary based upon the network architectures in use or desired. Sam could probably use a very complex network for a couple of days. I could probably use different controller management interfaces, things that we don't normally get from having a server in our closet at home, which is what most of us have if we have anything beyond virtual machines. [Audience:] If you had a list of needs, like a shared Google Sheet or something, maybe some of us could find a way to try and help you out with that. That would be a good idea. We should do that. Thank you.

[Audience:] Hi. About driver composition: do you have to define each and every interface for each and every node, or do you define a kind of node type and then apply it to the nodes? You define a node type instead of a driver. The node types have a list of defined sub-interface drivers, or interfaces as we're calling them, that the hardware type is known to support.
And depending on whether those interfaces are enabled in the conductor or not, it will choose the available drivers and use them. Yes. OK. [Audience:] Another question. You talked mostly about pure Ironic functionality. Are there any big new changes in IPA or the inspector? In inspector, I'm not aware of any. Regarding IPA, which was the other item asked about, I believe we will not be seeing any new features in IPA any time soon. Any more questions? Well, I feel like I have a captive audience.

You may have noticed in IPA that someone was trying to change the default image from TinyIPA to something else. Chris Smart, I believe. The name? Yeah. There are several ways of building an ironic-python-agent image today. Two are supported in the ironic-python-agent repository: TinyIPA, which is a Tiny Core Linux based distribution, and a CoreOS-based distribution. Both have a number of downsides, and other methods are unsupported because we don't test them. You can also use diskimage-builder to produce an ironic-python-agent image, which gives you a few more options. Basically, C-Smart on IRC had the idea of using available tooling to build a distribution for the build root that produces a small, extremely lightweight ironic-python-agent image, and it avoids a number of issues with the other distribution methods. Unfortunately, I believe we lost him to other things. I think all the code is linked at the end of the slides, so if someone has cycles to put into it, I think it would be a great contribution to the Ironic community. Jim, do we have a list of things that have been dropped? OK. Any other questions?

[Audience:] I just started with Ironic and have a couple of more operational questions, I guess, since nobody has any other technical questions, it seems. One of the things I've noticed, using the networking-generic-switch setup:
I've noticed that if I define a machine with two network interfaces, but then the user creates a server that only defines one, it sometimes leaves the other network interface in the previous VLAN. Is that known? So that is essentially going to be a known thing; networking-generic-switch is really just intended for testing, so you probably shouldn't rely on it in production. There are also some things we need to look at with networking-generic-switch to make it a little friendlier and give a little more control over that, but that's going to require some planning on our part to figure out where the bounds are, because we should be asserting expected state, and maybe there's a middle ground for bare metal that we need to think about. [Audience:] OK. So what would be a better option to use? Well, it's just our first lab, so that's fine. Different question: how do you keep the end users out of in-band IPMI? Some machines have a BIOS setting to turn off that access, so you want to be able to do that, basically. [Audience:] That's pretty much what we've figured out. (I'm repeating the exchange for anyone hard of hearing; same question, but that's OK.) So one way to mitigate that, if you don't have hardware that lets you turn it off, is, after you've deployed a user on the machine, to recognize that they might try to reach the BMC and filter that traffic at the switch. [Audience:] I assume you have a separate switch for your public bare metal network than for your management network, and you still put the filter rules on that switch. Yeah, I guess. I was more concerned about basically keeping the user of the system from accessing power control. Thanks. Any other questions? Well, thank you, everyone.