Hello everyone. My name is Julia Kreger, and this is the Ironic project update for November 2018. So what is Ironic? Ironic is the project providing bare metal services and functionality as part of OpenStack. It can be used along with other OpenStack services or in a standalone mode.

I first want to talk about the adoption of Ironic, because this was noted in the keynote, and I think these are actually very powerful things to understand and keep in mind. Looking at November 2017, we had 19% of deployments with Ironic in production; the year before that, it was 9%. Last year, we had 11% of deployments testing Ironic; that has gone to 9%. Meanwhile, we've grown our usage base from 19% to 23% of deployments in the user survey, which is a huge change. When we look at the deployments with Kubernetes in the user survey data, 37% of those running Kubernetes on OpenStack are doing it on bare metal with Ironic. When we look at just those testing in that same scenario, it's 10% of those deployments. So if you step back and think about it, Ironic is essentially engaged in half of the deployments that involve Kubernetes on OpenStack. What was that? No, we're not going to duplicate Nova. Oh, that's awesome.

Okay, velocity. One of the things I looked at when I started pulling statistics is the number of commits from operators, specifically from Pike to Queens to Rocky. What we can actually see is that in the Pike cycle, we had almost 1% of commits coming from pure operators. In the Queens cycle, that grew to about 6% or 7%, and in the Rocky cycle, 13% of Ironic commits were coming from operators. Not organizations selling products, but actual operators using Ironic every day to solve their problems. And if we look at the review velocity for the community over the past several cycles, we can see that we do have dips in our review velocity.
Usually those are around holidays and summits, when people tend to take vacations. This cycle, in Stein, we still have a decent review velocity. You can see that the actual peak over here throws the numbers off a little bit, so we're probably between 150 and 200 reviews right now. It's not horrible. I personally would love to see more, but I'll take what I can get. Overall, given that we always have some dips in our review velocity, this seems fairly normal, and I don't think we should be alarmed by it at all. With patch sets, we can also see that we're fairly consistent. Again, we still have some dips around summits, and it seems fairly normal to me.

Now, since we just had our Rocky release, I'm going to talk about the Rocky cycle and what we did in it. We had some major features and changes in the cycle. We had a feature called conductor groups, which allows operators to map ironic nodes to conductors, or groups of conductors, that they define. This is fairly free-form functionality, but it hopefully allows operators to convey their physical realities in their configurations, so that you don't have a conductor trying to deploy nodes on another continent. We also now have the ability to get and set BIOS settings with the iLO and iRMC hardware types. And we started work on deploy steps. This is largely under the hood, and we don't expose it yet, but you'll hear more about it in this next cycle, because it allows for greater flexibility and greater customization. There's nothing stopping an operator who has already modified the code from going ahead and trying to leverage this functionality now; it's just that you'll basically be beta testing it for us.

Also worth noting: thanks to Tony here, we had partition image support for PowerPC64 architectures added. Previously, those systems had to be deployed with whole-disk images. And Dmitry added a capability to reset driver interfaces, which apparently was a major issue.
Previously, if you kind of broke things, there wasn't a good way back. Thank you, Dmitry. And a number of people helped ensure that we have the ability to set the boot mode properly for Redfish systems. We also added the ability to do manual boot-from-volume configurations, and we had functionality added to recover automatically from power faults. Normally, if ironic lost connectivity to the BMC, the machine would be marked in maintenance mode, and that machine would have to be manually returned to normal operation by an operator. We now actually identify when that's the case and return it to service when we can see the BMC again. As was a project goal, we also added SIGHUP support for our logging, and version upgrade pins. The version upgrade pin is very important because, if you're in the middle of an upgrade, you want to prevent users from asking conductors that might not support some functionality yet to do things they can't do. We also had some enhancements and fixes to the ATA secure erase code. We learned that there were some issues there, unfortunately, and if the person who reported that is here, I am so sorry. And we probably made Dmitry's day: we finally removed the classic drivers. Dmitry's kind of bouncing here.

Now, Stein. And yes, I took a photo with a stein for Stein. Today, already merged in master, we have per-node automated cleaning. The direct deploy interface can now leverage a local HTTP web server, so a user no longer needs Swift to download images in a direct deploy scenario. We have enhanced checksum support, aligning with what was implemented in Glance this past cycle. We also now have the ability to enable parallel erasure of disks on systems. Previously, Ironic would go one disk at a time. As you can imagine, if you had 32 disks, this could take a very long time. So hopefully this should be much faster for operators now.
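To make the parallel erasure idea concrete, here is a minimal sketch of going from one-disk-at-a-time to a worker pool. This is not ironic-python-agent's actual cleaning code; the disk names and the `erase_disk` helper are purely illustrative stand-ins for a real secure-erase operation.

```python
from concurrent.futures import ThreadPoolExecutor

def erase_disk(disk):
    # Placeholder for a real per-device operation such as an ATA
    # secure erase or a multi-pass overwrite of one block device.
    return f"{disk}: erased"

def erase_all(disks, max_workers=8):
    # Serial erasure would loop over disks one at a time; a thread
    # pool erases up to max_workers devices concurrently instead.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(erase_disk, disks))

results = erase_all([f"/dev/sd{c}" for c in "abcd"])
```

With 32 disks and erases that each take minutes, the wall-clock win of running them concurrently is roughly the pool width, which is the whole point of the change.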
And we've also gone ahead and separated the PXE and iPXE boot interfaces. The real motivator behind this was that we found many operators want to use iPXE for x86 hardware, but you cannot use iPXE for things like PowerPC, and for ARM there is actually no distributed binary; you have to build it yourself. That's a barrier to entry for those users, so they were generally forced to use PXE by default, and we wanted to make it so that you could run both.

We have a number of things that we're also planning or hoping to have in this cycle. That includes out-of-band inspection for Redfish systems. We're looking at restricting the outbound communication of our deployment agent. We are working on adding support for booting from a URL; for those who aren't aware, this is part of the UEFI standard, where you can essentially say "I want to boot from this HTTPS URL" and the firmware will just boot the image. We're also hopefully going to make progress on deploy templates, which take the deploy steps I spoke of earlier one step further and allow operators to customize deployments. We're also hoping to make the deploy and cleaning steps more visible. Right now, you have to either read documentation or look at the code to figure out what is actually available, which is very far from ideal.

There's also a session this week on SmartNICs. SmartNICs, in this case, are network cards that have an operating system running inside of them, and the hope is that we could have configuration set up so that it's almost like a hypervisor, in that the port configuration is done in the card and we no longer have to talk to a switch in those cases. This is still early in discussion; I don't know if we'll actually land code for this in this cycle, and it requires Neutron to agree. We're also hoping to have DHCP-less virtual-media-based deployments.
This is a combination of work with boot from URL that will help enable operators to no longer need DHCP on edge systems. So conceivably you could have deployments far, far away that are booting just from a URL. One vendor has committed to adding a driver to Ironic with this functionality as part of their default, but that code has not yet been proposed. We're also working on increasing our Python 3 testing; we have patches in flight to change most of our testing over to Python 3.

Looking further forward, we should see DHCP-less virtual-media-based deployments, graphical consoles, and more scalability improvements for the conductor process. We recognize the conductor as a bottleneck for Ironic, and that's something we do want to fix. If anyone has thoughts, opinions, needs, or requirements, please do let us know. That is a very abstract and complex topic, and we do need operator assistance to understand their cases.

So, how to give feedback? Unfortunately, the feedback session was this morning; this seems to be the curse of ironic. Please feel free to load up the etherpad, feel free to join us on IRC, and, starting November 19th, email openstack-discuss@lists.openstack.org. If you wish to contribute, we have an onboarding session tomorrow on level 3, room 1, at 11:50 a.m. Feel free to join #openstack-ironic, say good morning, and ask questions. We also have links to our documentation, StoryBoard, and our release notes.

Any questions? [Audience question, inaudible.] Right. That means the communication will only flow one way. The hope is to enable support for one-way communication. Naturally, this also forces us to consider the implications, which is why we're also trying to talk about scalability changes and what that might look like, because it basically requires us to have polling, and better tracking of what needs to happen for those agents that are running and essentially still polling.
Right now, every IPA instance actually polls ironic, and that's what causes events to reach back, so it's much more interactive right now. Any other questions? I love it when there are no more questions. Thank you, everyone.
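For context on that last answer, the polling model being described can be sketched roughly like this. The class and method names here are illustrative only, not IPA's or ironic's real API: the agent heartbeats periodically, and each heartbeat gives ironic a chance to hand it the next command.

```python
import time

class FakeIronicAPI:
    """Stand-in for the ironic API side of the heartbeat exchange."""

    def __init__(self, commands):
        self.commands = list(commands)

    def heartbeat(self, node_uuid):
        # Each agent heartbeat lets ironic respond with the next
        # step to perform for that node, or "done" when finished.
        return self.commands.pop(0) if self.commands else "done"

def run_agent(api, node_uuid, interval=0.0):
    # The agent's main loop: poll, act on the returned command,
    # sleep, and repeat until there is nothing left to do.
    performed = []
    while True:
        command = api.heartbeat(node_uuid)
        if command == "done":
            return performed
        performed.append(command)
        time.sleep(interval)

steps = run_agent(FakeIronicAPI(["write_image", "configure_bootloader"]), "node-1")
```

A one-way, agent-initiated flow like this is what makes restricted outbound communication feasible, and it is also why the conductor has to track outstanding work per agent, which feeds directly into the scalability discussion above.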