All right, good day, everyone. My name is Jacob Antares, I work for Red Hat, and today I'll be talking about enhancements we've made to NVMe cleaning in the latest release of Ironic.

First of all, let's recap the problem and where the challenges used to be. NVMe drives are becoming more and more common, especially in performance-sensitive applications such as HPC and databases, anywhere we have a lot of small I/O, and in a way NVMe is becoming a de facto standard for those applications.

Historically, we didn't have a whole lot of great tools for cleaning NVMe nodes in Ironic. Typically, when we wanted so-called deep cleaning, the erase_devices step that removes all the contents of the NVMe device in a thorough way, as opposed to just doing a metadata clean, we would have to essentially shred the drives, possibly multiple times. That is horrible for the write endurance of the NVMe, but it's also very slow, and when we have expensive HPC hardware available we don't want it sitting there for hours or days running through cleaning. With NVMe it's more hours than days, but it was still less than ideal.

There were a couple of things we could do to alleviate that. Using erase_devices_metadata was a decent option; however, it wouldn't be suitable for higher security requirements. Some people would disable cleaning entirely, which I think is insane, but sometimes it had to be done. And in terms of SATA flash drives, some of them do support the ATA security feature set (Secure Erase), right?
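For context, the legacy deep clean essentially boiled down to multi-pass overwrites of the whole device. A minimal sketch of that approach, demonstrated here on a scratch file rather than a real block device (the pass count and the stand-in file are illustrative, not what Ironic literally runs):

```shell
# Legacy-style deep clean: overwrite the target with random data several times.
# On real hardware the target would be the NVMe block device itself
# (e.g. /dev/nvme0n1, an assumed name); here we use a scratch file.
target=$(mktemp)
dd if=/dev/zero of="$target" bs=1K count=64 status=none   # stand-in "device"
shred -n 3 "$target"    # three random overwrite passes, slow and endurance-hostile
```

On a multi-terabyte NVMe, each pass rewrites the entire flash, which is exactly the wear and the hours of wall-clock time described above.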
So we had something for those, but not for NVMe. Here is a matrix comparing the options we used to have. Prior to this work, you can see that other than the SATA case we didn't really have a good option: metadata erase is not really secure, and shred is not really fast or efficient.

So we got together in the community and figured we needed to do something about it. We looked at implementing what we call NVMe-native cleaning capability, which essentially means adding NVMe awareness to the Ironic software, so Ironic is able to detect that a storage device is NVMe, check its capabilities, and clean it accordingly, not using any of the existing methods but something new.

This new option essentially involves the NVMe Format feature, which we implemented using the nvme-cli toolkit. We check capabilities: if the device is capable of crypto erase, we use that; if not, we can do a user-data erase; and if neither option is available, we fall back to the old ways. But most NVMe devices on the market support those, so in my experience it's not a big problem.

We also added some configuration options so that operators can control the behaviour of this new functionality. It is enabled by default, but should you want to disable it, that can be done. And we made minor changes to ironic-python-agent-builder, just to make sure we have the packages we need in the image. If you're interested, I pasted the links to the Gerrit changes here, have a look.

With the enhancements I described included in the code base, we have the full option, which looks nice and green in our matrix. In the context of NVMe devices we have the security features, relying on the NVMe device itself, provided the firmware has a reasonable degree of compliance with the spec, and it's also super fast, which I will show you in a second. So, long story short, we now have feature parity with ATA Secure Erase, but for NVMe, and this closes a significant functionality gap that was really bugging me back in my operator days.
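The capability check and format sequence described above can be sketched with plain nvme-cli commands. This is an illustrative sketch, not Ironic's literal code: the device name is an assumption, and `DRY_RUN=echo` prints the commands instead of touching hardware (remove it to actually erase a device, which is destructive):

```shell
# Sketch of NVMe-native cleaning with nvme-cli.
# /dev/nvme0n1 is an assumed device name; DRY_RUN=echo keeps this side-effect free.
DRY_RUN=echo
dev=/dev/nvme0n1

# 1. Inspect controller capabilities; the human-readable FNA field reports
#    whether crypto erase is supported for the Format command.
$DRY_RUN nvme id-ctrl "$dev" -H

# 2. Prefer crypto erase (secure erase setting 2); a plain user-data erase
#    would be --ses=1 instead.
$DRY_RUN nvme format "$dev" --ses=2
```

Crypto erase simply discards the drive's internal encryption key, which is why it completes in seconds rather than hours. If I recall correctly, the operator toggle mentioned above lives in ironic.conf as `enable_nvme_secure_erase` in the `[deploy]` section, but check the release notes for the exact option name.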
A couple of words on how to use this feature. If you'd like to take it for a spin, you need the latest release of Ironic, and that applies both to the Ironic conductor and API as well as to the IPA image. If you use any mix of older versions it might not work, because there are changes across the code base. We have some compatibility safeguards that will prevent it from breaking or working in unexpected ways, but if you want the full functionality, use the latest Ironic with the latest IPA.

I'll take this opportunity to quickly demo how it works. As we go through this, I'll just briefly mention that it used to take about an hour prior to this change, so you'll see how much faster it runs now; the old way wouldn't have fit in a lightning talk. I'm just cycling the node through the manage and provide steps to trigger cleaning, and we've just started now. Looking at the console output here, you can see it's formatting the first NVMe now. Once it's done, it will jump to the second one, and we're about 15 seconds into the process now. You can see the provisioning state changing: within 30 seconds the node has gone through cleaning, and as I mentioned, prior to enabling this it would usually have taken an hour.

This pretty much sums up the work that we've done. I would like to take this opportunity to thank my colleagues who also contributed to enabling this functionality by providing advice and code reviews. Specifically, I would like to thank Dimitri, Julia, Rick, Arnold, who's joining us here today, and Dimash. And with this, I would like to open the session to questions.
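The demo's "cycling the node" step corresponds to the standard baremetal provision-state commands; moving a node from manageable back to available triggers automated cleaning. A sketch, where the node name is made up and `DRY_RUN=echo` prints the commands instead of calling a real Ironic deployment:

```shell
# Sketch of the demo flow; "node-0" is an assumed node name.
# DRY_RUN=echo keeps this side-effect free; remove it against a real cloud.
DRY_RUN=echo
NODE=node-0

$DRY_RUN openstack baremetal node manage "$NODE"    # move node to "manageable"
$DRY_RUN openstack baremetal node provide "$NODE"   # manageable -> available, runs automated cleaning
# Watch the provision_state field change while cleaning runs:
$DRY_RUN openstack baremetal node show "$NODE" -f value -c provision_state
```

During the `provide` transition the node passes through the cleaning states, which is where the console output shown in the demo comes from.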