Hello everyone, thanks for joining us today, and welcome to OpenInfra Live. OpenInfra Live is an interactive show sharing production case studies, open source demos, industry conversations, and the latest updates from the global open infrastructure community. This show is made possible through the support of our valued members, so let us thank them for supporting us here today. My name is Kendall Nelson and I will be your host for today's show. We're streaming live on YouTube and LinkedIn and we'll be answering your questions throughout the show, so feel free to drop questions into the comments section and we'll answer as many as we can by the end of this episode. So, it's here: the 2023.2 release of OpenStack, affectionately named Bobcat. Today we have with us several community members representing various OpenStack services to talk about highlights from the cycle. First up, we have Sylvain here to talk about Nova. Hey Kendall, thanks for introducing me. Thanks, folks, I'm happy to see you again. So this time, yeah, it's the new Bobcat release. It was a pretty smooth release — we didn't hit any bumps on the road except one CVE, which Rajat will explain later. Can I please get the next slide? Okay, cool. This cycle was an interesting one, because we were able to approve more blueprints than in the previous cycle. As you see, we accepted 17 of those. Of those 17, we completed the implementation for eight of them — actually nine, but unfortunately we found a last-minute problem on one of them, so we needed to revert it before the first release candidate. That wasn't impacting any operators, and no regressions were found. But yeah, the number is pretty good, given also that we did not add a lot of contributors this cycle. So many thanks to all the contributors who were around, because we were very productive this cycle. 
As you see, the numbers are also pretty good in terms of bug fixes. We had that specific CVE, which Rajat will explain later with Cinder. So again, definitely kudos to the team; it was a good cycle. And as a last reminder, maybe some of you already know about the new cadence that we have for upgrades: you will be able to skip one release, and I'm happy to say that Bobcat will be the first skippable release. Meaning that if you're already on Antelope, the previous release, you should be able to upgrade directly to Caracal without upgrading to Bobcat first. If you really want to use Bobcat, that's definitely good and I would appreciate that, but it's possible to skip it. Next slide, please. Okay, so what do we have this cycle for Bobcat? First, it's important to mention that we are currently changing our quota system. The limits that you were previously providing directly in Nova should now be provided in Keystone. That will help you define all the limits for all the projects in one single place, but it will also help you define quotas for resources that were not available before. In order to ease that move, this cycle we're providing a new nova-manage command that will automatically import the limits into Keystone. As part of that move, we also decided to deprecate the previous quota driver. That doesn't mean it won't work, but it's a good incentive for you, if you're running Bobcat, to test the new unified limits driver — that's a configuration option you can modify — and also to use the migration tool. Hopefully next cycle, we should definitely be changing over to the unified limits system. A second feature that we cared about this cycle was how we weigh hosts. Some operators at the previous in-person PTG in Vancouver told us that they were having trouble with the current sorting of the host candidates. 
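Circling back to the quota change for a moment, here is a rough, hedged sketch of that move. The nova-manage subcommand below ships with Bobcat, but the exact flags may differ in your version, so check `nova-manage limits --help` and the unified limits documentation first:

```shell
# Import existing Nova quota limits into Keystone as unified (registered)
# limits -- new nova-manage subcommand in the Bobcat release.
nova-manage limits migrate_to_unified_limits

# Then, to try the new driver, point Nova's quota system at unified limits
# in nova.conf:
#   [quota]
#   driver = nova.quota.UnifiedLimitsDriver
```

This is an operational sketch against a running deployment, not something you can dry-run in isolation — test it in staging before switching drivers in production.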
We heard them, and we provided two new scheduler weighers that can be used for different use cases. For example, if you want to migrate or create instances on the hosts with the fewest instances, you can use the new instances weigher — but you can also try to pack your instances onto specific hosts by changing the weigher multiplier. Another weigher was created due to some discussions we had with operators: they wanted to create instances on, I would say, the newest compute nodes — where, for example, the OS was the latest. So we heard them, and we created a weigher that looks at the hypervisor version. So, for example, you can default to using the computes that have the latest libvirt version, but you can also — I don't know why you would, but you can — flip the multiplier and say you would like to use the older ones; that's possible. Again, during the in-person PTG in Vancouver, an operator told us that he wanted to cold migrate as an end user. We helped him: we created new authorization policies for cold migrate. By default, you now have two policies for cold migrate — one for cold migrating without providing a host, and the other for cold migrating while providing a host. Both of them default to admin users, but you can modify one of them. For example — and that's the use case the operator asked for — you can modify the policy so that an end user can cold migrate without specifying a host. That's now possible. Some other features: for example, the libvirt driver can now modify the cache size for the translation block (the TB cache), which is nice for small-memory instances. That comes with a newer libvirt version, and it's now supported by Nova. We also deprecated a configuration option for the Ironic driver. I won't explain more of that here, but basically, next cycle hopefully, you will see a new behavior for, I would say, the HA strategy. 
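To make the weigher idea concrete, here is a tiny illustrative sketch — this is not Nova's actual code, just the general mechanism: each weigher produces a value per host, a configurable multiplier scales it (a negative multiplier inverts the preference), and hosts are sorted by total weight, highest first.

```python
# Illustrative sketch of Nova-style scheduler weighing (not Nova's code).
def weigh_hosts(hosts, weighers):
    """hosts: list of dicts; weighers: list of (attribute, multiplier)."""
    def total_weight(host):
        # Each weigher contributes multiplier * attribute value.
        return sum(mult * host[key] for key, mult in weighers)
    return sorted(hosts, key=total_weight, reverse=True)

hosts = [
    {"name": "cn1", "num_instances": 10, "hypervisor_version": 9000000},
    {"name": "cn2", "num_instances": 2,  "hypervisor_version": 8000000},
]

# Prefer hosts with the newest hypervisor version (e.g. latest libvirt):
newest = weigh_hosts(hosts, [("hypervisor_version", 1.0)])
# Spread instances: a negative multiplier prefers the *least* loaded host.
spread = weigh_hosts(hosts, [("num_instances", -1.0)])
# Pack instances: a positive multiplier prefers the *most* loaded host.
pack = weigh_hosts(hosts, [("num_instances", 1.0)])
```

Changing the sign of a single multiplier is exactly how the real weighers switch between spreading and packing behavior.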
I won't say more about that for now, because it's a feature we're currently looking at — but look at that configuration option and you will see why we deprecated it; basically, we're working on a new feature for next cycle with the Ironic team. We also made some stability improvements around volume attachments during instance reboot: on reboot, we now automatically verify the volume attachments to make sure there are no residues. And we also modified something that's important for us as an implementation detail, but where you will see the use case: for example, when you want to rename a compute, now it won't be accepted. If you did that before, maybe you know about this — it was creating a lot of problems. Basically, what we did is tighten the linkage between computes and services, so now you can't accidentally do that. Next slide, please. Anyway, as you see, we discussed a lot with operators during the previous in-person PTG. This cycle, we have a virtual PTG. That's actually nicer for operators who were not able to go to Vancouver. This time it's virtual, which means you can discuss with us without traveling — it's simple for you, and it's free. So if you have questions, concerns, or, I would say, new features you would like Nova to have, look at the dates we have for the virtual PTG, and look at the link I provided. That's definitely the time when you can discuss with the contributors and explain your use cases. Voila, that's basically it for Nova. So thanks, and Kendall, back to you. Thank you. Yeah, I definitely want to invite everyone to get involved at the PTG. Like Sylvain said, it's a free and virtual event, so please join us there. We would love your feedback and your help developing the Caracal release. Wow, 17 blueprints approved in Nova and about half of them implemented — that's amazing. 
Thank you, Nova contributors, for all of your hard work. Next up, we have Neutron updates from Rodolfo. Thank you very much, Kendall. Hello everyone. This last cycle was quite intense for Neutron in terms of merges, and I'm proud to say that we have a healthy community contributing new and very interesting features. I also want to highlight the participation of many users during the Vancouver PTG. Maybe the problem we had is that we didn't have enough time to finish all the ongoing developments, but we do have some new features introduced in Bobcat. For example — next slide, please. So, for example, the new API which allows defining a set of security group rules that will be applied automatically to new default security groups. That will allow the administrator not to be constrained to the four default security group rules that are created today when the default security group is created — a framework that can improve security if the administrator needs it. We also have a new port attribute that modifies the local Open vSwitch userspace datapath transmit packet steering: a feature that allows the user to tune the performance of each port when using, for example, OVS with DPDK, by choosing between different Tx steering strategies. We also have a new implementation of the API policies for role-based access control, with system scope and default roles. That belongs to a common effort that started more than two releases ago, implementing the secure RBAC policies, and it's aligned with the OpenStack community goal to implement these new policies in every project. And we also have a rate limiter for the metadata service, to avoid DoS and to stop misbehaving instances. I would also like to mention that last line there — to be honest, next time I will be more explicit when creating those slides, but let me explain it a bit better than the slide does. 
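As an illustrative sketch of the new default security group rules API mentioned above — the OSC command shape below is an assumption on my part, so verify it against your python-openstackclient version and the Neutron API reference:

```shell
# Define a rule template that every *newly created* default security group
# will receive -- e.g. allow inbound SSH instead of relying only on the
# four stock rules.
openstack default security group rule create \
    --ingress --ethertype IPv4 --protocol tcp --dst-port 22
```

Note this shapes future default security groups only; it does not retroactively modify groups that already exist.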
We also have several ongoing developments that were started not only during this release but also in previous ones. For example, improving the OVN L3 scheduler, which will be optimized to balance the load among gateway nodes. The new schedulers will handle availability zones better, and there will also be new classes able to work with bare metal ports using the same scheduler used in core OVN. We will also have router flavors for OVN, allowing the use of external hardware devices along with software routers in ML2/OVN. We will also have IPv6 metadata support for OVN, closing another ML2/OVS to ML2/OVN parity gap. And we will implement OVN gateway multi-homing, which is a feature currently supported by core OVN. As I said, this is ongoing work in Neutron, and it's taking more time than expected. As Sylvain mentioned, registration for the next virtual PTG is open, and the etherpads are open to propose any new feature. So as a summary, I will say thank you, and stay tuned for the next cycle. Thank you very much. Yes, lots of excellent work done by Neutron developers as well. Thank you for all your hard work and being involved in the Bobcat release. Neutron is also planning to meet at the PTG, so there are more opportunities to get involved and help out with that future development. Joining us next to give updates on Ironic is Jay Faulkner, who is actually the newly elected TC Chair as well — congratulations, Jay. Well, thanks for that. Yeah, I appreciate folks having the trust in me for the next six months. But let's make sure we have one of these for Caracal before we celebrate too hard. So let's talk a little bit about what Ironic did this cycle. I always like to start out with some statistics about the development. During our cycle, we had about 324 commits from 32 different contributors — including a couple of first-time Ironic contributors, welcome to the team — across over 15 different companies, and 3,200 changed lines. 
And in case you don't know this: bare metal is pretty complex. We actually have 22 different repositories that we juggle to keep all this going. I'm very excited about this, because our team is growing not just in code contributors, but in operators who are part of the community. And if you're one of those, you're welcome to come join us in IRC, or through the Matrix bridge, sometime to say hello and tell us about your installation. But let's talk about what we delivered for you. I've broken this up into three big themes. The first theme: with Ironic, we don't only provision bare metal hardware, we also manage its life cycle, and a lot of the effort in Ironic goes into making sure you're able to maintain a bare metal server throughout its lifetime in your cloud. What this means is we're always making that better. This time, we took our step framework — which is the way in Ironic you perform maintenance tasks such as erasing a disk, upgrading firmware, or really anything you need to do on a server — and extended it. It used to be that you could only run those steps during cleaning, which is when a machine isn't provisioned, or at deploy time. Now we've added the ability to perform service steps. These can be triggered via the API with a set of steps to run, and those steps will run on an active, provisioned server. It may reboot your instance or something of that nature, but it's very exciting, because you can combine this with the automatic lessee feature — which currently is only supported for standalone Ironic; we're coordinating with our Nova friends to get this into Caracal for Nova users. If you're using Ironic standalone and you enable automatic lessee, you can configure your policies in such a way that the users who provision bare metal hardware in Ironic can actually be given the keys to perform service steps to maintain their own devices. 
So imagine a world where your users are able to coordinate firmware updates with their own software, making API calls to Ironic, removing nodes from rotation before beginning the update, things like that — completely removing a class of work from the traditional cloud administration team and giving the users the keys to it, which is very exciting. That's a pattern I think you'll see emerging more. In addition — and these next two may seem a little disconnected, but they're still part of server maintenance — the last two bullets there are about adding steps to enable coordination. They're both part of an effort to support DPUs, or SmartNICs, or in general these advanced pieces of hardware which might have their own power supply, might have their own BMC, and might require coordination between the operating system of the machine they're in and the firmware of the device itself. In order to support that, we've added some enhancements to our steps. We've added steps that permit flow control: you can say hold during cleaning, and that node will hold in a clean hold, service hold, or deploy hold state while an external coordinator that you've configured goes and reads and performs actions on that node, until it sends the Ironic API an unhold command. This is useful for coordinating things between different servers that need to cooperate, or even if you have a third-party system that you want to participate in those processes — it can now look for that state and go for it. And that's exciting. For power control as well, we've exposed those as steps: power off, power on, and reboot — just as we would call them in Ironic's code, or as what happens if you call the HTTP APIs — are now available as built-in steps. In addition, we've added the concept of a parent node and a child node. 
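The hold/unhold flow described above might be sketched like this — the step names follow the Bobcat servicing docs, but the exact CLI arguments here are an assumption, so check `baremetal node --help` on your client version:

```shell
# Ask an active node to run service steps, pausing at a "hold" step so an
# external coordinator (e.g. your firmware tooling) can act on the node.
baremetal node service mynode \
    --service-steps '[{"interface": "deploy", "step": "hold", "args": {}}]'

# When the external work is done, release the node:
baremetal node unhold mynode
```

The same pattern applies during cleaning and deployment: the node parks in the corresponding hold state until something calls unhold.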
So again, you could have a server that's the parent node with a DPU as the child node, and using all of these features combined, create a coordination flow that upgrades firmware through cooperation between the PCI device and the OS itself. It sounds complex — that's because the hardware is complex. We're trying to do all we can to make it usable. Along those lines, we've actually taken one of our concepts that used to be more complex and more custom, taken it out of that realm, and promoted it to a top-level Ironic API. This is always an exciting moment for us — can I get the next slide, please? This is always an exciting moment, because Ironic often does things like firmware update support before the standards are in place to make them similar across hardware. Now that we've proven that out and shown the value it can deliver, we've promoted it to a top-level API. What this means is we have a driver interface for Ironic that can be implemented to allow access to this firmware support. We have an initial implementation that supports Redfish. I will say, hardware is wildly different, and this is a new feature: it's been tested on actual Redfish hardware, but it may not have been tested on your actual Redfish hardware. So please make sure you test it and do your own due diligence — we can't always count on our hardware to behave the way we expect. This is going to allow querying firmware information through the API. So if, for instance, you have a security requirement that you be able to report the firmware version running on all of your nodes, you'll be able to do that now. If you want to check that a node is running a specific version before deploying a specific workload, you can do that now. In addition, just like with all of our other interface-based features, that's where we've centralized the commands to update and manage that firmware. So this is very exciting. 
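For example, querying the new top-level firmware API might look like this — a sketch; as Jay says, hardware varies, and the exact output columns are an assumption:

```shell
# List the firmware components Ironic knows about for a node -- e.g. to
# audit BMC/BIOS versions across a fleet before scheduling a
# security-sensitive workload.
baremetal node firmware list mynode
```

Updates then go through the same firmware interface rather than through custom per-vendor cleaning steps.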
Thank you so much to Iury for writing that support and working through it — and Julia actually did a lot of the work for the cleaning enhancements before; we always want to make sure folks get that recognition. But realistically, those are exciting for certain use cases — everyone always wants to see more stability and more fixes, and that is definitely something we focus on every cycle. I picked out a few top-level items from our bug fixes and minor enhancements that hopefully will make your life easier, depending on how you use Ironic. We had several issues reported to us by the Metal3 project under the CNCF — we're grateful for our partnership with them — related to SQLite database utilization. A large amount of effort has gone into making that work more reliably and more quickly. And even if you're not using SQLite, those cleanups in our database code — to prevent locking and to handle failures and contention better — are going to make your more highly scaled Ironic deployments work better, thanks to that error-handling work. It's the sort of thing where I'm not going to be able to point a finger at one specific change you'll see; however, it's our hope that we've removed a whole class of potential issues and confusions in the database layer. The other item is something we've been working on to make it easier to run a single Ironic cluster that manages multiple CPU architectures. Traditionally, we had a single top-level setting for defaults like the RAM disks and kernels used for cleaning or deployment. That obviously doesn't work very well in a situation where you might have multiple CPU architectures, and while you could work around it by overriding it on a per-node basis, that's obviously not sustainable at high scale. 
So we've added the ability to set those defaults per CPU architecture, which is a big boon: it should enable people to easily run mixed x86 and ARM clouds — or heck, maybe some other CPU architecture; come talk to me, because that sounds like it would be fun. And thanks to Jakub for that feature. The other item: operators can now force the use of non-MD5 hashes for image verification. It's not our belief that using MD5 hashes for image verification at that last step is necessarily a security issue, but we know we have some security-sensitive operators and environments that require the ability to turn MD5 off universally. So we've added that, and hopefully it makes your life a little bit easier if you have that requirement. Finally, I'm going to plug, just like everyone else has, the Ironic virtual PTG. We had a lot of participation and a lot of new faces at the in-person PTG in Vancouver, and it was nice to have those conversations. We'd love to see you back at the virtual PTG. We've not exactly aligned all of our topics to times yet, but all Ironic sessions will be within these windows in the Folsom room. And in addition to the Ironic virtual PTG, which we have every year, we are also renewing our Bare Metal SIG operator hour. If you've never been to our Bare Metal SIG meetings and you're not familiar: this is something that's run by Ironic contributors, but really, we've all been working on provisioning hardware for years and years, and we want to talk about the troubles, the success stories, the sad stories — just meet each other and commiserate for a little while. So I don't care if you're an Ironic user, if you use Metal3, or if you're using something completely off the board: come say hello to us, talk about bare metal provisioning, and we'll see what we can do. Thanks so much. Got to love a group therapy session. The statistics you mentioned at the beginning were my favorite part of this update in particular. 
I really love the data, and knowing that there are new contributors to Ironic this cycle is even more exciting. Congratulations to all new and old contributors to Ironic. There's always more work to be done, and we always welcome more contributors. Open community is an incredibly important tenet of all of our OpenInfra communities and projects. Next up, we have Bobcat updates from Manila. Take it away, Carlos. Thanks, Kendall. Hello, everyone. I'm Carlos, I work on the Manila team, and I'm the OpenStack Manila PTL. Today I'm going to share some of the highlights of our Bobcat release — or 2023.2, as you wish — a couple of items that we managed to complete after an effort spanning several cycles. The first thing I would like to mention is resource locks for shares and access rules. This came from operator requests and feedback, and it was really important for us to cover two main use cases with this feature. One of them is operators who wanted to prevent accidental deletion of shares that are mounted and consumed across multiple user workloads. The other one, which is really important for us as well, is usage within OpenStack Nova: we are actively pursuing a share attach/detach API for the Caracal release, and that feature will enable users to request their shares to be attached to VMs using virtio-fs. For this to happen, Nova needs to lock the corresponding shares against accidental or intentional deletion until they are properly unmounted by the compute instances — that can prevent outages and other unwanted things. So this resource lock for shares was one of the things that was implemented, but we also use the same resource lock mechanism for other things we needed on the Manila side for virtio-fs to be complete. The other part is the access rule visibility lock — let me walk you through what that means. 
In the past, access rules in Manila disclosed the access-to and access-key fields, and that could end up exposing a VM IP or an access key. We can call those sensitive fields because, I mean, we don't want to expose information like that to people who shouldn't be seeing it. So we needed a way to make sure that for access rules created by Nova for VMs, the IP addresses and access keys of the Nova instances would be redacted. Now you can place a visibility lock on an access rule, and some of its fields will be redacted, so you'll be able to hide a couple of things there. These were the parts we discovered we needed to do, and they were completed during the Bobcat release. And as I said, the resource locks API is generic, as you can see, so we could reuse it for locking both shares and access rules — access rules can also have their deletion locked. We wanted a deletion lock there too, because otherwise someone could delete the access rule for the Nova instance, and then people would lose access and no longer be able to write data to their shares. So deletion locks for access rules are also in place, and this API can be extended to any of the Manila resources in the future. We designed it with that in mind, and all of the authentication and authorization was designed with secure RBAC in mind. So please check out the blog post that features these changes — it has the reasoning, the specifications, and a special quote from folks at Cleura about their use cases and how this feature will make their life easier. We've heard several requests about this in the past, and the operator feedback was really incredible; it helped us shape this and get it done. 
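A hedged sketch of the resource lock and visibility lock workflow — the command shapes here are assumed from the Bobcat manila OSC plugin, so verify them with `openstack share lock --help` on your client:

```shell
# Lock a share against deletion while workloads still mount it.
openstack share lock create myshare share --resource-action delete

# Create an access rule whose sensitive fields (access-to / access-key)
# are redacted for other users, and which cannot be deleted while locked.
openstack share access create myshare ip 10.0.0.10 \
    --lock-visibility --lock-deletion
```

Because the lock API is generic over resource types, the same `share lock` primitives are what Nova is expected to use for the virtio-fs attach/detach work.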
And a special thank you to the Nova team for working with us on the design, and to Goutham and René for leading the implementation — Goutham on the Manila team and René on the Nova team. We are targeting this for the Caracal release. Another major feature we added to Manila is share backups, which work similarly to Cinder volume backups. Basically, you'll be able to back up and restore shares generically with the help of the Manila data manager service. It works similarly to the way we do share migration in the generic approach — it indeed reuses some parts of the generic migration mechanism. This is currently only implemented in a generic way, but in the future, third-party drivers and vendors will be able to implement it as well and leverage the features of their own backends. One other big thing that took several cycles to complete: after a lot of work across a couple of releases, we have managed to add the last piece we were planning for in the manila client, which is the deprecation warning. First, a huge shout-out to all the interns and community members who worked with us to reach feature parity and do proper testing on this. We reached parity in previous cycles, completed the last piece we needed this cycle, and then came the decision to start the deprecation process for the manila client. So now, when you issue a manila command — and this is a very nice thing that came from operator feedback as well — you will get a deprecation warning, plus a suggestion of the command you should be using in OSC, the OpenStack Client. That's going to be really useful. So thanks, everyone, for the feedback on that and for helping with it. 
And a couple more updates, I think on the next slide. The next one is limiting the maximum share size for share extension through the share type extra specs. There are some important enhancements to the CephFS NFS driver when doing migrations from a previously deployed Ganesha cluster to a cephadm-deployed Ganesha cluster: now the export paths will contain a preferred export location that will always default to the cephadm-deployed Ganesha, and the access rules will be replayed when you do the restart. And environments that use the NetApp backend will be able to use the NetApp Active IQ scheduler weigher. This is something the NetApp team worked on; it has some built-in AI to help you find the best pools to place your shares. That's NetApp-specific, so please take a look in case you're using the NetApp backend — it can be quite useful to you. We have two new backend drivers, Dell PowerStore and Dell PowerFlex, plus several bug fixes and enhancements. So those were the items we managed to accomplish — you can check out everything we worked on in the Bobcat release in the release notes; there's a lot more there, lots of bug fixes and things we were proud to get completed this cycle. For the next cycle, we have plans to support user-modifiable metadata for share export locations, and to continue the cross-team work with Nova to allow users to attach and detach their Manila shares on VMs via virtio-fs, plus some enhancements to share backups as well. We have that mostly complete already — the APIs, and the resource locks support and everything Nova needs in the SDK — we just need to merge it in the OpenStack SDK, but it's almost done. As I said, I've tried to highlight the importance that operator feedback had on all of these items. 
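Circling back to share backups, the generic flow might be sketched as follows — the flag names are assumptions, so check the 2023.2 manila documentation for the exact syntax:

```shell
# Back up a share via the Manila data manager, then restore it later.
openstack share backup create myshare --name nightly
openstack share backup restore <backup-id>
```

Since only the generic (data manager) implementation exists in Bobcat, the backup data path goes through the Manila data service rather than through backend-native snapshot or replication features.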
So please, if you have feedback to give us, or if you'd like to see something enhanced, join us at the PTG and we will be happy to hear it. That's all for the Manila updates. Thanks. Thank you so much. I just want to take a second to call out the resource lock work in particular — it really is an excellent example of the feedback loop that we aspire to for all of our projects. So if you're an operator of OpenStack, we really need your feedback. The PTG is coming up; please register and participate. It doesn't matter if you're not running Bobcat, or if you're a couple of releases behind — that's totally okay. Your feedback and involvement are still crucial to the continued growth and success of our community. So please get involved. We really want you there, we want to hear from you, and we want you to be more integrated into our community. Next up, we have one more project: Bobcat highlights for Cinder from Rajat. Yeah, hi. Thanks, Kendall. I'm Rajat, and I'm serving as the Cinder PTL here. So let's get into some of the new things we have in Bobcat. We added three new storage drivers. The first one is the Yadro Tatlin Unified FC driver — in the last cycle they added the iSCSI driver, and it's good to see they are actively contributing; now they have their Fibre Channel driver as well. The next one is a brand new one, the TOYOU NetStor TYDS iSCSI driver, and it's good to see contributions from different vendors adding to our list of supported drivers. And finally, we have the Pure Storage FlashArray NVMe-TCP driver. Pure already supported iSCSI and Fibre Channel, and now they also support NVMe-TCP — so Pure now covers the most-used protocols, which is really good to see. That's all the drivers we added. The next item is not a new feature but actually a bug fix: we could not restore RBD backups to non-RBD volumes. 
In Cinder, we have the volume backend and the backup backend. When we create a volume in, let's say, LVM or any other storage array, and we try to restore an RBD backup to it, it used to fail. Now we have fixed that functionality, so it works and you can use it. Another item is the OSC/SDK work. In the last cycle, we reached parity with the OpenStack Client, and now we are pushing further to add support for the missing APIs in the SDK. Once we have all the Cinder APIs supported in the SDK, we will continue migrating the OpenStack Client to point at the SDK, and finally we could deprecate the cinder client at some point — but yeah, it will take a couple more cycles for us to get there. The work is ongoing, and we are going to discuss it at the upcoming PTG, so if anyone is interested in working on it, please join us. On to the next slide. Yeah, so this is an important one — I filled up a whole slide with it. We have a CVE, which is important and has a deployment impact, so I would just like to quickly explain the situation. When a volume is attached to a Nova instance and we unmap that volume — we have the attachment delete API in Cinder that unmaps the volume from the Nova host — Nova isn't aware whether the volume disappeared because of a network issue or some other issue, so it keeps checking whether the volume will become available again later. In the meantime, if we attach another volume and it uses the same values — the same host bus, the same PCI configuration — then the Nova instance thinks it's the same volume and gets unauthorized access to that volume, which is really bad. We don't want any random instance to get access to a volume that is not intended for it. So the solution we preferred was to block the attachment delete request unless it comes from a legitimate caller. 
For Nova, that means if the request is coming from Nova and Nova presents a service token, then we will accept it. But now end users cannot just go and delete attachments, because that was the security loophole we fixed during the cycle. And there is a deployment change associated with it. We have full documentation around it explaining how to configure service tokens for Nova, but you will need to modify the nova.conf file to add the extra credentials. The document covers everything, but there is a deployment impact: you need to change some things for your attachment delete to work. So if you upgrade to Bobcat and see volume detachment failing, you can refer back to this talk and look at the changes you can make to fix that situation. We have merged and released this CVE fix on all the supported stable branches, from 2023.2 Bobcat back to Yoga. So you can rest assured that if you use the latest version of any of these branches, you will not see this issue and it is not exploitable for you. And finally, we have information about the PTG. As everyone already said, we have a PTG coming up. Cinder has four days blocked, from the 24th to the 27th of October, and we keep four hours every day, Tuesday to Friday. Usually the Cinder topics fill up all that time, but if there are fewer topics this time, we can just shorten the duration. For now I have booked the Cactus room in the PTG link. I will update the meeting URL, but if you go to it during the PTG and click on it, you will be redirected to the Cinder room. And we have a planning etherpad, so I would like to request that you please add topics you would like to discuss with the Cinder team. Yeah, that's pretty much all the updates from the Cinder side. Awesome, thank you so much Rajat for all of that.
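As a rough sketch of the deployment change Rajat described (option names come from the Nova and keystonemiddleware documentation, but the credentials, URL, and exact values here are placeholders — check the official service-token docs for your release before applying):

```ini
# nova.conf — make Nova send a service token along with user requests.
# All credential values below are illustrative, not defaults.
[service_user]
send_service_user_token = true
auth_type = password
auth_url = https://keystone.example.com/identity
project_name = service
project_domain_name = Default
username = nova
user_domain_name = Default
password = secret

# cinder.conf — require a valid service token (with a service role)
# before honoring privileged operations like attachment delete.
[keystone_authtoken]
service_token_roles = service
service_token_roles_required = true
```

With this in place, an attachment delete coming from Nova carries a service token and is accepted, while a bare end-user call is rejected.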
So that wraps up updates from five OpenStack services: Nova, Neutron, Ironic, Manila and Cinder. But there are just a few more OpenStack services than that, in case you are not already aware. So if any other OpenStack projects want to share their Bobcat project updates like these lovely folks did, remember that you can always record your own and send it to us, and we'll make sure to share it with the entire community. So while we wait for audience questions to roll in, I want to invite everyone back on camera. Do any of you have anything in particular you want to mention about your amazing contributors or operator involvement, those being the two focuses of this release in particular? Well, I was going to follow up on something that Rajat said about Cinder and that CVE in particular. I'm just noting, if you've not upgraded for that yet and you are using Ironic, Nova or other OpenStack services, please make sure you read that security notice in detail. There is an Ironic patch required if you're using the Ironic-Cinder integration that goes along with it. So that's a particularly tricky one. Make sure you look at it and get all your services updated, or else you're just going to see weird failures. Important thing to note, definitely. So, while we wait for any questions the audience has about the updates you all have shared today. Yeah, I would just reiterate what I said previously. If you really look at the numbers, I think a third of the features that we implemented this cycle were actually driven by use cases from operators. Nothing is magic. We all have priorities and we basically try to do our own stuff, but if we don't hear any feedback from the operators, we could be wrong. You mentioned a feedback loop: for us, this is crucial. Particularly this time, we'll have a virtual PTG, so I'll reiterate the fact that, yes, indeed, it is free.
Yes, indeed, there may be some concerns about joining a room full of contributors, but here all of the PTLs are trying to make room for operators to join. And this is definitely the right time for operators to join in and bring the use cases they have, so that we can find a solution for them. That happened last cycle, and I would really like to have it happen again for Caracal, because that story works. Yeah, Jay, do you want to talk a little bit about the TC trying to organize operator hours? Well, I was going to play off of something that Sylvain said first, which is: don't ever feel imposter syndrome or feel nervous about engaging with the community if it's not something you've done before. There have been multiple times, not even always in an upstream context, sometimes in a downstream conversation, when it'll be the most junior person in the room who has the idea that breaks through the problem, or things like that. So please come talk to us. It's not going to be imposing, especially if you come to the VPTG or one of the operator hours and talk to us. No idea is too silly, no problem is too weird. Let us be the ones to say no; don't say no for us. We want to have those conversations. And as Kendall indicated, the TC is encouraging projects to hold operator hours explicitly for this type of interaction. But I'm going to be honest, in the past we've not had great participation in that. And in order to justify that time from the contributors, we really do need folks who are using OpenStack to reach across, to come join us and have those conversations. Or even, like I said, if the operator hours, coming to a video session, or coming to IRC and talking to people is not it, and you have some good idea for how we can gather feedback, bring that to us, right? Like the meta feedback. We'll take whatever you've got. We want to be where the operators are, because y'all know where the real problems are.
And quite frankly, I would rather hear about your problem from you and solve it, rather than wait until the people I work for experience it and have to solve it themselves, right? We all have different environments, and you will have a unique insight if you come and chat with us. Yeah, addressing problems before they become problems is great. But if you're having problems, we would love to know about them so we can help you and work with you on fixing them. So actually, if you are unsure about getting involved in the PTG, you can look at the schedule that's developing as various project teams sign up for time. If you go to ptg.opendev.org, you can see when teams are planning to meet, and we're working on getting the operator hours set up on that schedule as well. So keep checking back for updates. And yeah, perfect. There you go, a link makes for easier discoverability. Yeah, just one last note. Basically, when you look at this website, you'll find that the teams meet at specific times. Don't be put off when you see, for example, four hours per day for a team. That doesn't mean you have to attend four hours per day. Obviously, you can join anytime you want. We'll have some kind of agenda depending on the project. Some projects have an agenda ready before they meet, while other teams run through the topics as they go. But keep in mind that if you're only able to join at some specific time, no worries, people will make room for discussing your own use case. That's not a problem. Rather than thinking of it as projects meeting together for four hours per day, think of it as you having the opportunity to meet them anytime you want during those four hours. That's the way you should be seeing it. Yeah, it's very flexible.
And a lot of teams are willing to shift topics if you say that you really want to be involved in a particular one but can't make it at that time. Everybody's really friendly, I promise. And honestly, we do have a number of teams outside of OpenStack that are meeting at the PTG as well. StarlingX will be participating, and Kata Containers, along with a bit of the Confidential Containers project from the CNCF, is actually going to be meeting as well. So if you're a project that is interested in meeting at the PTG but isn't already signed up, feel free to reach out and I can get you set up and get you involved. There are going to be a lot of excellent conversations all throughout the week, and yeah, we'd love to see you there. We'd love your feedback, love your involvement. It doesn't look like we've got any questions so far. So are there any other Bobcat things that people want to mention before we wrap? I know Sylvain did a good job of mentioning this in the Nova presentation, but we should be a little loud about this: we did hear operator feedback about the upgrade carousel that you can get stuck on. This is our first release that you can skip. So if you've somehow listened to all of this and you go, I don't need it, you can skip it and upgrade directly to Caracal. We've worked hard to give you that option and have been testing it, so please do not forget that. And it's pretty fun that we've gone from having projects where you could upgrade their software six times a year if you wanted to, to now being able to upgrade just once a year. So we're trying to maintain that flexibility to keep you off of that upgrade carousel. I've done OpenStack upgrades in large clusters before. You can make the software as good as you want, but making that many computers do what you want is an incredibly difficult task, and having to do it once a year instead of twice is going to be a benefit for a lot of folks.
Yeah, and that itself is operator feedback that we've adopted and figured out how to make dreams come true, you know? So I want to thank all of our awesome speakers today. Thank you, Sylvain, Vidalpo, Jay, Carlos, Rajat. And thank you to our excellent audience for being here with us today. We look forward to the next episode. Don't forget, if you have an idea for a future episode, we really want to hear from you. Submit your ideas to ideas.openinfra.live and maybe we'll see you on a future show. Thanks again to today's guests, and we'll see you on the next episode of Open Infra Live.