My name is Allison Price from the OpenInfra Foundation, and I'm here to host today's episode of OpenInfra Live, an online series where we talk about the latest advancements in open infrastructure from development, commercialization, and adoption perspectives. Today I'm really excited because we're zeroing in on the development of OpenStack, which shipped its 29th release yesterday. So first I want to congratulate the more than 500 contributors who made OpenStack Caracal a reality. It's a huge milestone for the OpenStack community, and I could not be more proud to be part of it. Today we have several of the community leaders who, with their teams, led the development of some of the features that landed in OpenStack Caracal. They'll talk about what those features are, answer any questions you may have, and share some of the plans for the 30th release, Dalmatian, coming later this year. We have a great lineup, and they'll be popping on later this episode, but for now I just want to say welcome. We'll be accepting questions throughout the episode, so if you have a question about a particular feature, or you just want to know more about a project or component of OpenStack, drop it into the chat wherever you're streaming this episode and we'll get to as many questions as we can. Before we get started, I do want to thank the OpenInfra Foundation members who make things like OpenInfra Live, as well as the development of OpenStack, a reality. Hundreds of organizations support the foundation, and many also employ developers who contribute to releases like Caracal. So if you have team members who want to get involved in building the next generation of OpenStack software, please reach out to some of the folks on today's episode and get your teams involved. It's definitely a community effort, and we appreciate everyone's involvement.
So yes, today we're introducing Caracal. It's a really exciting time, and I couldn't be more proud of the community. Before we get started, I wanted to talk about why I think this release is coming at a really opportune time. We're seeing a lot of OpenStack momentum right now, which for me is really exciting: I've been working with the OpenStack community for over ten years, and it's awesome to see the development community deliver features that users are eagerly waiting for to run new workloads on OpenStack. One example we'll probably talk about a little today, and I'm happy to answer questions on, is AI. Everyone's talking about AI, and what we want to talk about is the infrastructure that needs to power it. There are some really cool features delivered in Caracal, across several projects, that ease the support of AI workloads. But one thing we can't develop software without is feedback from you. If you're running OpenStack, if you're an operator or you work with operators, please take, or encourage them to take, the OpenStack User Survey. It's a really valuable tool for the development teams to know what your priorities are, so that they understand the requirements in the software itself to deliver the workflows you need. So if you have ideas or questions, things you really want to see, or you just want to share more about what you're doing with OpenStack, please take the OpenStack User Survey. It's a great community tool for us to learn how we can better support you. But without further ado, I do want to pass this on to the contributors who made this release possible. To start us off, I want to introduce Ghanshyam, who will be bringing us updates from the OpenStack Technical Committee. Welcome, Ghanshyam. Oh, I think you're muted.

Oh, sorry. Thank you, Allison.
Thanks for the introduction. Hello, everyone. I'm Ghanshyam Mann, a member of the Technical Committee in OpenStack, and I'll be giving you some updates from the Technical Committee side on what we did in the Caracal release. As Allison mentioned, OpenStack Caracal is the 29th on-time release, and thanks go to our release team for making all 29 releases on time; if I remember correctly, a release has never been delayed by even a single day. So thanks to the release team and all the supporting teams, and thanks also to the 500-plus contributors from 70 different organizations who contributed to this release. Without them it wouldn't have been possible. And, as Allison also touched on, we encourage all of you to start communicating with the community and get involved. Even if you are not a developer, contribution is not just about feature development or bug fixing; there are a lot of other ways to contribute, like documentation and bug reporting. More than that, information is the most powerful thing: if you can engage with the community and tell us how you're using the projects, through the user survey, the mailing list, and similar channels, that kind of feedback and involvement is great for us, helping us develop stable software in OpenStack and provide new features. One thing I would like to mention about this release is that it is the first release you can skip-level upgrade to. OpenStack Antelope (2023.1), the release before last, was our first SLURP release, which means OpenStack Caracal can be upgraded from Bobcat, the previous release, as well as directly from Antelope. We have been testing this upstream, and of course, if you do the upgrade and encounter any issue, feel free to give us feedback so that we can improve our testing.
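For operators planning that skip-level jump, here is a rough sketch of what the preflight checks might look like (a minimal illustration, not a full upgrade procedure; exact steps depend on your distribution and deployment tooling, and which `*-status` tools apply depends on the services you run):

```shell
# On the existing Antelope (2023.1) control plane, run each
# project's upgrade-check tool before jumping straight to
# Caracal (2024.1), skipping Bobcat:
nova-status upgrade check
cinder-status upgrade check

# After upgrading packages/containers to the 2024.1 versions,
# sync the database schemas, e.g. for Nova:
nova-manage api_db sync
nova-manage db sync
```

The `*-status upgrade check` commands report blockers before you restart services on the new release, which is exactly where SLURP testing feedback from operators helps upstream.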
We can keep improving how SLURP upgrade testing happens upstream, too. Next slide, please. A few quick updates from the Technical Committee side: we completed the technical elections for the next term, filling five Technical Committee seats, and we have many new and returning leaders in the community to lead the projects. Thanks to all those leaders; leadership is very important for our community and for the software development. The next big thing I would like to highlight is the new stable branch support policy. Most of you might know we had the extended maintenance support model. If you look at the table on the right side, we had the maintained phase, where we maintain any release for 18 months; that is what the upstream team calls a supported release, where we cut releases, fix bugs, and so on. After that, we used to have extended maintenance, where we kept those branches open for maintenance. We faced a few challenges with that model. The biggest was communication: operators and users thought it was the upstream project teams' responsibility to maintain those branches, while, when we created the extended maintenance model, we always expected that users, vendors, and operators would come forward and help maintain them. There was also no clear indication of when an extended maintenance branch could go end of life. So the extended maintenance model didn't work out that well, and we came up with a new support model which we call "unmaintained". With the term "unmaintained", we think it is now clearly communicated to everyone that those branches are not maintained by upstream project teams.
Instead, an external team will maintain them, made up of operators, vendors, or, because there is always overlap, some people from the upstream project teams as well. So we have an explicit team to maintain them, and only SLURP releases are eligible to move to the unmaintained state. Bug fixes can still land there, the same way as before, but there will be no releases from unmaintained branches. One more important thing to note, from the namespace perspective: previously we kept all those stable branches as stable/<release>, say stable/train, stable/yoga, or stable/wallaby. With the unmaintained model, we have renamed them to unmaintained/<release>, say unmaintained/victoria (Train is end of life anyway). Any branch that goes unmaintained is renamed from stable/<release> to unmaintained/<release>. There will be separate core groups: one global core group, plus separate core groups to maintain each of those branches. So if you know that those release branches are being used in your products, customer deployments, or services, feel free to come forward and step up to maintain them; we will obviously be there to help you with the process. As for the current status, we have moved Victoria, Wallaby, Xena, and Yoga to unmaintained, and any release before Victoria is end of life. There's a link here to the TC resolution, which has all the details, and there is a project team guide document as well, so feel free to read that. And as I said, the best way is to come forward: send us an email, ping us on IRC, or join us at the virtual PTG if you are interested in helping with those unmaintained branches, and we will involve you in the process. Next slide, please. Now one project update; these are very important for us and especially to communicate to everyone, users and vendors alike. As I mentioned, we had the technical elections for all the projects to select their leaders.
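In concrete git terms, the branch renaming described above looks roughly like this (a sketch; the repository and branch names are just examples based on the releases mentioned in the talk):

```shell
# Before the policy change: maintained series live on
# stable/<release> branches in each project repository
git ls-remote --heads https://opendev.org/openstack/nova | grep stable/

# After the change, a branch leaving the maintained window is
# renamed, e.g. stable/yoga becomes unmaintained/yoga:
git fetch https://opendev.org/openstack/nova unmaintained/yoga
git checkout FETCH_HEAD
```

Anything still pinned to a `stable/<release>` branch name in CI jobs or packaging will need to follow the rename.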
The 11 projects on that side are the leaderless projects, since no leaders volunteered or came forward to maintain or lead them. A few of them have already identified leaders: Freezer has adopted the DPL model, and OpenStack Charms and Skyline now have a few members volunteering. So if any of these projects impact your product or service, or you are involved in the community or using them, we encourage you to come forward, help us maintain them, and fill the leadership gap. The PTL role is not the only place we need you; you may sometimes feel that the PTL role carries a lot of responsibility, so there is an alternate leadership model we have, called distributed project leadership, where the tasks and leadership activities can be spread across several people. On the right side are the currently inactive projects. Inactive means they have not been active: their gate is broken and they are not releasing. We currently have seven projects on that list, and because they have been inactive since the previous release, we might take action in this release or the next to retire or deprecate them. So again, if you, your company, or someone you know is using any of these projects, now is the time to come forward and take over their maintenance. Among retirements, projects like TripleO were retired in this release. So yeah, next slide, please. As you might know, the virtual PTG is happening next week, and that's a great platform; as I already mentioned, if you are using or maintaining any of those inactive or leaderless projects, feel free to join us at the virtual PTG. As a heads-up, the Technical Committee schedule at the virtual PTG starts April 8th, when we'll meet at 17 UTC to interact with the community leaders and have some conversations with them.
And obviously users, operators, and everyone else are welcome to join that. Then we have TC-specific topics to discuss on April 11 and 12. So again, we encourage everyone to come join us, give us feedback, and have a chat with us. Yeah, I think that's all from my side. Handing over to Allison.

Awesome, thank you. I think that's a really great update from the OpenStack Technical Committee, and it really shows what the call to the community is: we need more folks to come in and help contribute. So I guess one of my questions for you is this: a lot of organizations have reached out saying they want to contribute to OpenStack, and they see these projects, these opportunities to help define the next generation of OpenStack software. What's the best way for them to get involved?

Yeah, so we have the First Contact SIG, and you can contact them; they can help you with the process, the tooling, and everything. Otherwise, I'd say the easiest way is the openstack-discuss mailing list. All the developers, operators, and vendors are involved in that mailing list. If you just write to the list saying you want to get involved in a specific project, you might get one or two mentors helping you through the process, as well as into the technical discussion about how the project is shaping up. Don't think you need to be an expert in any of the projects. Even if you are completely new, we have had cases in the past, with Mistral and Freezer for example, where people just said they wanted to help but didn't know anything about the project. That is also fine: you can come join the community and pick up all the knowledge about the project there. We also have IRC channels for all of the projects, and the Technical Committee has its own channel as well.
So feel free to ping us; we are distributed across almost all the time zones in the world, and some of us will get back to you with the next steps.

Awesome. Well, that is a great call to action, and I think it can be intimidating to get started if you're not an expert. So the First Contact SIG, as well as the mailing list, are great entry points to get involved with the OpenStack community. I appreciate you sharing those, and hopefully anyone listening who wants to be part of this amazing group will reach out. We're a very welcoming, nice community, and we're building the next generation of open infrastructure software, so be a part of it. Well, thank you, Ghanshyam. If we have time for more questions at the end, we will bring you back.

Sure. Thanks, Allison.

All right. So now we're going to dig into the specific project components of OpenStack, and first up is OpenStack Nova. So welcome, Sylvain.

Thank you, Allison. I'm glad to see all of you back for the Caracal release. I'm Sylvain, and I've been working on OpenStack Nova for a decade. Now, what about the Caracal release? For Caracal we had 19 blueprints, feature requests if you prefer, proposed, and we accepted them; eventually, over this cycle, we merged the implementation of 10 of them. For what it's worth, that's a good cycle for us, because the other nine blueprints weren't merged for various reasons, not because nobody was looking at them. On the bug fix side, at least 28 bug fixes were merged, so it was also a good cycle because of that. Basically, kudos to the team. Ghanshyam already discussed Caracal being a SLURP release; as a reminder, at least for Nova, if you use Antelope you can rolling-upgrade all your controller services directly from Antelope to Caracal, and it will work.
You can then continue to use your Antelope computes with the Caracal services, and it will work. When you want, you can then upgrade your Antelope computes directly to Caracal, and it will continue to work. Next slide, please. So, about what we merged for Caracal: we had, as I said, 10 features. The first one was making sure that you can live migrate your instances with virtual GPUs. Nova has been able to use flavors asking for virtual GPUs since Queens, but when you wanted to live migrate those instances, you had problems. Now Nova supports it: it asks QEMU to move the virtual GPU, and it works. Also, as you may know about GPUs, the newer GPUs from NVIDIA have SR-IOV support. The problem we had was that, because of the driver, we hit some bugs, since placement wasn't supporting them. What we did in Caracal was modify a few things to be able to support SR-IOV GPUs; for example, A100 GPUs can now be used with Nova. Another feature we merged concerns the Ironic driver. Before, if you had several computes and a set of Ironic nodes, you needed workarounds to distribute them. Now we have created a shiny new sharding model for that; please look at the documentation to learn how to use it, but basically it will work, and you won't have the problems you had previously when sharding your Ironic nodes. We also merged a feature that brings new performance for instances using virtio-net: around 20% or even more if you use virtio-net. So again, look at our documentation about that.
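As a rough illustration of the vGPU workflow just described (the flavor, image, network, and server names here are made up; `resources:VGPU=1` is the existing placement-style extra spec for requesting a virtual GPU):

```shell
# Create a flavor that requests one virtual GPU through placement
openstack flavor create --vcpus 4 --ram 8192 --disk 40 \
    --property resources:VGPU=1 vgpu.medium

# Boot an instance with it
openstack server create --flavor vgpu.medium \
    --image ubuntu-22.04 --network private gpu-vm-1

# With Caracal, such an instance can now be live migrated; before,
# the API accepted the request but the vGPU state was not moved
openstack server migrate --live-migration gpu-vm-1
```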
Another security feature lets you opt in to making sure that if a token expires, a user can't just continue to look at the server console: if you opt in, Nova will automatically close the server console when the token expires. If you don't want to do that, it's not a problem, since it's opt-in; but if you are worried about users still holding a server console even after their token has expired, that is no longer possible. Another feature concerns availability zones. Most operators were asking us about this: they couldn't tell whether it was possible to move an instance out of a specific availability zone, because that depends on whether the user asked for a specific AZ when creating the instance. So now a new parameter is returned that tells you, for example, that this instance is pinned to a specific AZ and you can't move it out of that AZ; that's because the user wanted the instance in that AZ, and if you moved it, the user would wonder why. Another feature is a configurable memory address space for instances, which is better if you want an instance with a large amount of memory; again, look at our documentation about that. We also have a lot of stability improvements. As this slide says, we now detect the maximum number of instances with memory encryption, since we support memory encryption for instances. You are now able to provide a hostname instead of an IP address for the inbound address of a live migration. And something you wouldn't see directly: we now use libvirt device aliases (not QEMU ones) for block device management, so you should have fewer problems with block devices in Nova. The Hyper-V driver is now fully removed, because unfortunately the driver wasn't
being tested; we had no way to test it, so that's why we removed it. We also made a few tweaks to the CPU power management that we delivered in Antelope; you can look at the documentation. Next slide, please. As for what we'd like to discuss at the PTG, there are a few things, as you can see: making sure you can provide Nova services on top of Kubernetes, extending the memory encryption support we currently have, stateless firmware support, and OpenAPI schemas for the API docs, basically so that you don't need to dig through the documentation by hand. Anyway, look at those topics. And if you want to bring your own topic to the PTG, you're an operator, and you're hesitant to discuss with the Nova community: don't be afraid of discussing with us. We have time for that. If you really want to talk with the Nova team, because, for example, you found a bug or you would like other features, discuss it with us. You don't need to be around the PTG for the whole week; just add your topics to the etherpad that you can see here, and I will ping you so that we can organize a specific time slot with you to discuss your item. So again, don't be afraid of discussing with the Nova community, because, as Ghanshyam said, we need you in order to understand what exactly you want us to provide in the next cycles. That's basically it for me. Next slide, I guess. Sorry.

I have a quick question, but thank you for the update. I think there's always a lot of momentum and activity in the Nova community, so yes, if you're interested (and a lot of you are running Nova), please come participate. I did have a question about a particular feature that we've gotten a lot of excitement over, and I just want to see why this is such a significant advancement in Caracal. You talked about the live migration of instances with vGPUs.
Before this was enabled, what did that process look like? Why is this such a milestone for the Nova project?

Basically, when you want to use instances with vGPUs, what you need to do is provide a specific flavor. Previously, when you then wanted to live migrate such an instance, Nova unfortunately wasn't raising an exception, and we should have been raising one. The Nova API was saying: okay, yes, you can live migrate this instance, it will work, hopefully without problems. The point is that neither Nova nor QEMU was being asked to move the memory of the GPU between the two computes. What we did in this specific cycle was have Nova ask QEMU to look at the specific vGPU on the source, look at the corresponding vGPU on the target, get them to talk to each other, and pass the memory from the source to the target, just as is done with the rest of the instance memory. That's what we do now with QEMU.

Awesome, well, that is such a great advancement, and I really want to congratulate the Nova project team and the community who built it together. So thank you for that, and thank you for the update. We will bring you back at the end if we have more questions.

Thanks, and thanks everybody, by the way.

All right, so next on our list of projects is Ironic. We heard a little bit about the integration with Nova from Sylvain, but now let's hear from Ricardo about the advancements in the Ironic project in the Caracal release.

Hello, everyone.
I'm Ricardo, and I've been a core contributor to Ironic for five years now. I'm very proud to say that for the next cycle I was elected PTL; it's going to be my first experience as PTL, and I'd like to thank Jay for serving as PTL for Caracal, and of course all the contributors, because Caracal was a great cycle for Ironic. So let's dive into the numbers for Ironic in Caracal. First of all, I'd like to mention the number 10, which in this case is very important for us: Ironic's first release was in Icehouse, in 2014, so happy 10th birthday to our metal bear, the Ironic mascot. And, as I said, we had a great cycle; just look at the numbers. We had over 360 commits, for a total of more than 30,000 new lines of code, with 30 contributors from 30 different companies, and 134 bugs fixed. And I'm very happy to say that we also have some new contributors, I think two or three, which is great because we're always in need of new contributors. Next slide, please. So let's have a look at some highlights and new features of Ironic in Caracal. Probably the biggest feature was the support for UEFI HTTP boot. UEFI is the Unified Extensible Firmware Interface, as opposed to the legacy BIOS. UEFI has actually included specifications for HTTP boot since 2015, so it was about time to add that support. It is an evolution of the classic PXE boot: it's still network boot, but it uses HTTP(S), which is much better suited to modern network infrastructures and security policies than classic PXE. It can use encrypted connections, and of course it runs over TCP, which is much more reliable than UDP; and instead of the TFTP server used by PXE, it only needs HTTP, which is definitely far less cumbersome to set up and maintain than TFTP. The second feature is integrated node auto-discovery and in-band inspection. This is in line with simplifying and unifying the Ironic
service and the Ironic Inspector services. As part of integrating the Inspector functionality into Ironic, we've added auto-discovery and in-band inspection. In-band inspection enables the inspection feature of the inspector agent now integrated in Ironic, and auto-discovery allows new bare metal nodes to be enrolled automatically by booting the IPA ramdisk, the Ironic Python Agent: a special ramdisk that runs on a bare metal node, provides information in the form of inspection data, and then adds the node automatically into Ironic. Then we added support for OVN VTEP switches, both physical and logical. OVN, the Open Virtual Network, uses a special VTEP port to control top-of-rack switches, and supporting this gives bare metal network ports the ability to bind to tenant networks, and to gracefully move a port between provisioning and tenant networks. Another improvement was attaching a generic virtual media device through the Redfish driver. The Redfish driver is changing quite quickly, let's say, as things move away from IPMI, and it allows virtual media devices with ISOs to be mounted remotely. In this case we added a generic feature that allows ISOs to be mounted after a node is provisioned. This can be very useful if, for example, we want to provide some data in an isolated environment and have a virtual CD-ROM standing there. The next feature is HTTP basic authentication support for image and image checksum download. When the Ironic Python Agent loads on a node and prepares it to be provisioned, it has to fetch an image that is going to be written to the hard drive. Before this feature, a node would simply connect to the remote location of this image; now, as a security improvement, HTTP or HTTPS basic authentication can be used for this process. And another one was the improved
management of API resources, as a performance and stability improvement: Ironic is now able to reserve some threads for interaction with the Ironic APIs, which helps prevent API calls from failing. Next slide, please. Thank you. Even more features; we actually had a very big cycle, with a lot of things to talk about, so it was hard to choose between all of them, but let's continue with some lighter items. We have some CI improvements: basic testing for OVN-based deployments, related to the VTEP OVN support; we now support TinyCore 14 for building the Ironic Python Agent ramdisk used as a testing ramdisk in CI; and we are experimenting with codespell as a test to check for common misspellings in text files, which is very useful because it is tailored to actual development language files. We also had a good improvement in the integration with Metal3, a CNCF project that a lot of Ironic cores are actually involved with. It's a way to deploy Kubernetes infrastructure directly on top of bare metal, and it uses Ironic as its main engine. Now Metal3 releases are tied to bugfix or stable Ironic releases, so the connection is even deeper than before. We added support for Debian Bookworm, which is great because Bookworm uses Python 3.11 by default. We have an amazing collaboration with the Debian developers who maintain the OpenStack packages, including the Ironic packages, and this helps a lot in catching potential issues when building the packages, and also surfaces deprecations in the code. All right, and then we decided to deprecate a lot of old hardware drivers in favor of Redfish-based ones. Again, Redfish is the current standard for communicating with bare metal BMCs, the baseboard management controllers, so the old hardware drivers are slowly going away; we have a lot more Redfish-based drivers, and it's time for the old drivers to go. All right, next slide, thank you. These are some
quick previews of the PTG topics that we're going to discuss next week. Unfortunately, we have decided to deprecate TinyCore support, because it has some limitations that over the years we have not been able to overcome: there is no great support, not a very big community, and it's rather cumbersome for us to maintain at this point. We're going to explore some alternatives during the next cycle, so as you can see, it's also something of a call for help, because we use the TinyCore image as a base for the Ironic Python Agent ramdisk in CI, so this is quite impactful for us. Another big topic is the Inspector merge into the main Ironic service. The process has already started, as you saw in the feature presentation earlier, and we'd like to finish migrating the Inspector features to the inspection agent in Ironic directly; the final goal will be to deprecate Inspector and just use the Ironic services. And of course we will have a lot more Redfish; again, it's a very important topic at the moment, since Redfish is the standard, if not the only standard, for communicating with bare metal nodes, and we're going to continue adding support for its features. We're also going to look at new vendor implementations integrated in next-generation hardware, like HPE iLO 6 and iDRAC10. Next slide, please. Thanks. So I remind you all that next week we're going to have the virtual PTG, and Ironic will be there. If you want to know more about the future of Ironic, and you're an operator or a contributor, or even better, both, it would be great to have you there, so please feel free to join us. As you can see, I'm also mentioning an Ironic meetup that we will have at CERN in Geneva, Switzerland at the beginning of June. It's going to be co-hosted with the OpenInfra Days opening June 6th, so if you want to meet us in person this time, I hope to see you there in Geneva. Thank you all.

All right, thank you, Ricardo. That was a great update,
and first of all I want to congratulate you on being elected PTL. It's a great opportunity, and I look forward to seeing your leadership with the project and what you bring for Dalmatian. The list of features for Caracal was very long, and very impressive, so congratulations to the whole Ironic team. We did have a question from an audience member based on some of the plans for Dalmatian. The question was about the deprecation of TinyCore: do you have alternatives to TinyCore in mind?

Yes, we currently have a potential candidate, which is Gentoo. The point with TinyCore is that it was, well, very tiny when we started working with it, and unfortunately it's not that tiny anymore. That's one of its limitations: during the last cycles we had to constantly increase the amount of memory dedicated to loading the ramdisk for running the Ironic Python Agent, and we also found more and more bugs we had to fix ourselves, with tricks and hacks, and we really don't want to do that anymore, because we want to focus on developing actual features, not just maintaining tinyIPA. Gentoo has, of course, a big community and good support, and we can actually build very small ramdisks with it, so we're definitely going to explore that solution first. But of course, if there are proposals from operators or contributors, we are open to them.

Perfect. Well, thank you again for the update, and we look forward to hearing more from you during the Dalmatian cycle.

Thanks to you.

All right, so next on our list of project updates we're going to hear about Manila, so please welcome Carlos.

Hello, and thank you, Allison. I'm Carlos, the OpenStack Manila PTL, and I'm happy to share with you some of the Manila Caracal highlights, as well as some of the plans we have for the Dalmatian cycle, and also to invite you to join us at the PTG. So, we had a bunch of new core features, and a bunch of them are pretty much
related to usability, and I'll get to all of them. The first one I would like to highlight is the support for custom export locations. What does it mean in practical terms? Well, if you use Manila, or any shared file system, in the end, when you are mounting a share, the mount path is a gigantic string that is very difficult to remember. With this feature we can enhance usability and allow users to specify a name for the mount path that they will remember in the future, and that name will be prepended with a configured prefix or the project name. That way it's easier for users to know what the mount path of their shares is going to be, and it's easier for them to remember it whenever they need to mount the share again. So that's one of the nice features. Another one is being able to provide a reason while disabling a Manila service. That one is pretty useful when you are doing scheduled maintenance or something like that and you would like to record a reason why that service is disabled, so that someone looking at the services list later can tell what's going on. Next one: shares and snapshots can now have their deletion deferred. This can also be configured by the administrators. If the deletion of shares and snapshots is deferred, what does that mean? Well, we send the request for Manila to delete the share, and Manila says, okay, I will release the quotas of this share immediately; then a periodic task checks the shares that are in the deferred-deletion state and carries out the actual deletion later. This is a very useful feature in case, for example, you have replication scenarios where the deletion of the share might take a lot of time, because you have to undo a lot of things under the hood, and you would like to avoid users waiting a long time for shares to be deleted. So that's another big win for
the usability. Another feature is that administrators can now configure the metadata keys that only they can manipulate. This is also a very nice feature in case you would like to prevent some users from updating certain metadata. We currently support metadata for shares, snapshots and export locations (only a few things can be modified there, and only by the drivers; support for that is going to be enhanced in the future), and we also support metadata for share network subnets. You now have a config option where you can specify which keys can be modified by regular users and which cannot, and those keys will only be available for modification to administrators or someone with more privileged access to the cloud. Also in the list of new core features is the enhancement to the share network create workflow in the Manila UI. We had a couple of changes in the Manila core, and the Manila UI was falling behind on those updates; what we are doing now is catching up on the changes in the Manila core. The workflow for share network creation will look a bit different, and later we will continue the work, introducing the share network subnets and their workflows to the Manila UI as well. That is a very good thing, and I also think it deserves a mention that an Outreachy intern was working with us on that feature. That's a very nice thing; we try to run internships all the time, and this is one of the things that we got done with the help of an Outreachy intern. Now for driver features: for the CephFS driver, there is a new feature when you create CephFS native shares using the CephFS native protocol. This was a request from operators during operator hours and also at other encounters we had with operators. What they were missing was, for example: you can have multiple file systems configured in Manila, and you might not know where the share ended up, in which file system. That information wasn't available, so you would need some help to look it up, and that
would make mounting the share a bit more difficult for humans and also for automations. But now we added that: through a config option, the file system name will be available in the mount options, so automations and users will be able to easily pick it up. It's in the share's metadata when it's created, and we also added something that will add it to existing shares when you do the upgrade, so it's a pretty useful thing. We had a couple of requests for that in the past, and I'm happy that this got done. For new driver features, the last one that I would like to mention is the vendor backup approach for NetApp backends. We implemented share backup a couple of cycles ago, but now it gets more interesting for people deploying with NetApp, because NetApp implemented a vendor-specific driver approach, and you can use the NetApp hardware for backing up your shares and restoring your shares. So that's all for the new core features and driver features. Now let's talk a bit about some other things that happened in the Caracal cycle. We had a couple of deprecations of drivers: the GlusterFS Native and NFS drivers and the Windows SMB driver, among others. We didn't remove the code, but we added a deprecation warning saying that we intend to remove them in the future. We decided to do this in a SLURP release because our plan is to actually remove the code for these drivers in the next SLURP release, so in 2025.1 we intend to remove these drivers. There is also the deprecation of the standalone NFS Ganesha support in the CephFS driver; please use clustered NFS Ganesha instead. It sounds like a scary deprecation, but please don't panic: this doesn't mean that we are dropping support for NFS with CephFS. We are only stating that in the future the in-tree dbus-based NFS Ganesha interface will be removed in favor of a more advanced approach using the Ceph manager APIs to create NFS exports. This
new method will allow you to deploy NFS Ganesha easily as a highly available clustered service. This is leaps and bounds better than having to stand up the NFS Ganesha server yourself, probably with Pacemaker doing the high availability, so it's more advanced. We have been heading this way for a couple of releases now, and we intend to actually remove this protocol helper in the 2025.1 cycle; it's just a deprecation warning being added now, with the removal coming in the future. As for our plans for the Dalmatian release: finally, we have merged the spec for integration with Barbican. What does it mean? This is back-end share encryption, not front-end, so users will be able to create encryption keys in the OpenStack Key Manager service for Manila shares. Another feature we are planning is providing an API for the ensure-shares mechanism, which would essentially help people who are recalculating the share exports and also picking up fresh configuration without restarting the Manila service. We have had a lot of feedback from operators on this, so we'll work on the spec and implement it during this next cycle. Also, CERN is working on a driver for backups; it's called cback. They have developed this pretty nice tool and they are trying to integrate it with Manila, so it will be a vendor backup driver approach again, and it makes use of Ceph storage while backing up shares. So that's another very nice thing. We are also planning a couple of new features for the CephFS and the NFS drivers. All of those will be discussed at the PTG, so if you check the next slide, please, there is the information about the Manila PTG. We will be meeting in the Juno room, and we will meet on Monday, Wednesday, Thursday and Friday, not Tuesday. Other than that, the PTG Etherpad is already available at that link, so if you would like to add a topic, please add it there, and then I will reach out to you and we can discuss more details.
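As a rough sketch of how a couple of the Caracal features above would surface in configuration: note that the option names here are reconstructed from the talk and are assumptions, not verified settings, so check the Manila 2024.1 documentation and release notes before using them.

```ini
# Hypothetical manila.conf fragment -- option names are assumptions
# reconstructed from the talk; verify against the 2024.1 release notes.
[DEFAULT]
# Metadata keys that only administrators (or similarly privileged users)
# may set or modify; key names below are purely illustrative.
admin_only_metadata = billing_code,maintenance_window
# Defer share/snapshot deletion: release quotas immediately and let a
# periodic task perform the actual backend deletion later.
is_deferred_deletion_enabled = true
```

Both behaviors are operator-side knobs: end users see only that quotas free up right away and that certain metadata keys reject their updates.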
So that's all for my updates. Awesome, well, thank you. One thing I wanted to call out specifically that really stood out to me, talking about the Dalmatian roadmap, is that CERN, an operator of OpenStack, is pushing something specifically for integration with Manila, and I think that really highlights: operators, we welcome you into the development community. It's really important to have that involvement. If you see requirements, if you have needs for running your OpenStack infrastructure, please join the developers and get involved that way as well, because I think that's a really impactful way to contribute. All right, well, thanks, Carlos. We are running short on time, so I'm just going to go on to the next presenter. For our last project update we are going to hear from Rajat about OpenStack Cinder. Thanks, Allison. Hi, I'm Rajat. I have been the PTL of Cinder for the past four releases, and Caracal was my last release, so I will talk about it. We can first move on to the next slide, please. Okay, so just a quick bunch of highlights. We added a new core to Cinder, Pete Zaitcev, and Pete has been around in OpenStack for a long time; his contributions go way back to 2012, and he is also a core in Swift, so he is well known in the community. He has been contributing to Cinder for the past year, with a good number of quality reviews and a bunch of features and bug fixes, so it was a great addition to the team. Other than that, we had 42 commits, and these were not small commits: if you take a look at the lines of code that were changed, it's about 14,000, so it was really good. And we had 23 different companies that contributed, with 63 different contributors. You might notice that the contributor number exceeds the commit number; that's because we count reviews as a contribution as well, so a bunch of people have reviewed the patches in Cinder, and that adds to the contributors. So yeah, great things in the Caracal cycle for Cinder. Moving on to the next
slide, we had a bunch of drivers that were marked unsupported. If you see the list of these drivers, three drivers from Dell and two Windows drivers were marked unsupported and deprecated, and the reason for deprecation was that we didn't have active CI support for them. We even talked to the vendors: Dell told us that they are not planning to maintain these drivers, the Storage Center, VNX and XtremIO drivers, because they are focusing more on the PowerFlex and PowerStore drivers. If you want to keep using these drivers, there is a configuration option in Cinder, enable_unsupported_driver, which you need to set to true to acknowledge that you are aware you are using an unsupported driver. So yeah, moving on to the next slide. We had a bunch of driver-specific features, which I thought would not be very useful to mention here because they were very specific to their drivers, and we also had a bunch of improvements in our NVMe and Fibre Channel area. But this one felt the most important to me, since in the user survey we have seen that most of the people that deploy OpenStack prefer Ceph and RBD as their storage solution: we have a new feature in RBD, which is the trash functionality. The main problem case we had was, when you create a snapshot from a volume and then create another volume from that snapshot, you are not able to delete the snapshot because the volume has a dependency on it. This worked for all the drivers except RBD, which was a big issue for a lot of people. We have finally fixed that issue: now you are able to independently delete any volume and any snapshot, and it is made possible by the RBD trash functionality. To take advantage of this feature, you need to enable trash purging on your RBD side, so that Ceph keeps checking whether an RBD image is trashed and takes care of removing it, and in Cinder we have the config option enable_deferred_deletion that enables this trash functionality. And we can see that when we delete a snapshot which has a volume
dependent on it, we have to flatten that volume, which might be a very storage-consuming operation, because we usually do copy-on-write clones in RBD and now we are flattening the whole volume, so it might take a bunch of storage. And if you are performing concurrent operations, say 10 snapshot deletions at the same time, with 10 volumes over 1 TB getting flattened, it might put a lot of load on your system. So we have a new configuration option, rbd_concurrent_flatten_operations, which limits the number of these operations that can happen in your deployment; it defaults to 3 for now. And yeah, moving on to the Dalmatian PTG: we have a new PTL, Jon Bernard, and Jon again has been contributing to Cinder for a very long time. He is a core reviewer and he is very active in the community, so he will be leading the PTG this time. We are conducting our sessions from the 9th to the 12th of April, which is Tuesday to Friday, from 1300 to 1700 UTC, and the room is Newton. If you follow the PTG link and you click on the Cinder tab in the Newton room, you will be redirected to the meetpad link where we will be conducting our PTG, so it is a very easy way to enter the PTG, and for other projects as well. I would just like to encourage you to add more topics and give us feedback on our project. That is pretty much it for Cinder. All right, well, thank you, and I know this was your last cycle after many serving as the PTL, so I just wanted to thank you for your leadership in getting the Cinder project and the team to where we are today. I think we even had someone call out their excitement around the functionality you just presented, so thank you; I think your leadership was very impactful in the OpenStack project. Thanks, Allison. So those are all the projects that we had today, and I wanted to wrap up the episode with a few more links and information on how to get involved in the OpenStack community, as well as what there is to do. So every single
presenter today talked about the upcoming PTG. For those of you who might be new (I think we had someone ask in the chat), the PTG is called the Project Teams Gathering. This is where all of the different project teams within OpenStack, as well as other OpenInfra projects like Kata Containers and StarlingX, and even other open source communities that integrate with our projects, like openEuler, come together to talk about the latest features that have been developed in their projects and any feedback they have received, and then start working together on plans for the next release. It's a great opportunity to really see the breadth of OpenStack projects where you can get involved. As you can see here on the slide, we have lots of different teams that are going to be meeting next week; registration is free and it is held virtually. You know, one of the things that I'm most impressed by with our community is how global it is. You can see in the comment section we had people from Sweden and Puerto Rico and Korea; we had presenters from Brazil and France. I think that's one of the most exciting parts of our community, so wherever in the world you are, the PTG is a great place to come learn more about the different projects that you may want to get involved in and what their plans are, so that you can start being part of the solution and building the next decade of open infrastructure. Please join us; it's a great opportunity to get involved. We also have other events that are more conference-style, where you can hear how some of these projects are actually being used in production, and there are also still some highlights on development plans that you can hear from presenters like we had today. The first one we have is an OpenInfra Days roadshow, as we're calling it, in Europe in May and June; we have, I think, five OpenInfra Days now, as well as some supporting meetups. I know the CERN one was mentioned, but these will be happening
throughout Europe, so it gathers the local community to discuss what the latest trends are around open infrastructure, what some of the open source development opportunities are, and just what the future looks like. Registration, sponsorship and speaking opportunities for all of these events are available at openinfra.dev/days, and we dropped a link if you would like to get involved in any of these events. Another event that we're really excited for this year is the regional OpenInfra Summit Asia. We've held an OpenInfra Summit for over 10 years, but this year we're doing it a little bit differently: we're working really closely with the regional OpenInfra groups within Asia for them to host the event, supported by the OpenInfra Foundation. The OpenInfra user group there is very active (we actually have a few of their members in the chat right now who have been watching along), and they've held several successful OpenInfra Days in Korea, so we're excited for them to launch the regional OpenInfra Summit Asia later this year, in September, in Korea. This is another great opportunity to meet with the community and learn what some of the regional trends are from wherever you are in the world; it's not limited just to folks within a particular region. The CFP is open with a deadline of May 29th, and registration will be opening very soon; you can find more information at openinfra.dev/summit. We hope that we see you at one of these upcoming events, depending on how you want to get involved in the community. So thank you for tuning in today to this episode of OpenInfra Live. The recording will live on on YouTube and LinkedIn, so please feel free to bookmark it and share it with folks who you think might be interested in the latest developments of the OpenStack software. Until next time, see you around the OpenInfra community.
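As a quick reference for the Cinder features Rajat covered, here is a hedged cinder.conf sketch for an RBD backend; the `[ceph]` section name is just an example and should match your own enabled_backends:

```ini
# Illustrative cinder.conf fragment for the RBD features discussed in the
# Cinder update. The [ceph] backend section name is an example only.
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Opt in to the RBD trash, so a snapshot with a dependent volume can be
# deleted independently; trash purging must also be enabled on the Ceph
# side so trashed images are eventually removed.
enable_deferred_deletion = true
# New in Caracal: cap how many flatten operations (triggered by such
# deletions) run at once; the default is 3.
rbd_concurrent_flatten_operations = 3

# For any driver marked unsupported (e.g. the deprecated Dell drivers),
# you must explicitly acknowledge it per backend section:
# enable_unsupported_driver = true
```

The flatten cap matters because flattening turns a copy-on-write clone into a full copy, so concurrent deletions of large volumes can otherwise saturate the storage backend.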