I hope everyone's having a good day. I wanted to kick off this meeting. We're going to be talking about the OpenStack Ussuri release. We'll cover a little bit of what happened over the current release, and we have a few community members who came in to talk about the work they did inside their own projects. A few housekeeping things: we are recording this so that we can share it with the community for those people that can't attend, so I ask that everybody keep their microphones muted so we have good audio.

Just a brief introduction: my name is Mohammed. I'm the current chair of the Technical Committee, and I'm also a board member of the OpenStack Foundation as an individual member. I'm really, really excited to talk about the new Ussuri release and have our community members share what they worked on this cycle.

So first of all, I think this is an amazing number that we always look at: just how many code changes get accepted in a single release. It's an amazing effort from our community to still be able to drive so much change throughout the project. But what's more interesting is that it's more than a number: there are a lot of bug fixes, improvements, and features behind all of these accepted code changes, and we'll learn more about a few projects that are going to be sharing their stories today.

But first I wanted to talk a little bit about community goals. OpenStack has this idea of community goals, where we pick something that we feel will drive the community and all of these projects to improve together, and we usually have champions who try to drive that work across all of the projects. There were two community goals for the Ussuri cycle. The first was creating project-specific contributor and PTL documentation, which pretty much creates a set of contributor documentation so that any new contributor trying to onboard onto a project has all that information available in one easy place. Kendall Nelson from the OpenStack Foundation did an amazing job helping drive that forward. Our second community goal, which is probably long overdue, is dropping Python 2.7 support. As you know, OpenStack has gone full Python 3, with Python 2 going end of life. We had a lot of compatibility-layer code to make sure that we supported both Python 2 and Python 3, and with this goal, driven by Ghanshyam Mann, we were able to drop all of the cruft and extra code that was there to support Python 2 and make it so the code just works on Python 3, with no workarounds to support both versions of the language.
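To make that cleanup concrete, here's an illustrative before-and-after of the kind of six-based compatibility shim this goal removed across the projects; the function is hypothetical, not from any particular repository:

```python
# Before: a typical Python 2/3 shim using the six library (hypothetical example).
import six


def normalize_name(name):
    # Under Python 2 this had to handle both bytes ("str") and unicode.
    if isinstance(name, six.binary_type):
        return name.decode('utf-8')
    return name


# After the goal: plain Python 3, no compatibility layer needed.
def normalize_name_py3(name):
    if isinstance(name, bytes):
        return name.decode('utf-8')
    return name
```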
So the next thing I want to talk about is our PTG. Our PTGs are something that's very popular inside our community. A lot of our community members look forward to seeing each other and talking with each other, both to hang out and catch up, since we work together a lot, and also to drive more in-person discussion: share some of the use cases, discuss some of the specs that are proposed, and move a lot of discussion along quicker, since that's a lot easier to hash out in person. Historically, we've always had this event in person with every release. Unfortunately, given the current circumstances, the Foundation has decided that for the safety of everybody we should do this remotely. The schedule is up for the Project Teams Gathering that's coming up, which is going to happen from June 1st to June 5th, and you have the ability to go and register online. If your team hasn't signed up for it, please make sure to contact ptg@openstack.org just to make sure that you have a schedule and a time slotted for you. I'd also like to express our thanks to the Foundation's Platinum and Gold sponsors, because they're really helping make the PTG happen, especially with how it's changed into a remote, online-only model.

The next thing I want to talk about is the OpenDev event. It's a virtual event series that is really focused around discussion and working together. It's not necessarily something where you just come on and sit and hear somebody talk about something; the idea is that everyone can come in, share whatever they need to discuss with other operators or other people involved in the same subject, share some of their challenges and hear the challenges that others deal with, and hopefully come away with something useful from sharing that information. It's a three-part event across different dates with three different themes: large-scale operations, hardware automation, and containers in production. Registration is currently open for the large-scale operations event; the other two should be opening sometime this week, from what I understand, so look out for that. And that's it about the OpenDev event.

What we're going to get into now is the project highlights. We have several community members here to talk a little bit about what their projects accomplished in the past cycle. Just a small housekeeping note: I know they're going to share some very detailed and interesting information, but if anyone has any questions, feel free to drop them in the chat at any given moment. At the end of all the community member presentations, I'll go over the chat, moderate the questions, and get our community members to answer them. So to start off, we're going to talk about Cinder, our block storage project, and its features from this cycle.

Hi, I hope you can hear me. I'm Brian Rosmaita. I'm a senior software developer at Red Hat, and I'm the Cinder PTL for the Ussuri cycle that just ended and for the next one coming up. So here's some basic stuff about Cinder: we provide the block storage service, which you can use through the REST API, along with the other functionality that goes with it, the scheduler, the volume service, and a whole lot of drivers for various vendor backends. We also have client libraries: os-brick, which is used for making connections, and cinderlib. Cinderlib is an interesting project that's being used by Ember-CSI, a container storage interface driver that allows Cinder's drivers to be reused in a container context. The advantage to operators is that you get drivers for backends that have been tested with Cinder, but they can also be used in a different context.
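To give a feel for cinderlib's standalone use, here's a minimal sketch along the lines of the cinderlib documentation. It assumes cinderlib is installed and an LVM volume group named cinder-volumes exists on the host; the backend name is arbitrary:

```python
# Minimal cinderlib sketch: drive a Cinder backend driver without running
# the Cinder services (assumes an LVM VG called 'cinder-volumes' exists).
import cinderlib

lvm = cinderlib.Backend(
    volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
    volume_group='cinder-volumes',
    target_protocol='iscsi',
    target_helper='lioadm',
    volume_backend_name='lvm_demo',
)

vol = lvm.create_volume(size=1)    # size in GiB
attach = vol.attach()              # attach locally via os-brick
print('volume available at', attach.path)
vol.detach()
vol.delete()
```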
So that kind of keeps us relevant in these container-oriented times. Just some things about project health. I just looked at commits; if you go on Stackalytics you can also look at reviews and other things, but this is just the number of commits. We're seeing a trend that's a little, well, very distressing: in Stein and Train we had roughly 150 contributors from around 40 to 50 companies, and in Ussuri we've had 30 contributors from 13 companies. This data is from Stackalytics as of April 22nd; I don't imagine a lot has changed. The percentage you see on the right is the percentage from my company, Red Hat. In Ussuri we did 66%. That's good and bad. The good is that we're competent people and we're very interested in keeping OpenStack running, but the downside is that it would be nicer to have more diversity of opinion and interest in our project. So I just want to bring that to your attention in case you can influence developers who might be interested in working on Cinder, the block storage service.

Okay, so as far as content in this release: a very important part of Cinder is the backend drivers for the various backends. We've got 68 right now, and there are seven more in unsupported status. What that means is that in order for a driver to be considered supported, it's got to have a running third-party continuous integration system, because that's the only way we can test with all the different backends that are available, and depending on vendor interest and various things, those go in and out. To compare, it's roughly the same number of drivers that we had in the previous release; the difference is that the number of unsupported drivers has gone down, so that's good. So thanks to the vendors for getting their CI going and keeping it running.

I want to mention we have one security note associated with Cinder. You can go look that up on the OpenStack wiki; I announced it on the mailing list back in early December, just in case you missed it. It concerns a configuration option used with the Ceph backend. The reason I want to mention it here is that the way we're going to fix this is to just remove that configuration option, because it does not seem to be something that's needed. So if you're using Ceph and you're using that option and you absolutely need it, you should get in touch with us as soon as possible so that we know that. But like I said, I put an announcement out on the mailing list back in December and never heard anything, so I'm assuming our plan to just remove it is fine.

As far as new features, nothing really major. We have a set of required operations that all drivers must support, and then it's up to the drivers whether to support the other ones; a lot of drivers added more capabilities this cycle, which is always nice (there's a rough sketch of that driver interface below). I should mention, as part of the nothing major, that we did add support for Glance multiple stores and for Glance image data colocation, which is important in the edge kind of scenario, so that's something to notice. But I do want to point you to the Cinder release notes; we always publish detailed release notes, and a lot of stuff is documented in there, so please take a look. And as far as stability went, we added more voting gate jobs and more testing, and we plan to continue to do that to keep the software stable and find bugs early.
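As a rough illustration of the "required operations" mentioned above, this is the general shape of a minimal Cinder backend driver. The class is hypothetical and the method list abridged; it assumes the cinder tree is importable:

```python
# Illustrative skeleton only: roughly what a minimal Cinder backend driver
# implements (hypothetical class, abridged method set).
from cinder.volume import driver


class ExampleDriver(driver.VolumeDriver):
    """Hypothetical backend driver showing the core required operations."""

    def create_volume(self, volume):
        # Allocate volume.size GiB on the backend.
        pass

    def delete_volume(self, volume):
        pass

    def create_snapshot(self, snapshot):
        pass

    def delete_snapshot(self, snapshot):
        pass

    def initialize_connection(self, volume, connector):
        # Return connection info that os-brick can use to attach the volume.
        return {'driver_volume_type': 'iscsi', 'data': {}}

    def terminate_connection(self, volume, connector, **kwargs):
        pass

    def get_volume_stats(self, refresh=False):
        # Reported to the scheduler for placement decisions.
        return {'volume_backend_name': 'example',
                'vendor_name': 'Example',
                'driver_version': '1.0',
                'storage_protocol': 'iSCSI',
                'total_capacity_gb': 100,
                'free_capacity_gb': 100}
```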
Okay, so what's going on in the future? We're already underway for Victoria milestone one, which happens, I think, two weeks after the PTG. We're working on volume local cache; work started in Ussuri and it's going to continue into Victoria. That affects os-brick, Cinder, and Nova, so you can look at the spec online if you're interested in that. We're working on encrypted volumes for NFS; there's a patch up for that already. Hitachi is adding a new driver, so their CI is running and the driver is being reviewed. Some existing drivers have already posted patches to add new capabilities, which is nice. And then there's an encryption effort to do in-flight encryption, and os-brick is going to get GPG encryption support as part of that, so that's something people are working on also.

And then the virtual PTG is coming up June 1 to 5. It's not too late to participate in the discussion, so if you have particular things you would like us to be aware of or address, please feel free to go to that etherpad and add something there. Some things we're definitely going to be discussing: an iSCSI driver for Ceph. There's a British company, I believe, that's interested in that. I mean, I think a lot of people are interested in it, but they're interested to the point where their HPC group has said they could provide some development help on it. We're also going to discuss keeping unsupported drivers in tree. We've had a strict policy in the past that basically as soon as a driver's CI starts failing and hasn't been reporting or isn't fixed, we remove the driver in the next cycle. We decided in this cycle to maybe leave the drivers in a little bit longer, as long as they're not breaking any of the Cinder gates, mainly to give vendors a little more time to address the problems, and so that there's not so much churn in the code with drivers being in tree, then disappearing, then coming back in tree when they get reinstated. Hopefully that's useful for operators. If you have an opinion on that, I'd be interested in hearing it, because it is kind of a pain, since we have to maintain these drivers if the vendors aren't actually doing it. And then there's a continued emphasis on stability and improved automated testing. We're planning to add a bunch of Tempest scenario tests to the cinder-tempest-plugin, which is run by the third-party CIs, and that way we can catch bugs before they happen, hopefully. And that's basically it. So thanks for listening, and I'd be happy to answer questions later.

I should probably hit the unmute button before talking, but thank you, Brian, for discussing all of that and sharing all the progress of the Cinder team. Next up is Neutron, our OpenStack networking project. Slawek, feel free to go at it.

Hello, thanks a lot. So I'm Slawek Kaplonski. I work for Red Hat, and I'm the PTL of Neutron; I was PTL of Neutron during the Ussuri cycle. So first of all, maybe a bit of stats from Stackalytics. We completed four blueprints in Ussuri and fixed more than 240 bugs, and that was done basically by around 58 individual contributors who sent more than 500 patches to all the Neutron and Neutron Stadium projects, though most of them were to Neutron itself. So that's a bit of statistics. Now maybe we can talk about those new features we implemented in Ussuri. First of all, as was discussed during the PTG in Shanghai and before, we merged the networking-ovn driver into Neutron.
Now it's one of the ML2 drivers, the same as OVS, Linux bridge, or SR-IOV, so we will maintain one more driver as an in-tree ML2 driver. The reason we did that is basically that we believe it will help bring more people to the OVN driver, as it will be in the Neutron tree rather than a separate Stadium project, so Neutron cores will look at this driver more often, and we hope it will also help us close some feature parity gaps, especially between the ML2/OVS and ML2/OVN drivers. In the next cycles we will probably also think about switching the default driver, for example in DevStack, to be OVN, as we believe it scales better and works better than ML2/OVS. But that's something we still have to discuss, and it's not decided yet.

The next feature we completed in Ussuri is support for stateless security groups. So now you can create a security group and mark it as stateless, and all the rules in this group will be stateless and will not use conntrack, which may be very helpful for use cases where offloading of the firewall rules is needed, since without conntrack it's much easier to do. Currently it's supported for the iptables-based drivers, so iptables and iptables-hybrid in the case of the OVS agent. There is no possibility to mix stateless and stateful groups on one port: if you attach a stateless security group to a port, you cannot also attach a stateful security group to it, so only one kind of group can be attached to a given port at the same time.

The next feature we did is support for role-based access control for address scopes and subnet pools. Before Ussuri, we had the possibility to share networks and QoS policies with different tenants using role-based access control. Now it's also possible to do the same with address scopes and subnet pools, so an operator can, for example, create a subnet pool and share it with the specific tenants they want to.

The last thing I wanted to mention about new features is the possibility to tag resources during the POST request, that is, during creation. Basically, this request came to us from the Kuryr team. Kuryr creates a lot of resources in Neutron, mostly ports, by calling the Neutron API, and before Ussuri, even when they created ports in bulk, many ports in one call, they then had to iterate over all of those ports and send a separate request to create tags for each port individually. Now it's possible to set those tags directly in the POST call, also when creating resources in bulk, so it may greatly improve the performance of Kuryr, for example, or other applications.
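Here's a hedged sketch of what that buys a consumer like Kuryr: one bulk POST that both creates the ports and sets their tags. It assumes a valid Keystone token and Neutron endpoint; the URL, token, and network ID below are placeholders:

```python
# Sketch: bulk-creating tagged ports in one Neutron call (illustrative;
# endpoint, token, and network UUID are placeholders).
import requests

NEUTRON = 'http://controller:9696/v2.0'   # hypothetical endpoint
TOKEN = '...'                             # a valid Keystone token
NET_ID = '...'                            # target network UUID

ports = [{'network_id': NET_ID,
          'name': 'kuryr-port-%d' % i,
          'tags': ['kuryr', 'bulk-demo']}  # tags set at creation time
         for i in range(10)]

resp = requests.post('%s/ports' % NEUTRON,
                     headers={'X-Auth-Token': TOKEN},
                     json={'ports': ports})
resp.raise_for_status()
# One request instead of 10 port creates plus 10 tag updates.
```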
Next slide, please. Oh yeah, we have two more things I wanted to mention. One is support for IGMP snooping: now an operator can enable IGMP snooping in OVS or OVN, depending on which backend driver is used. This is useful for multicast traffic, because with IGMP snooping enabled, OVS will only send multicast traffic to ports which are subscribed to the specific multicast group rather than broadcasting to all ports, so it may reduce multicast traffic on the bridge. And the last thing is the possibility to give dnsmasq a list of IPv6 addresses for one port. This request came from the Ironic use case, where hosts may use different IPv6 addresses during the boot phase: during the various phases of the boot process, a host can send different DUID combinations, and because dnsmasq got only one IPv6 address for a specific host, the second request basically ended up with no address available, so there were problems in the boot process of the bare metal machines. Now one port can have more than one IPv6 address, all of them will be listed in dnsmasq for this port, and for requests with various DUID combinations, dnsmasq will send the different IP addresses, so it will work fine. That last feature requires at least dnsmasq 2.81, because it relies on changes in dnsmasq to allow such a configuration. I know it's backported to some older dnsmasq versions, in CentOS 8 for example, but I don't know about other distributions. It also requires changing a config option in Neutron, to tell Neutron that you have a dnsmasq version which supports this and can be used.

That's all about new features in Neutron. I didn't mention it here, but of course we also dropped support for Python 2 in Neutron and all the Stadium projects. And one last thing about Stadium projects, which I also didn't put on the slides but want to say a bit about: during the PTG in Shanghai we discussed the status of the Stadium projects, and during the Ussuri cycle we deprecated neutron-fwaas (Neutron firewall-as-a-service) as a Stadium project because of a lack of maintainers. So in the Victoria cycle we will probably start the process of moving neutron-fwaas out to be an unofficial project, and during the virtual PTG in June we plan to review the list of the other Stadium projects and check which others may also have to be deprecated. So if you are interested in neutron-fwaas, or in keeping it in the Stadium, or in any other Stadium project, please feel free to contact us and say that you want to maintain such a project, and we'll be more than happy to work with you on that. And that's all from me. Thank you.

Thank you so much, Slawek. That was a very great presentation, and it's awesome to see so many things happening in Neutron, especially work alongside other OpenStack projects like Kuryr. Next up is Nova. I wanted to give an extra thanks to Gibi, who stepped up since Eric Fried wasn't able to finish his duties as PTL, and also thank Eric for his amazing work throughout the project. But Gibi, I'll let you take it over from here.

Okay, thank you. So I am Balazs Gibizer, known there as gibi, and I will talk about what we did in Nova in the last six months. Small stats: we had 86 individual contributors for the Ussuri release, and with them we merged around 900 commits and implemented 19 blueprints. That's, I think, average performance for the Nova team. I will talk about some of the features, but I suggest you read the Nova release notes, which contain a lot more detail and a longer list of the features we implemented; these are the most important ones. So first, we merged cross-cell migration and resize support in Nova in Ussuri. Nova allows you to separate your compute hosts into cells for security and scalability reasons, but so far we haven't supported moving Nova servers between cells.
With Ussuri, we support cold migration and resize to move them. Right now we don't have the manpower to develop this further, so additional server operations like live migration and evacuation are not planned to be supported across cells, but if you are interested in those features, please contact us and we'll get you up to speed on how to contribute them.

The next one is a long-standing request: support for pre-caching Glance images on compute hosts. Nova does on-demand caching of Glance images: when you create a server from a Glance image not yet used on a compute host, Nova will download that image and store it in a cache on the compute. But when your user uploads a new Glance image into OpenStack, the first boot from that image on each compute will be slower because of this image download time, so it's a logical request to have a way for deployers or administrators to pre-cache images on certain computes. We added support for that in the aggregates API, so you can select which Glance images should be downloaded to the computes in a given host aggregate (there's a small sketch of this call below).

Okay, the next one is a cooperation with the Cyborg project in OpenStack. Cyborg provides support for accelerators, mainly FPGAs and physical GPUs, and in the Ussuri cycle we implemented the first step of integrating Nova with Cyborg. Basically, now you can create a Nova server with a Cyborg device profile in the flavor extra specs, and with that, Nova will contact Cyborg during the server creation process and ask Cyborg to prepare the devices requested in the device profile. Cyborg will find the devices and load software onto the FPGAs, for example, then let Nova know when the devices are ready, and when Nova creates the VM in the hypervisor, it will attach the Cyborg devices to that VM. So you can create servers with FPGAs and physical GPUs to accelerate your processing in that VM. This is the first step of the integration: we now support creating and deleting servers with Cyborg devices, plus some basic operations like stopping, starting, and restarting those VMs, but advanced operations like resize, cold migration, and live migration are still not supported, and Nova will reject those if your VM has Cyborg devices. We have plans in the next cycle to extend this together with the Cyborg team; we are still discussing which operations will be done first in the Victoria cycle.

The next one is a similar story. A couple of cycles ago we added support for VMs with minimum bandwidth guarantees, but at the time we didn't support all the server operations. In Ussuri we basically finished up the remaining server move operations to support minimum bandwidth guarantees: we now support live migration, evacuation, and unshelve, and this completes the picture of all the server move operations.

Okay, the next one is for administrators and deployers. There are certain cases when the Nova and placement resource handling mechanisms are faulty, and that can lead to orphaned resource allocations, which can lead to under-utilization of the resources on your computes. We added a new command in the nova-manage CLI, nova-manage placement audit. This command can detect these orphaned allocations in the placement service and also gives you a way to clean them up.
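As a sketch of the image pre-caching API mentioned a moment ago, which was added in compute API microversion 2.81: the endpoint, token, and IDs below are placeholders:

```python
# Sketch: asking Nova to pre-cache a Glance image on the computes in a
# host aggregate (illustrative; endpoint, token, and UUIDs are placeholders).
import requests

NOVA = 'http://controller:8774/v2.1'   # hypothetical compute endpoint
TOKEN = '...'
AGG_ID = 1                             # host aggregate to warm
IMAGE_ID = '...'                       # Glance image UUID

resp = requests.post(
    '%s/os-aggregates/%s/images' % (NOVA, AGG_ID),
    headers={'X-Auth-Token': TOKEN,
             # image pre-caching was added in compute microversion 2.81
             'OpenStack-API-Version': 'compute 2.81'},
    json={'cache': [{'id': IMAGE_ID}]})
resp.raise_for_status()   # 202 when the request is accepted
```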
Okay, the next one is also a big step forward. We added new API policy rules in Nova that now support the Keystone scope type capabilities. We added those in a way that the old policy defaults are still in place and the new scope policy rules are just added with an OR connection, so you can still use your old policies, but in the coming one or two cycles we are planning to deprecate the old policy rules, and at that point you will have to move forward to the scope-aware policies.

Last but not least, we added support for rescuing servers that were booted from volumes. Rescue is a way to fix the root file system of a VM if you get a disk corruption or a file system corruption, but so far we didn't support the rescue operation if you booted your VM from a volume. Now we added that, and along the way we also enhanced how rescue attaches the rescue image to the server. Before Ussuri, Nova reordered the block devices of your server so that the rescue image became the first block device, but now, with the libvirt driver, we support a stable order: when you rescue your server, the rescue image will be added at the end of the block device list, keeping all the device names intact.
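Here's a hedged sketch of driving the new boot-from-volume rescue from Python with novaclient; compute API microversion 2.87 is where this lands, and the credentials, server ID, and image ID below are placeholders:

```python
# Sketch: rescuing a boot-from-volume server (illustrative; credentials
# and UUIDs are placeholders).
from keystoneauth1 import loading, session
from novaclient import client as nova_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                username='admin', password='secret',
                                project_name='admin',
                                user_domain_name='Default',
                                project_domain_name='Default')
sess = session.Session(auth=auth)

# 2.87 is the microversion where stable-device / boot-from-volume rescue lands.
nova = nova_client.Client('2.87', session=sess)

# The rescue disk is attached last, so the server's original device
# names stay intact.
nova.servers.rescue('SERVER_UUID', password='temppass',
                    image='RESCUE_IMAGE_UUID')
```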
That was all I wanted to highlight, but I am happy to answer questions. Thank you.

Awesome, thank you so much. That was great, and I think it's awesome that the Nova project is still making so much progress over these past few cycles. So next up we have Kolla. Kolla is a combination of container images and deployment tooling using Ansible, plus a few other things. So I'll let Mark, the PTL of the project, come in and discuss what's been updated in the past cycle inside Kolla.

Great, thanks, Mohamed. So I won't dwell on these for too long, but just to show you that the project is in a pretty healthy state at the moment. That said, we would always appreciate more contributors, particularly potential core contributors lining up to fill in the gaps when people move on. So we've got quite a few features in the Ussuri cycle. The main one was switching everything to Python 3 and dropping support for Python 2, which, as Mohamed said earlier, is something that every project has done, but for us there were quite a few different places where we were using Python in different ways, and of course we were deploying all of the other services, which also needed to switch, so it was quite a complicated maneuver. Tightly coupled to that is the addition of support for CentOS 8, which really only has support for Python 3, while CentOS 7 only has real support for Python 2, so there wasn't a very friendly migration path between the two. Quite a lot of work went into the CentOS 8 migration, and much of it has been backported to the Train branch as well, which means that you can deploy Train on CentOS 7, migrate to CentOS 8, and then upgrade to Ussuri on CentOS 8.

A nice security feature we've added is TLS encryption of the backend API services. We already had TLS encryption on the public API, and recently added it on the internal API; this adds the leg of the communication from HAProxy through to the backend services and encrypts that as well. So we started with Keystone but have now added quite a few different services, Cinder, Glance, Heat, Horizon, Keystone, and Placement, and I think we'll also have Nova and Barbican by the time we release.

We added support for OVN, as mentioned by Slawek earlier, and the integration of that service with Neutron, so that will be quite a nice option for environments that are looking to scale their network that bit further. We dropped support for our homegrown Ceph deployment, instead preferring to rely on another tool such as ceph-ansible or ceph-deploy, and in the process improved our integration with Ceph, making it easier to integrate with a ceph-ansible-deployed cluster. We added support for deploying the Zun container networking interface, which allows you to use Docker along with containerd to support Zun capsules, which are similar to Kubernetes pods, using the container networking interface (CNI) developed as part of the Kubernetes project. We added an Elasticsearch Curator image and support for deploying it, so that you can manage Elasticsearch data and do things like pruning, retention policies, and that kind of thing. And finally, Mellanox networking now has improved support in our containers.

I just want to mention a project that not everyone might be aware of, called Kayobe. It started as an unofficial project, but during the Train cycle we added it as a deliverable of the Kolla project. It's really complementary to Kolla and Kolla Ansible in the way that it works: Kolla Ansible focuses on deploying the Kolla containers to a set of hosts, and what Kayobe adds is the from-zero provisioning of the cloud, using Bifrost and Ironic to do that. So we get all of the nice features of Ironic in a fairly minimal provisioning environment under Bifrost; we can do automatic discovery and hardware provisioning of those servers and then loop in Kolla Ansible to deploy containers to them. And you can try that project out via the "A Universe from Nothing" project here; hopefully the slides will be made available so you can grab the link.

And finally, I just wanted to mention that we started something called the Kolla Klub. It took us a little while to decide on the name, and you can see at the bottom that we originally proposed it as a SIG, so the etherpad is actually called Kolla SIG, but it works just fine without being a SIG. Really, what we're doing is trying to bridge the gap between operators and contributors and everyone on that spectrum: people who just consume the projects without contributing at all, all the way through to people who contribute every now and again and maybe raise the occasional bug, and then core reviewers and the PTL. We're trying to get all those people to communicate better, to grow and improve the Kolla community, and also to use it as a way of reducing barriers to upstream contribution for operators, because it isn't particularly easy to make that leap into contribution, but if you have an hour with someone who knows how to do it, going through the steps that are required, it's not actually rocket science; it just takes a bit of time. And we also want to increase the knowledge in the community. There's a lot of specialist knowledge that people have built up over time, using particular combinations of services configured in particular ways, and we often get requests in IRC saying, "I'd like to use service X in a particular way, how do I do it?" It would be nice to be able to say, "actually, this person knows how to do that," and point them in the right direction.
At the moment this takes the form of a video call that we have every two weeks, just for an hour. It's still very experimental, really, and we'll keep changing the format until we get it right, but so far it's been a pretty nice way to meet people and share some ideas. That's all from me. Thank you.

Awesome. Thank you very much for that. So next up we have Octavia, the load balancing project of OpenStack, and Michael is here to talk a little bit about what they've accomplished in the Ussuri release.

Thank you, Mohamed. I'm Michael Johnson. I'm a principal software engineer with Red Hat, and I will be the PTL for the Victoria series, but I'd like to thank the Octavia team and Adam Harwell, the current PTL, for all their work in the Ussuri release. There we go. So this is another release where a small team accomplished quite a bit, and I do want to give my appreciation to the team for all their contributions and work. One thing that was unique this release for us is that we mentored some college students; again, I'd like to thank Kendall Nelson for helping facilitate that with the Foundation. It was an interesting experience: we had them for a semester, ramped them up on DevStack and OpenStack and how all that works, and then got them into coding, and that was fairly successful.

So let's jump into some of the highlights. One of the most anticipated features is load balancer availability zones. This allows you to define availability zones inside Octavia that then map to the other services, and allows you to deploy load balancers in the different availability zones you may have defined in your overall OpenStack deployment. So for example, for the Amphora provider driver, an availability zone is defined with a compute availability zone, the management network that will be used for that AZ, and then any valid VIP networks that users can use when deploying load balancers to that AZ. Once that AZ is defined by the operator, users can specify it at load balancer creation time, and Octavia takes it from there and deploys the load balancer in that area. It's really handy for things like cell site deployments; retail locations were one of the use cases we've heard about in the past. So it's a fairly exciting feature.

We've also made enhancements to the client: we've added a wait option. The Octavia API is an asynchronous service, similar to Neutron, so we have some processes that will change the state of an object into an immutable status: PENDING_CREATE, PENDING_UPDATE, etc. Now you have a command line option that allows you to say: wait to complete the command until that status has gone back to ACTIVE or another settled status. This is great for automation scripting; you no longer have to poll the status yourself, the client can do that for you.
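For contrast, here's roughly the polling loop the new wait option spares you from writing yourself; the endpoint, token, and load balancer ID are placeholders, and provisioning_status is the field being watched:

```python
# Sketch: manually polling a load balancer's provisioning_status, which is
# what the client's wait option now does for you (placeholder endpoint/IDs).
import time

import requests

OCTAVIA = 'http://controller:9876/v2.0'   # hypothetical endpoint
TOKEN = '...'
LB_ID = '...'

while True:
    resp = requests.get('%s/lbaas/loadbalancers/%s' % (OCTAVIA, LB_ID),
                        headers={'X-Auth-Token': TOKEN})
    status = resp.json()['loadbalancer']['provisioning_status']
    if not status.startswith('PENDING_'):   # e.g. ACTIVE or ERROR
        break
    time.sleep(5)

print('load balancer settled in status:', status)
```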
Moving on. So speaking of those students, this was a partnership with North Dakota State University: we had four students for the semester, and they worked on adding some enhancements for TLS in Octavia. So now, in the Ussuri release, you can specify the allowable cipher list for your listeners and for the back-end pool connections to your back-end members. This allows you to tighten security so that only certain TLS ciphers are valid for that connection, and if somebody attempts to use a lower-security cipher, that request will be denied. This again is good for security compliance and any requirements you may have about being able to enforce that at the load balancer level. The students also worked on restricting the TLS protocol lists. That is mostly complete and will be an early merge for Victoria. So, great work from those students; it was very impressive to see them come up to speed on OpenStack and be able to contribute a feature inside their semester.

And then finally, one of the big work items for the team is pulling in a feature of the OpenStack taskflow library called jobboard. As I mentioned earlier, the Octavia API is an asynchronous API, and we go off and do many tasks to provision a load balancer on behalf of the user; some of these flows can be 50 to 100 steps to complete that provisioning operation. With this new technology preview, as we walk through those individual tasks in a flow, we checkpoint at each task transition and save the state. So if the controller that is actively working on that provisioning for some reason goes down, loses power, etc., we can redeploy that in-progress provisioning to an alternate controller and pick up right where it left off. This is kind of sub-provisioning controller resiliency: it goes a step beyond just having the HA controller environment that we've had for a few releases now, where you can of course deploy multiple controllers; this goes down to the individual provisioning layer. So that's a technology preview in Ussuri. You can enable it through config settings, and we're looking to make this the default for the Amphora provider in Victoria.
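To make the jobboard idea concrete, here's a rough sketch using taskflow's jobboard API with the ZooKeeper driver. It assumes taskflow with its ZooKeeper dependencies is installed and a ZooKeeper is reachable at the address shown; the board name, path, and job details are illustrative, and this isn't Octavia's actual wiring:

```python
# Rough sketch of the taskflow "jobboard" pattern: a producer posts a job,
# any conductor can claim it, and if the claimant dies the claim expires and
# another conductor resumes from the last saved state. Illustrative only.
from taskflow.jobs import backends as job_backends

conf = {'board': 'zookeeper',
        'hosts': ['127.0.0.1:2181'],       # assumed ZooKeeper address
        'path': '/octavia-demo/jobs'}      # illustrative board path

# The context manager fetches the board, connects, and closes it for us.
with job_backends.backend('demo-board', conf) as board:
    # Post work for a conductor process to claim; details is free-form
    # metadata describing the job.
    board.post('provision-loadbalancer', details={'lb_id': 'LB_UUID'})
```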
So I think that pretty much covers it all. Thank you so much, Michael, I appreciate it. And it's awesome to work with all those students and have something material come out of it. So the last project to present today is Manila, and I'd like to invite them to come up and talk a bit about what's been accomplished in the past cycle.

Thanks, Mohamed. Hi, everyone. I'm Goutham Pacha Ravi, and I had the privilege to serve as the project team lead for the Manila community in the last cycle. So Manila is the shared file system service for OpenStack: it's a service you can ask for NAS shares and perform a bunch of operations on them, much like the block storage or object storage services in OpenStack. The Ussuri cycle was the tenth official release for Manila, and it was a fairly busy and productive one. We're especially proud of the work we did with some of our interns. This cycle saw contributions from two Outreachy interns, Soledad Kuczala and Maari Tamm, both of whom contributed significant changes supporting the OpenStack client interface for Manila; they also contributed to manila-ui and documentation through their time with us. We also continued to work this cycle with Robert Vasek, a Google Summer of Code intern who began the Manila container storage interface (CSI) driver effort and is the lead contributor and maintainer for that project in the cloud-provider-openstack repository.

Besides that, there's a bunch of stuff we committed in this cycle that you will find in our release notes, but I wanted to call out some of our highlights. The API versions have increased: with Ussuri you get 2.55 as the latest API version. Within that, at 2.53 we introduced quota controls for share replicas. Share replication still remains an experimental feature through this release; we've been working on it for a few cycles now. Last cycle, during Train, we added support for share drivers that handle share servers to do share replication as well, and this cycle we rounded that up by providing quota controls for the number of share replicas, or the capacity of share replicas, that you have across your backends.

We also have CRUD APIs for share groups, share group types, share group specifications, and share group snapshots, all of them graduating from their experimental status. This was yet another experimental feature that we introduced a few cycles ago, but we've polished it up with quite a few improvements, and they've graduated as of the last API version that we shipped with this release.

And we made some improvements to the scheduler. An important one, although minor, that trips some things up is that the capabilities filter will now perform case-insensitive comparisons. This can actually affect your cloud; we did craft a release note and we've added some documentation to that effect, but we thought this was the best way forward based on a couple of bug reports and some user feedback. The provisioned capacity estimations are also smarter in the scheduler: they're faster, and they're not being done for all of the backends as they were in previous cycles.

We now have the ability to clone snapshots across storage pools and availability zones. This is a brand new feature for at least a couple of backends that we have, and you could be turning this option on in your clouds if you use those backends; in case you don't have support for it, we have a way to gracefully fall back to the previous behavior. One other thing this brings us is asynchronous creation of snapshot clones: that workflow has been enhanced, and it paves the way for a few more backends that were not implementing cloning from snapshots because Manila expected that backends support only instantaneous clones. We no longer have that expectation, because we added processes to track snapshot creation that could take some time.

We also enhanced the shrink and extend APIs. These were sometimes silently failing, and sometimes they would hard-fail with an error status set on the share even when the backend shared file system was absolutely fine. So we added some more intelligence to the share/backend interaction, which allows these shares to go back to being available for further management-path operations, while also utilizing the asynchronous user messages feature to let the user know what really happened. So, for instance, if somebody was trying to shrink their shared file system and the shrink operation could potentially cause data loss, we'd let them know via a user message rather than hard-failing, setting the status, and blocking them from performing anything else on that share. And likewise, we also improved the user messages API to now allow filtering based on time intervals and such.
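As a sketch of that new time-interval filtering on the user messages API: the endpoint, token, parameter names, and microversion usage below are assumptions based on the Ussuri-era API, so adjust for your cloud:

```python
# Sketch: listing Manila user messages within a time window (illustrative;
# endpoint, token, and the created_since/created_before parameter names are
# assumptions to verify against your cloud's API version).
import requests

MANILA = 'http://controller:8786/v2'   # hypothetical endpoint
TOKEN = '...'

resp = requests.get(
    '%s/messages' % MANILA,
    headers={'X-Auth-Token': TOKEN,
             'X-OpenStack-Manila-API-Version': '2.55'},
    params={'created_since': '2020-05-01T00:00:00',
            'created_before': '2020-05-13T00:00:00'})
for msg in resp.json().get('messages', []):
    print(msg['id'], msg.get('user_message'))
```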
We had a number of driver improvements. A couple of drivers added support for snapshot clones across storage pools, including NetApp and ZFSonLinux. The Dell EMC Unity driver now supports managing and unmanaging shares, share servers, and snapshots. And we have preliminary support for the OpenStack client, thanks to the efforts of our interns and their mentor: you can now create, update, and delete shares, access rules, and share types. More functionality is planned through the next cycle. And the manila-ui is playing a catch-up game at this point: it now supports adding IPv6 access control rules to shares and also supports share group capabilities. Now this, we understand, is still far behind the Manila API, and there's much work to be done, but we did get some help in the last cycle, including making things easier for manila-ui contributors to come in from other communities as well. So that's pretty much it as far as highlights for this cycle. There are a lot of things planned for the Victoria cycle, and we're getting a head start on the community goals, so please join us at the PTG to learn more or to influence our direction. Thank you.

Awesome. Well, thank you so much for that. So with that, we've had a bunch of teams present the work they did. And something that's really cool is that over the past hour, while we've been going over this, the release team was hard at work finalizing everything, and so the OpenStack Ussuri release is officially out. If you navigate to openstack.org/ussuri, you'll be able to see all the information about the new release; I just posted that in chat. So OpenStack Ussuri is officially out. I'm really excited about it, and I'm hoping that people go in and check it out, and thank all of the huge list of contributors that we have on every release page, which I still always think is so cool and exciting.

Also, on that note, Kendall Nelson from the OpenStack Foundation took it upon themselves to say: hey, usually we have some sort of celebration together to celebrate the new release. Unfortunately, that usually happens at the PTG, and we're not going to be at an in-person PTG, so they've set up an event where we can all get together online and celebrate the new Ussuri release. I just posted a link to the mailing list post for that, and we'll probably attach it in the email with the recording and the slides. That's happening this Friday at 20:00 UTC. So if you want to come in and chat with all of our other contributors, see what everyone has been up to and talked about this past cycle, or just catch up about anything, I think it'll be really fun for everybody.

I don't really see any questions that were posted throughout this; it seems like the only question had to do with the security issue that Cinder talked about, and what's amazing is that the discussion has already started and it looks like some progress has already been made on that security question. So that's just amazing. Other than that, I think that's all. Thank you so much, everyone. Thanks to all the community for all the hard work that they do: regardless of whether you're contributing code, or you're a TC member, a UC member, or a board member, or you're triaging bugs, every part of what our community does is super helpful. And also a huge shout-out to the hardworking folks at the Foundation. It's not easy these days, especially with all these event changes, and they're working their hardest to enable all of us to still continue to collaborate and build the software that we all like to build. So thanks, everybody. Congratulations, and enjoy the rest of your day, evening, or night. Thank you.