Hello, everyone, can you hear me? Yes? Perfect. Well, first off, welcome. My name is Erin Disney, and I lead events for the Open Infrastructure Foundation. I wanted to thank everyone for joining today to help us celebrate Wallaby, the 23rd on-time OpenStack release. Thanks for joining. We have updates from seven different projects today, so to keep us on schedule, please go ahead and put your questions into the chat; we'll try to answer them throughout the meeting as we can, and then we'll open it up to Q&A at the end and try to get to as many as possible. Before we get started, I have kind of a fun update on the OpenInfra events front that I'd like to share with everyone. After a full year of virtual events, we reached out to the community and the board to gather everyone's thoughts on the pros and cons of virtual. Instead of just gathering once a year during what can be a very overwhelming week of content, we've decided to, what I keep calling, deconstruct the summit. So this meeting is actually the kickoff to a weekly series that we're calling OpenInfra Live. It'll be every Thursday at 1400 UTC. We'll be publishing the schedule in advance so you can make sure to join the weeks where you're super passionate about the topic being presented, and we'll also be streaming to YouTube, like we are today, so that if you miss the meeting you can still go check out the content there. Over the coming weeks we're also going to be rolling out community content submissions, so we definitely want to hear about topics you're interested in, and whether you're willing to present. Be on the lookout for that; we'd love to hear from you. We're just getting started, but we're also hoping to roll this out in additional time zones and languages in the future.
So while we definitely miss seeing all of your faces at regular summits, and we can't fully replace all of those awesome aspects of in-person gatherings, we're really hoping that meeting on a regular cadence, including people who maybe couldn't travel to the in-person events, and hearing from each other all around the world, is a great way to continue the community aspect. I'm super excited about the series and seeing where it takes us this year, and I'm hoping you'll join and share along the way, maybe even present. So with that, I'd like to thank Ghanshyam and Kendall for volunteering to MC today, and I'll pass it over to them to get started. Thanks, everyone. Good morning, good evening, good afternoon. Welcome to our Wallaby release community meeting. This is the OpenStack Wallaby release, the 23rd on-time OpenStack release. It's been a six-month development cycle that started last October, and it was released yesterday, on April 14th. In this meeting we'll talk about the work we finished in the Wallaby cycle, the features that have been implemented on the project side, and the contribution stats for the projects. We have developers from seven projects here who will talk about their project-specific features. My name is Ghanshyam; I'm the OpenStack Technical Committee chair, and I have Kendall with me. Yeah, I'm Kendall Nelson. I work for the Open Infrastructure Foundation, but I have also been on the TC for a little over seven months now, and I previously served as the TC vice chair. I'm involved in a number of places, so I'll help introduce people today. But moving on to the technical committee update. Yeah, so these are our awesome numbers.
And one of the parts I really like is that OpenStack continues to be among the top three most active open source projects, along with the Linux kernel and Chromium, in terms of level of activity. You can see 17,000-plus changes in this release; that's not a small number. And in terms of diversity, we had 800-plus developers across 140 organizations and 45 countries. This is just awesome work by all the contributors and community members. Yeah, next slide, please. So let's talk about the Wallaby community-wide goals first. Community-wide goals are the common effort we put in across all the OpenStack projects to implement user-facing or developer-facing features, and we have a champion to drive each community-wide goal. In the Wallaby cycle we had two. The first is migrating our RBAC policy files from JSON format to YAML format. As you know, we are doing a lot of work on secure RBAC in OpenStack, and this is one of the requirements for that: keeping the policy file in YAML format makes it easier for us to maintain the policies and the new defaults. I was the champion for this goal, and it is almost done; one project, Heat, is left, which has a configuration issue I'm still looking into. The next was migrating from oslo.rootwrap to oslo.privsep. rootwrap was a library for running your commands under the root role, and it has performance and security issues, so as a community we decided to migrate to the newer privsep library, which is faster and more secure. This goal was championed by Rodolfo, and that work is also ongoing. And a few highlights of what the technical committee did in Wallaby: one thing we did is revisit the TC weekly meeting. We used to have a monthly meeting and three office hours per week, but to make things more productive and move faster, we switched to a weekly meeting, which has been working very well.
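To make the JSON-to-YAML policy goal above a bit more concrete: oslo.policy ships a real tool for this conversion, `oslopolicy-convert-json-to-yaml`. The snippet below is only a self-contained sketch of the basic idea, with invented rule names; the real tool also preserves defaults as comments and handles much more.

```python
import json

# Hypothetical legacy policy file content (JSON, the pre-Wallaby style).
legacy_json = '{"get_volume": "rule:admin_or_owner", "delete_volume": "rule:admin_only"}'

def policy_json_to_yaml(text):
    """Very rough sketch of what oslopolicy-convert-json-to-yaml does:
    turn each JSON policy entry into one quoted YAML mapping line."""
    rules = json.loads(text)
    return "".join('"%s": "%s"\n' % (name, rule) for name, rule in rules.items())

print(policy_json_to_yaml(legacy_json))
```

In a real deployment you would run the oslo.policy tool against each service's policy file rather than hand-rolling anything like this; the sketch is just to show why YAML is nicer to maintain, since it supports comments and incremental, readable diffs.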
And in terms of project updates, a major update is that RPM packaging is now converted to a SIG, a Special Interest Group, which can own the repositories and software but doesn't make releases for new deliverables. RPM packaging is still active, and whatever work they were doing is still going on in the same way. Next is moving the Placement deliverables under Nova governance. Placement stays as a separate deliverable, but under Nova governance; thanks to the Nova team and Nova PTL for taking care of it. We also did a few project retirements, due to low activity or no maintainers: we retired Searchlight, Karbor, neutron-fwaas, and tempest-horizon. tempest-horizon was actually merged into Tempest, and we retired it as a separate deliverable. For these, if any of your companies are still using them and want to maintain them, feel free to re-apply under OpenStack governance, and the TC will help guide you on the next steps. Next we have TC tags. We introduced a new tag, 'standalone'. It means that if your OpenStack service can run standalone, without depending on any other OpenStack services, you can apply this tag to your project, and users will know that they can run that service on its own in their cloud and take advantage of that. We removed the zero-downtime upgrade tag. That one was difficult in terms of testing and design; we weren't sure how to implement or test it upstream, and no project had the tag, so we decided to remove it, and if we have a clearer picture in the future we can bring it back. Next, we made a few clarifications on the supports-API-interoperability tag, and we now have, I think, four or five projects adopting it, which is also very important for our Interop working group. Yeah, next slide, please.
And we have defined the 2021 upstream investment opportunities; I think we're going with three opportunities this year. If you're on the corporate side, or on the university side, these are very nice ways to get involved in our community, and we have one or two contact persons or mentors for each of the opportunities, so they can guide you and onboard you, whether you'd like to help in terms of code or in any other way. And last, I'd like to highlight the OpenStack secure RBAC work. As you saw in the headlines for the OpenStack Wallaby release, we have done a lot of work on secure RBAC, and it's still ongoing. It's not yet finished, and we'll be talking a lot about it at the PTG, in every project's sessions: not only the implementation, but how to use it. We're also looking forward to operators' feedback on how we can improve it, and whether there's something we need to take care of on the configuration or usage side. So feel free to join us at the PTG for these secure RBAC discussions. Sorry, I keep forgetting my mute; I'm great at technology. So, moving into project updates, we'll dive a little deeper into some of the ongoing work that happened during Wallaby. First up we have Cinder, and Brian Rosmaita, the PTL of the project, will be presenting first. Hi everybody. I've got five minutes, so I'm going to go kind of fast; feel free to come back and ask questions, or look at the slides, which will be available somewhere. So yeah, Brian Rosmaita, I'm the Cinder PTL. First, what does Cinder do? It's the block storage service: it provides block storage to VMs, or whatever needs it in a cloud. We provide services and libraries to give you self-service access to block storage, and it's software-defined block storage: we have an abstraction layer and we operate on top of traditional back-end devices. There's a little picture there, so it's a pretty typical setup.
What's missing from the picture is that we also have a Cinder backup service that communicates over the message bus and does things as well. Next slide, please. All right, as far as what we produce: we have a bunch of repositories. We've got cinder, where the main code is; os-brick, which does the attachments to devices; the cinderclient, and then the cinderclient brick extension; the cinder-tempest-plugin, which provides specific scenario tests for things we want to test in CI for Cinder; cinderlib, which is the basis for Ember CSI, which allows you to use the Cinder back-end drivers with Kubernetes for persistent volumes; and a new piece of software, the rbd-iscsi-client, which supports a new driver we added. Next slide. So, who does it? Here's a happy picture of us at the last in-person project team gathering. Cinder's been around since the Folsom release of OpenStack; it was spun off out of Nova. As far as contributors go, in Wallaby we had 102 contributors from about 32 companies, and you can see the last two releases have been kind of stable, around 70 contributors from about 25 companies or so. Next slide. User survey numbers: this is from the latest one that's available. The number of people who responded was 158, and we're deployed in 86% of production deployments. I just put the other core projects there so you can see how that stacks up. It's interesting to see more Glance and Nova deployments, but you can run Glance standalone, I guess. Next one, please. And as far as testing and POC deployments, we're all sort of the same there. So Cinder's pretty widely used in OpenStack, I guess, is the point. All right, as far as the back-end drivers: you'll notice our logo there, you're staring at the horse's back end, and that's because storage is what we do, and you've got to put it someplace, and that's called the back end.
Right now we've got 84 volume drivers, which means we support an awful lot of different hardware that you can use. New ones introduced this cycle: we have the Ceph iSCSI driver, Dell EMC added a PowerVault ME driver, and we've got KIOXIA KumoScale, Open-E JovianDSS, and TOYOU ACS5000. We've got six backup drivers for when you want to use the backup service to back up volumes, and the new one there is the S3 driver; it's S3-compatible, so you can use it with Swift or with actual S3. And there are two Fibre Channel zone manager drivers in the source code. If you want to know more about any of this, use DuckDuckGo or a search engine of your choice and you'll find all about these drivers. So, what's new in Wallaby? Two new microversions. Microversion 3.63 adds the volume type ID to the volume detail response. That was a request from Nokia, because in some contexts you get the volume type name but you want the ID to make a request or something, and this way you've got them both, which saves you a round trip to the API. Microversion 3.64 adds the encryption key ID to the volume detail response, which is nice; it was an operator request, but I'd like to remind people: just because you can see the encryption key ID, don't mess with it. We added some new volume and backup drivers, as I just talked about, and current drivers added support for a lot of features, so check out the support matrix to see.
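For a concrete picture of how a client opts in to the microversions Brian describes: the Block Storage v3 API selects a microversion via the `OpenStack-API-Version` request header. The sketch below just builds such headers; the token value is obviously made up, and this is an illustration, not a full client.

```python
def volume_api_headers(token, microversion="3.64"):
    """Build headers for a Block Storage (Cinder) v3 API request pinned
    to a specific microversion. 3.64 is the Wallaby microversion that
    adds encryption_key_id to the volume detail response; 3.63 adds the
    volume type ID."""
    return {
        "X-Auth-Token": token,
        "OpenStack-API-Version": "volume %s" % microversion,
        "Accept": "application/json",
    }

print(volume_api_headers("dummy-token")["OpenStack-API-Version"])  # → volume 3.64
```

Without the header, the API falls back to its base 3.0 behavior, so older clients keep working unchanged; that is the whole point of the microversion mechanism.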
A lot of drivers added revert-to-snapshot, and several drivers added back-end QoS, which was nice. Then we had a bunch of bug fixes right near the end around Cinder quotas, which I know have been a big pain point for a lot of operators, because the quotas have race conditions and get out of sync; so we fixed some of that. And there's a new quota category that was added to the cinder-manage tool that will allow you to check the state of your quotas and re-sync them if you need to, so check that out. All right, now, an issue with Wallaby: I just wanted to mention some anomalies with encrypted volumes. This came in right near the end, so we'll be working on it early in Xena, but it seems to happen with Ceph volumes when you've got an encrypted volume type and the volume is full; say it's a two-gig volume and it's jam-packed right to the last byte of data. What can happen is that the data can be truncated if you do a snapshot. So please look at the release notes, just be aware of that, and keep an eye on our bug fixes. It may also apply to NFS-based back ends, so you might want to experiment, but there's a write-up in the Wallaby release notes, so please check it out for the details. Next one. Yeah, so, Cinder throttling; something's come up: how do you know if you're using it? Well, check your Cinder configuration file on the volume service nodes: if volume_copy_bps_limit is not zero, then you are using volume throttling. Cinder is still using cgroups (control groups) version 1, and some Linux distributions have switched to cgroups v2 by default, so check what your distribution is doing. Again, there's a detailed write-up in the Cinder Wallaby release notes, with an example of one of the distributions that's changed and what you need to do; just something to be aware of. Next. All right, what have we got planned for Xena?
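The throttling check described above is easy to script. This sketch parses a cinder.conf fragment with the stdlib configparser; the sample value is hypothetical, and on a real volume node you would read /etc/cinder/cinder.conf instead of an inline string.

```python
import configparser

# Hypothetical cinder.conf fragment; 104857600 bytes/s is 100 MiB/s.
sample_conf = """
[DEFAULT]
volume_copy_bps_limit = 104857600
"""

def throttling_enabled(conf_text):
    """Cinder uses cgroup-based volume-copy throttling whenever
    volume_copy_bps_limit is set to a non-zero value (default 0,
    meaning disabled), so that is the one option to check."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    limit = cfg.getint("DEFAULT", "volume_copy_bps_limit", fallback=0)
    return limit != 0

print(throttling_enabled(sample_conf))  # → True
```

If this returns True on a node, the cgroups v1 vs. v2 question in the release-note write-up applies to you.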
I was a little afraid of copyright infringement with that image, but I have to say it: this time we mean it. We're going to remove version 2 of the Block Storage API. It was deprecated in Pike. We've been threatening to remove it for a while, but this time we are really, really serious about it. So what's that going to do to you? Hopefully nothing, because v3 has been around so long that probably everybody's using it, and 3.0 is just like 2.0, so you shouldn't miss it at all. We'll work on the consistent and secure policies that Ghanshyam mentioned. We've got various internal improvements scheduled, and we're going to address stability issues in the upstream and third-party CI systems, just for quality control. All right, to contact the Cinder team: go to tiny.cc/cinder-info, which will take you to our contributor page that outlines all our meetings and how we do stuff. And I'd like to keep plugging this piece by Heiko Rupp about 10 ways to contribute to an open-source project without writing code. If you can write code and want to, that's great: come to the meetings and the PTG and we can put you to work. But there are lots of other ways to get involved, even if you don't want to write code. And I think that's it, so thanks for listening, and feel free to ask questions. Thanks, Brian. I really like the Cinder team picture, and a lot of beard-growing there in the no-travel time. But yeah, thanks, Brian, and the Cinder team, for such awesome work. Next we have Neutron and Slawek; go ahead. Hi, my name is Slawek Kaplonski. I've been the Neutron PTL for two cycles, and I work for Red Hat. Today I will give you a short update about what we did in the Wallaby cycle in Neutron. So first of all, what is Neutron? You probably know, but just to remind everyone: Neutron is the networking service for OpenStack. It provides networking-as-a-service for other OpenStack services, like Nova, for example. Of course, you can find more in our documentation. So that's it, very quickly.
Okay. Neutron was founded in the Folsom cycle of OpenStack, so like Cinder it's been here a pretty long time. Its adoption is also pretty high: I checked the 2019 user survey, and 90% of clouds were using Neutron in production; in the 2020 user survey the number is even a bit higher. As for the Wallaby cycle and development numbers, we had almost 600 commits made by almost 90 contributors from 28 companies, so that's quite a lot, and we have a pretty big number of contributors to the project. We also fixed more than 120 bugs and completed a few blueprints. Next slide, please. Yeah, so, speaking about new features in Neutron in Wallaby: first of all, we added a new subnet type whose IPs can be advertised with BGP over the provider network. The provider network can use segments, and basically this achieves something we can call "BGP to the rack", where layer-2 connectivity can be confined to the rack only and external routing is done by the switches using BGP. More info about this is in the spec linked on the slide. Next, we added the possibility to update a QoS policy which contains minimum-bandwidth rules while it is attached to ports. Previously, before Wallaby, this wasn't possible because the allocation in Placement wasn't updated; now you can update a QoS policy on an existing port, and the Placement allocation will be updated accordingly. We also added a new VNIC type called "vdpa", which is used for vhost-vdpa offload. This VNIC type works for now with the ML2/OVS and ML2/OVN backends; more info about that is in the spec proposed to Nova, which is also linked on the slide. We also added the address groups resource, which can be used to group IP addresses that can later be used in security group rules. So this is something like the ipset concept in iptables, for example.
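To make the ipset analogy concrete, here is a small self-contained sketch (the group name and CIDRs are invented) of the matching semantics an address group gives you: name a set of addresses or CIDRs once, then have security group rules reference the group instead of repeating the addresses in every rule.

```python
import ipaddress

# Hypothetical address group, conceptually like an ipset: a named
# collection of networks that security group rules can refer to.
web_servers = [ipaddress.ip_network(n) for n in ("192.0.2.0/28", "198.51.100.7/32")]

def in_address_group(ip, group):
    """Would a security group rule referencing this address group
    match traffic from/to the given IP?"""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in group)

print(in_address_group("192.0.2.5", web_servers))    # → True
print(in_address_group("203.0.113.9", web_servers))  # → False
```

The operational win is the same as with ipset: updating the group updates every rule that references it, instead of editing dozens of individual rules.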
Another new feature we implemented in Neutron is the port device profile: a device_profile attribute was added to the port resource. This was requested by Nova and Cyborg, and it is used by Nova to request a SmartNIC profile from Cyborg. Related to this Nova, Cyborg, and Neutron integration, we also added a new VNIC type called "accelerator-direct", which means that such a device will be provided by Cyborg. And regarding role-based access policies, as Ghanshyam mentioned before, we also did a pretty significant amount of work on this in Neutron. We now have new rules for all of our API policies, but we are marking them as experimental: they are disabled by default, so the old rules are still used, but it is possible to enable the new rules with system-scope and project-scope personas. We are still working on testing this, and we are finding some bugs with the new policies and rules, so as I said, it's experimental support in Wallaby and we will be improving it during the next cycle. In the Wallaby cycle we also spent a lot of time closing more of the gaps, mostly between the ML2/OVS backend and the ML2/OVN backend. In this cycle we added support for VLAN transparent networks in the OVN backend, we added support for the security group logging API, and we aligned the agents API for ML2/OVN with how it works with other backends where real agents are used. So now it is possible, for example, to delete the OVN controller agent or the OVN metadata agent from the Neutron database using the Neutron API, which wasn't possible before the Wallaby cycle. Of course, there are more new features and new things done in Neutron in Wallaby, but there's no time to talk about all of them here; you can find more info in the Neutron release notes. Next slide, please.
Speaking about the next cycle, which is Xena: we have a couple of things we will work on because we didn't make it in Wallaby. One such new thing is, for example, distributed DHCP for ML2/OVS. This is a proposal to implement the DHCP service in the OVS agent directly, so OVS, through OpenFlow rules, will be able to provide DHCP for the instances that live on a specific compute node. It will be fully distributed; there will be no need to use the DHCP agent. A limitation of this solution is that, for example, it will not provide a DNS resolver the way dnsmasq can, but there are use cases where this is not needed, and then this new distributed DHCP can be used instead of DHCP agents. We will also work on support for ECMP routing in the L3 agent. And one of the important things is the migration from iptables legacy to nftables. As nftables is becoming more and more popular and is used by more distributions, and some distributions have already deprecated iptables legacy while others will do so soon, we need to move forward and migrate to nftables in Neutron. Next slide, please. How can you help if you want to contribute to Neutron? Of course, there is the list of open blueprints; some of them need new owners because they haven't been updated in a long time, so maybe you'll want to work on some of them. There is a link to the open Neutron bugs, and we also have some low-hanging-fruit bugs which are a good place to start if you want to begin contributing to Neutron. And we have a team meeting every Tuesday at 1400 UTC, so please join that meeting and speak up if you have any ideas about Neutron. And that's all from my side. Thank you very much. Thank you, Slawek. So next we have Julia Kreger, the current PTL of Ironic, to give us their updates. Greetings. Next slide, I guess. So, the first question always tends to be: what does Ironic do? Ironic is an API to manage bare metal; hence the irony of the name of the project, Ironic.
In a sense, we provide a set of tools to manage the life cycle of physical machines and facilitate deployment of those machines to end users. Next slide, please. So, a little background: the project was formed during the Havana cycle and was first released during the Icehouse cycle of OpenStack. The project actually joined OpenStack, or became part of OpenStack, I should say, during the Kilo cycle. During this time, 903 individuals have contributed to the project, 85 of whom contributed during the Wallaby development cycle. Next slide, please. We always have a number of both small and large features; these are just a couple of highlights. In the Wallaby release, we added out-of-band RAID configuration support for Redfish. In essence, we can use the BMC, with the Redfish protocol, to say we want to have this RAID volume and facilitate that on deploy, which has been an asked-for feature for a long time and is now possible in Ironic. We also added support for project- and system-scoped role-based access control, which ultimately allows administrators or users of systems to further delineate and delegate access down to what we call an owner or a lessee. Previously, this was only possible for those using custom policy files; it's now enabled in Wallaby as a standard feature, so that's a nice enhancement. We've also had a number of deploy-time enhancements, including support for Anaconda, file injection, UEFI partition image deployment, and one-time interface overrides for deployments. In essence, you have the ability to say, "I would like to use Anaconda for this deployment, not the direct deploy interface." Next slide, please. When we talk about the themes of what we've been working on, mainly we've been focused on manageability, security, and user experience, with a more minor focus on resiliency and scalability; they're not the most present things in our minds, I guess is the way I'd put it.
If you go to the next slide, please. One of the things that tends to flip-flop is that we address scalability, resiliency, and manageability issues in an alternating fashion. So we expect to work on scalability, resiliency, manageability, and user experience during the upcoming cycle. We are still discussing security; we aren't aware of any required changes right now, but we may do some additional work. Next slide, please. In regards to the Xena release, which will be the next release of Ironic, we hope to have support for recalling the error history of a node. Presently, we just report the last error, which is not ideal. We've also noticed some database performance issues that people have encountered with clusters of nodes exceeding 20,000, with 20 or 30 conductors, which is a large deployment that people do operate. We want to go ahead and address some of those scalability issues, especially because the RBAC work has ultimately added a little bit more in the way of database calls, which is not ideal, but it was necessary. We also expect to possibly have some enhancements to standalone integrations; at present, we're discussing supporting ISC Kea for DHCP address allocation. This is more for our standalone users, not necessarily the Neutron-integrated users. Next slide. We're also discussing some driver and operator usability enhancements: we'd like either to find the happiest path, or at least to make things a little easier, so operators don't need to know everything up front. There are some discussions regarding Redfish usability enhancements, specifically being able to detect whether virtual media is an option, and being able to provide some insight into the settings registry, if there is one on the BMC. There's also discussion of a one-time deployment API. This is something that's been in the works for a while now, and we've laid some groundwork.
However, we just haven't done the work of exposing it in an API. Hopefully we'll make it this cycle, but, as with all these things, there are no guarantees. If you're interested in these things, please contact us and help. Next slide, please. So we do have some questions for people out there. One thing we would like to know is how people feel about virtual media; specifically, is it important or not? We're finding it's a very important thing, at least in certain industry verticals, but that is something that's going to vary by the industry vertical and its security requirements. We'd also really love to know the average cluster size people are operating with Ironic, and the number of conductors as well. Without a lot of this knowledge, it's hard to understand the scaling problems people might encounter in advance of them actually finding and reporting them. We'd also like to see if anyone's interested in collaborating on third-party CI, to help support even more hardware. Next slide. And that's it for me, thank you. Thank you, Julia, and Ironic, for such nice work. Ironic has always been a very important part of OpenStack, especially from the usage perspective. So next we have Nova, and we have Gibi with us. Gibi? Yeah, hi. I'm Balazs Gibizer, "gibi", the PTL of Nova, and I will give you a short update on what the team delivered in Wallaby. Next slide, please. But first, what Nova is: Nova is the main compute component of OpenStack. It provides the ability to create virtual machines, or bare-metal machines via Ironic, and to manage the lifecycle of those servers. In Wallaby we had 92 individual contributors, and all together we merged almost 500 commits and implemented 14 blueprints. Next, please. So I will highlight some of those features. In Wallaby we continued the integration with Cyborg. Cyborg is the component in OpenStack that gives support for accelerators, like FPGAs, GPUs, and other physical devices.
Nova has support for creating servers with those accelerators, but not all the lifecycle operations have been implemented yet. In Wallaby, we added support for shelving and unshelving Nova servers with Cyborg accelerators. Also in Wallaby, we extended the support for Neutron QoS policies. In the past, we gradually added support in Nova for Neutron ports with minimum-bandwidth QoS policies, and as the last piece, in Wallaby, we now support attaching such ports to running servers. This basically concludes the work necessary to support all the lifecycle operations with such Neutron ports. We also extended the Nova scheduler to support routed networks, the segmented networks from Neutron. Now, if you have such networks and you are creating servers in them, the Nova scheduler will make sure your VM is placed on a compute host that has a connection to at least one of the segments in that multi-segment network. This also allows the administrator to create separate IP address pools, and the scheduling will make sure there are still free IP addresses in the pool that is connected to the segment that is connected to the compute node you are landing on. We also extended the Hyper-V virt driver: with that hypervisor, you can now create VMs and attach Cinder RBD volumes to those VMs. Next slide, please. We also extended the libvirt driver with a couple of features. The first is that you are now able to safely change the machine type of your server. This is interesting in the sense that the newer machine types supported by libvirt allow a lot more modern features to be implemented, but so far we were not able to safely reconfigure a compute host to use a new machine type if there were already VMs running on the compute with the old machine type. Now what we do is record the machine type used by each individual server.
And if you reconfigure the compute host, the existing servers will still use the old machine type, so the API towards the virtual machine will not change, but new servers can use the new machine type. So you can gradually move your deployment to use modern hypervisor features. The libvirt driver now also supports UEFI secure boot; I think this is a big step forward in security. Now you can have all the advantages that the UEFI secure boot infrastructure provides to physical machines for your virtual machines as well. And last but not least, Slawek mentioned that Neutron added support for vDPA-based networking backends, and Nova was adapted to this: the libvirt driver now supports vDPA-backed ports. vDPA basically gives you a way to accelerate your networking without depending on the physical implementation of the acceleration; your virtual machine will only see a generic virtio device. This is important because, in the future, it will allow you to have both the benefit of physical acceleration of your networking and the capability to seamlessly live-migrate these virtual machines. That is not there yet. We only added the basic support, so you can create servers and do some lifecycle operations with them, but live migration is not supported yet, because the implementation first has to be made available in KVM and QEMU, and then Nova can start using it. That is coming in the future. Besides these features, we had some important cleanups in the cycle. We managed to squash all the DB schema upgrade scripts up until Train. This means that for new deployments, deployment will be faster, because the database creation scripts don't have to step through all the old schema migrations first. If you are upgrading from Stein to something newer than Train, you have to first stop at Train to have the migrations done; that is the only drawback.
All the other stepwise upgrades work as expected. We also managed to introduce a new RPC version, 6.0, between the controller and the compute services. This gives us the possibility to clean up a lot of the old RPC cruft we gathered over the years, so it's important for us, but it should be seamlessly supported through the upgrade. So if you upgrade and you still have old computes in the system, we still support the old 5.13 RPC version in Wallaby. That's all I wanted to say. Thank you for listening, and if you have questions, I'll try to answer them in the chat. Thank you. Thank you, gibi. So before we get rolling into Cyborg, I just wanted to mention again that we will have a little bit of time for Q&A. If you want to put your questions in the chat, we should be able to get to them at the end, hopefully. So yeah, next up we have Cyborg, and presenting will be Xinran Wang. Take it away. Hello, I'm Xinran Wang, the PTL of the Cyborg project, and today I will give a quick introduction of the Cyborg project, an overview, and what the team has done in the Wallaby release. So, a quick introduction of the Cyborg project: Cyborg is a general management framework for accelerators, and Cyborg now supports managing accelerators like FPGA, GPU, and SSD cards, as well as AI chips. Next slide, please. As we know, Nova already supports PCI passthrough, but there are some limitations, and it can be difficult to use. For example, if a PCI card has multiple functions or multiple profiles and a user wants to use only one of them, it's hard for users to distinguish which features the device has. That's why Cyborg came about: Cyborg provides a clear place for hardware drivers, and all the management of accelerators is decoupled from Nova. Cyborg also provides device lifecycle management: we can disable, enable, or program a device. Here are some new features and enhancements in the Wallaby release.
The first one, as I just mentioned: we support shelve operations, which allow users to shelve or unshelve a VM with accelerators attached. We have also introduced two new device drivers: one is an Intel NIC driver, and another, from Inspur, is an SSD device driver. As we support more and more devices, the functionality of the devices becomes more and more complex, so Cyborg introduced a new configuration file which allows admin users to configure a device with different features: for example, we can configure which function is loaded on the NIC, or configure the vGPU type for GPU devices. Next slide, please. Here is something we plan to do for the next release. The first is to complete the SmartNIC support. We already support a SmartNIC driver on the Cyborg side, and on the Neutron side we already have some enhancements; in the next release, we plan to finish the implementation in Nova so that users can boot up a VM with a SmartNIC attached to a Neutron port. The second is vGPU support. As with the first, the vGPU driver has been implemented on the Cyborg side, and we need to complete the whole workflow on both the Cyborg and Nova sides; we hope to finish it in the next release. We also plan to support suspend and resume operations for a VM with an accelerator attached, which also requires changes on the Nova side. All of these are cross-project enhancements, I think. For Cyborg's own part, we plan to add new APIs to disable and enable devices, and also new APIs to show programming progress for certain devices like FPGAs, et cetera. Next slide, please. Yeah, I think that's all from the Cyborg side, and if you have any questions, you can come talk with us on our IRC channel. I have also posted the etherpad for the PTG, and if you have any topic you want to discuss during the PTG, please add it. Yeah, that's all from me. Thank you. Thank you, Xinran, and the Cyborg team, for such nice work.
I think most of the team is in an Asian time zone, which is one of the difficulties, but I really thank our community for such good collaboration across all the time zones and countries. So next we have Masakari, and I'll let Radosław go ahead. Okay, hello everyone, can you hear me? Okay, great. So today I'm presenting Masakari and what we have done in the last cycle. Next slide, please. Okay, so a quick recap. This was already presented in the last cycle, but let's recap again. What does Masakari do? The Masakari service is about high availability for the instances in an OpenStack cloud: basically, what Nova creates is ensured with high availability. The way Masakari does it is by employing monitors that watch various parts related to instances: for example, monitoring the liveliness of instances, monitoring the hosts themselves, and triggering notifications. The notifications get into the Masakari API, and then Masakari acts on them by running recovery workflows. And as the slide mentions, the monitors actually rely on external sources of truth, like libvirt, or Pacemaker in the case of hosts. Next slide, please. And the question could be: why Masakari? Because when we think about clouds, we usually think about cloud-native apps that have built-in resilience, so that if one instance, one replica of the app, fails, there are other replicas that can take over. But not all apps are cloud-native apps yet. With those legacy apps, the only way to ensure that they are available is to employ some external high-availability mechanism. Either it would be on the client side, so the client would run Pacemaker themselves, or it could be external from the perspective of an instance, and then Masakari is one of the solutions for that. So when the instances are our own, that would be the internal usage, the first point.
On the other hand, Masakari has also found usage in public clouds where the offering includes some kind of SLA, so that when a user asks for an instance, they expect it to just live there; that's what's being paid for, basically. Okay, next slide, please. A quick recap on the dependencies of Masakari. It's a very simple service. It requires Keystone for authentication; so far, it cannot run standalone. And also Nova, because that's the only service it cares about for this high availability for instances. Next slide. And a quick recap on the components. As I mentioned already, we've got the API that receives the notifications and the engine that runs the recovery workflows; these are the control-side services. On the client side, we've got the shell client, the CLI, which is based on the OpenStack client and backed by the OpenStack SDK. So we got rid of the legacy client; we no longer have it. The other option is to use Python directly: the OpenStack SDK is fully supported by us. And we've got the dashboard plugin, the Horizon plugin, which is also 100% on the OpenStack SDK now. And regarding the monitors that are available: basically, one could imagine just creating a monitor based on any source they wish, but we already provide four monitors. The first is the instance monitor, based on monitoring libvirt, tested with QEMU/KVM. The other is the host monitor; this is based on Pacemaker, as mentioned before. There's also the process monitor, which is basically like running ps on the same machine. And the introspective instance monitor, a monitor that actually goes, via the QEMU guest agent, and runs a query against the machine itself. Okay, next slide. And regarding the Wallaby release, the last one, which we are presenting today, we've got a few features to present. The first is support for disabling and enabling failover segments.
So basically, Masakari is hidden from the end users of the cloud; it's only up to operators to configure it at each level. One of the configuration concepts is the segment, and so far the segment was pretty static: it would just exist and have hosts added to it. Now, whole segments can be enabled and disabled at will. So if there is an upgrade, or some maintenance ongoing on that segment, you can prevent Masakari from acting on it. For example, you disable some instance of Pacemaker, and Pacemaker suddenly reports that the host is down; but you know you're just doing some maintenance around that, so you just disable the whole segment and are fine with that. The other feature is support for smoothing out the decision about whether to consider hosts down or not. This is for edge use cases, as Inspur proposed it. When the network connectivity is expected to fail from time to time, the monitors could report that the host is down; but the assumption is that the network can go down, so we choose to base the decision on whether the host is actually down on several samples of that state. For example, during one minute, if the next five samples say the machine is down, then the machine is down; otherwise, the machine isn't down. Another, somewhat smaller, feature is support for running the host monitor in a containerized environment, so without systemd; systemd was previously a hard dependency of the host monitor, and it no longer is. And finally, support for using system-scoped tokens when contacting Nova is now also possible from the Masakari side. This cycle we also had several bug fixes, even for long-standing bugs, so we stabilized the project a bit. And I've already said that we dropped the legacy client, so we are free of that as well. Okay, next slide, please. And sadly, this is basically a copy from the last presentation.
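The host-down smoothing described a moment ago — only declaring a host down after several consecutive down samples — can be sketched like this. The class and method names are invented for the example; this is not Masakari's actual monitor code.

```python
# Illustrative sketch (names made up, not Masakari's API) of the
# "smoothing" decision for edge deployments: only declare a host down
# after N consecutive failed samples, so a brief network blip does
# not trigger a false failover.

from collections import deque


class HostDownDetector:
    def __init__(self, required_consecutive=5):
        self.required = required_consecutive
        # A sliding window holding only the last N samples.
        self.samples = deque(maxlen=required_consecutive)

    def observe(self, host_is_reachable):
        # Record one monitoring sample; True means the host responded.
        self.samples.append(host_is_reachable)
        return self.host_down()

    def host_down(self):
        # Down only when the window is full and every sample failed;
        # any single successful sample resets the verdict.
        return (len(self.samples) == self.required
                and not any(self.samples))
```

With `required_consecutive=5` and one sample per ~12 seconds, this matches the "five samples over one minute" behavior from the talk.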
So we still have the same plans for Xena. We have only prepared for them and discussed them further during the last cycle; we didn't really progress on the implementation side. Going quickly over them again: the first is the evaluation of Pacemaker alternatives. Pacemaker wasn't designed just to monitor hosts, but to actually be a high-availability solution at that level, so we are evaluating other solutions; for example, Consul and etcd could be more lightweight for that. Another is moving fencing and host status closer to Masakari, because at the moment Masakari doesn't really know whether fencing has happened or not, as this information is in Pacemaker and is not being queried at all. It would be best if Masakari could just query it for itself. And finally, the restoration of the original state of things as they were before Masakari took its actions. Imagine Masakari moved a machine from a failed host to another host, and the failed host actually comes back online: some operators would like to have the original state restored, so that if the machine lived on host A and now lives on host B, they would prefer for it to be moved back to host A. And that's it from me. Thank you very much. Awesome. Thank you very much. Also, congratulations on being elected as vice chair of the technical committee. All right. Lastly, we have OpenStack Manila, which will be presented by Goutham, the current PTL. Take it away, Goutham. Hello, hello. Hi there, everyone. My name is Goutham Pacha Ravi, and I'm the project team lead for the OpenStack Manila project team. Since mine is the last presentation, I'll try to wrap up quickly, but there's a lot of stuff that I'll be talking about, and I'll be doing that fast. Sorry about that. So Manila provides self-service, scalable shared file systems to end users, and these are the stats for the Wallaby cycle.
Pause your video here and look through them. We had some great momentum coming into Wallaby, and we maintained it, thanks to all the project contributors, the users, developers from all across the world, members of the wider OpenStack community: everyone, literally, that made this release possible. We can move on. The one thing that I am especially proud of is all of the new contributors that we gained and the number of bugs that we addressed in this cycle. Wallaby brings in a whole new set of features, and I encourage you to read the release notes to get the finer details on them. So let's dive into some highlights. We dove right into refactoring the API RBAC policies, alongside the other OpenStack services. We now support the default roles and scopes that are defined in the OpenStack Identity service. This affects close to the 200-odd API methods that Manila was exposing, to facilitate system-scoped interactions. We dropped the need to have the project ID in the v2 API endpoints, so please adjust your service catalogs and take advantage of this change. All of these new RBAC policies are not enabled by default in the Wallaby cycle; we're going to continue enhancing them through the Xena cycle, and we'll be sure to backport any fixes to Wallaby as appropriate. With the Wallaby cycle, it is possible to update security services on in-use share networks. This was a sought-after enhancement to the use of security services, so contributors worked closely with large-scale users and implemented it. This should provide a huge UX improvement for day-two operations, in the same self-service manner that Manila promises. We now also support OSProfiler, which can help deployers trace their requests through the service stack and triage issues. We're also turning on the health check endpoint by default.
So your tooling that sits on top of the Manila API (HAProxy, et cetera) can monitor the health of the API service with this endpoint. There are more day-two quota control and provisioning options provided with the Wallaby cycle. Deployers can now set per-share gigabytes as a quota. This can be set across the deployment, per project, per user, or per project per share type. What this does is prevent users from creating extremely large shares that you cannot really handle, while giving them a synchronous error response. And if you'd like a more fine-grained approach, we also support new provisioning options as extra specifications on your share types: these can be set as the minimum share size, or the maximum share size, that can be used with a given share type. Alongside that, another feature that our contributors developed for large-scale users was share server limits. This is especially helpful in environments where deployers would like to control the number of shares, or the capacity of storage, that is exported via a given share server. Next slide, please. We also had several driver improvements, the main one being the inclusion of a new driver for Zadara VPSA storage. The CephFS driver saw a massive rewrite, a major overhaul that we're especially proud of. Manila now interacts with the Ceph manager daemon instead of using a Python library called ceph_volume_client, and it's able to work with multiple file systems on a Ceph cluster. So you can have multiple backends on the same Ceph cluster and direct each CephFS driver to work with an individual file system. The Ceph-side code actually does deferred share deletions, which improves the performance and the time to get rid of datasets from Manila. And we now also have support for snapshot cloning, which is advertised with the extra spec create_share_from_snapshot_support.
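The layered provisioning guardrails described above — a per-share-gigabytes quota plus optional minimum/maximum share sizes carried as share-type extra specs — can be sketched as a single validation step. This is not Manila code; the function and the exact check order are invented for illustration (the extra-spec key names follow the `provisioning:` naming mentioned in the Wallaby release notes, but verify against the real documentation before relying on them).

```python
# Illustrative sketch of validating a requested share size against
# the deployment-level per-share quota and the share type's optional
# min/max provisioning extra specs, returning a synchronous error
# string instead of accepting an unserviceable request.

def check_share_size(size_gb, per_share_gigabytes=None, extra_specs=None):
    extra_specs = extra_specs or {}
    min_size = int(extra_specs.get("provisioning:min_share_size", 0))
    max_raw = extra_specs.get("provisioning:max_share_size")
    max_size = int(max_raw) if max_raw is not None else None

    # Deployment-wide (or per-project / per-share-type) quota check.
    if per_share_gigabytes is not None and size_gb > per_share_gigabytes:
        return "quota exceeded: per_share_gigabytes"
    # Share-type level guardrails.
    if size_gb < min_size:
        return "below share type minimum"
    if max_size is not None and size_gb > max_size:
        return "above share type maximum"
    return "ok"
```

The point of the sketch is the layering: the quota is a blunt deployment-wide ceiling, while the extra specs let operators tune limits per share type.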
And there were a lot of UX improvements with respect to asynchronous failures in this driver. The best part is that, as part of all of these enhancements, you can also reduce the capabilities in the security profile that's awarded to the Manila user interacting with the cluster on your behalf. The NetApp driver saw a lot of enhancements as well. It supports the in-use security service update. It also supports FPolicy, which is a really cool feature; check out the release notes and NetApp's documentation for this, which is all in line with the security enhancements that went in with the rest of the release. Alongside that, LDAP and Kerberos security services have been enhanced with this driver, and there are lots of performance and scaling improvements in the driver. We also support LDAP with the container driver now, so the gamut of drivers supporting security services is increasing release after release. Next slide, please. This is my last project update slide, I promise, but definitely my favorite one to share. Wallaby was a great cycle for this project, because we managed to mentor and onboard seven interns through it. I'd like to give a shout-out to the interns themselves: the Outreachy interns, Dina Saperbeva and Paul Ali, who worked on manila-ui and also designed unified limits, something that we might pick up in future cycles; and three students from Boston University, Ashley Rodriguez, Mark Tony, and Nicole Chen, who worked on the OpenStack SDK. And I'd like to give a shout-out to their mentors, Victoria Martinez de la Cruz, Kendall Nelson, Maari Tamm, Jeremy Freudberg, and everyone in the community that helped with this process. It was so much fun; we're going to do some more of it in the next release. We've achieved quite a bit of parity with the OpenStack client, and we're soon going to look to complete that work and start deprecating the shell portion of python-manilaclient.
Alongside that, Manila is now also part of the interop guidelines. There are several new tests that made it into the guideline, and these tests are applicable to the last three releases of OpenStack. So if you're running a cloud and you have Manila installed, I encourage you to run these interop tests and report your findings. And there are a few more things that we're planning to work on, which I can talk about on the next slide. This is for the Xena release; these are just the top enhancements that we have in mind right now. We're looking at increasing service resiliency: what goes on with service reporting, the health checks that happen from time to time in the services, et cetera. We're also looking at enhancing service recovery as far as failures are concerned: if the cloud goes down, or a particular node goes down, what do we do the next time we come back up? There's also further work that's going to happen on secure RBAC. As I mentioned, we're going to be testing a lot more, and if there are any bug fixes that we need to backport to the Wallaby cycle, that will be a priority for us. We're looking to complete the privsep migration work that we got started on during the last cycle. And there are several driver enhancements. So if you'd like to work with us, please do join us at the Project Teams Gathering next week, share your feedback with us, and work with us through the Xena release. That's all I had. Thank you so much, and I'd be happy to answer any questions. Thank you, Goutham. It's so nice to see a lot of activity going on in Manila, and especially that Manila is now engaged with the interop working group, so there is interoperability for shared file systems as well. We now have two questions; thanks, Jimmy, for putting them up in the chat. One is for Neutron and one is for Cinder. So, first, the question for Neutron.
Is there going to be any integration with networking-leading companies, like Fortinet, Cisco, or F5, to boost performance in areas like forwarding, load balancing, et cetera? So, Slawek, over to you. Okay, thanks for the question. So basically, there is nothing going on in this area currently. Also, historically, in previous releases, functions like firewalling and load balancing were not inside the core Neutron project; they were outside, in what we call the Neutron Stadium projects. There was neutron-fwaas, firewall as a service, in the past. There was also a Neutron load-balancer-as-a-service project; that one was later superseded by Octavia, which is a standalone, separate OpenStack project. So questions about load balancing and integration with hardware from those companies should probably be asked of the Octavia team. Regarding firewalling, unfortunately, there is nothing going on: recently, like two cycles ago or something like that, we had to deprecate the neutron-fwaas project due to lack of maintenance and lack of activity. So that project is not maintained anymore; there are no new releases, and there is nothing regarding this currently in Neutron. Okay, thanks. One thing it's probably worth mentioning: these vendors are not prevented from creating an ML2 driver, and ML2 drivers can support some of these functions as pass-throughs; it's just not something the community can actively develop. Yeah, thanks. And we didn't have the Octavia presenters here, but we have a very active team for the Octavia project, so feel free to contact them about load-balancing enhancements. And for firewalling, as Slawek mentioned, neutron-fwaas was retired, but if your company wants to maintain it, feel free to come back to the OpenStack Technical Committee and we can think about a plan. One more thing maybe worth mentioning regarding firewall as a service.
We deprecated this project as an implementation, but we still keep, and still kind of maintain, the definition of the service's API, which lives in neutron-lib. It's still there because we had some requests; I don't remember from what company, but they had some custom implementation of this API and they wanted to keep it like that. So the definition of the API is still there. If anyone is interested, the project can probably be revived, or you can have your own implementation of this API; that's still possible. So, Slawek, when you say your own implementation, can it be done with the upstream project, or with a downstream implementation of those APIs? The one for which we had the request, if I remember correctly, was kind of a downstream project, but we said that it's doable for us and we can keep this API definition in neutron-lib, and it's still there. Okay, cool. Thanks, Slawek, for the detailed answer. Next is for Cinder; Brian already replied in the chat, but I'll repeat the question for the viewers. The question for Cinder is: as the cost of disks can be expensive for capacity purposes, is there a plan for integrating backup with tape devices? Brian, can you please elaborate? Sure. Nobody's proposed that, but it makes a lot of sense, so we certainly wouldn't be opposed to it. At some point there was a driver IBM had for the Tivoli Storage Manager. I don't know what the backend technology of that thing was, but it was removed in Victoria, as there was nobody interested in continuing to maintain it. The backup driver interface is much simpler than the volume driver interface, so it would not be that difficult to do a tape device driver, but we'd be interested in a vendor contributing one. That would be great. So we don't have one; nobody's brought it up until now.
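To give a feel for why Brian says the backup driver interface is "much simpler" than the volume driver interface, here is a purely hypothetical skeleton of what a tape-oriented backup driver could look like. The class and method names below only mirror the general shape of a backup driver (back up, restore, delete) and are not Cinder's real signatures; the device path and in-memory store are stand-ins for illustration.

```python
# Purely hypothetical skeleton -- NOT a real Cinder driver -- showing
# the small surface a backup driver needs to cover: write a stream of
# volume data, read it back, and delete it. Tape suits this because
# backups are written and restored sequentially.

class TapeBackupDriver:
    def __init__(self, device="/dev/nst0"):
        self.device = device          # assumed tape device node
        self._store = {}              # stand-in for data written to tape

    def backup(self, backup_id, volume_chunks):
        # Stream the volume's chunks sequentially to the tape target.
        self._store[backup_id] = b"".join(volume_chunks)

    def restore(self, backup_id):
        # Read the sequential stream back for restoration.
        return self._store[backup_id]

    def delete_backup(self, backup_id):
        # Mark the backup's tape region reclaimable.
        self._store.pop(backup_id, None)


drv = TapeBackupDriver()
drv.backup("b1", [b"abc", b"def"])
```

The point is only the narrow contract: three sequential-data operations, with no need for the attach/detach, snapshot, or migration machinery a volume driver carries.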
So if you have a vendor in mind, or somebody you work with, you might want to discuss it with them, and they can get in contact with us, because it would be a nice thing to have. Thanks. Thanks, Brian. So before I pass it to Kendall and Erin for the wrap-up, I'd like to say thanks to a few of our teams, and especially the community members, whether they are contributors, operators, or users; even reporting a bug is a great help to us in making this 23rd OpenStack release. And our release team, which has now done 23 on-time releases; OpenStack is not a small project, we have 47 or 48 projects within it and more than 300 deliverables. So, an amazing job by the release team again, and by all our supporting teams too: the Infra team, and the QA and Requirements teams, who have been keeping CI/CD running and up for projects so that they can develop and implement their features without any delay and with proper quality testing. And even though it's the COVID-19 pandemic time, and we all have challenges, both personal and professional, seeing this OpenStack Wallaby release with so many contributions and so many features is really great for everyone. And at the end, thanks to our board members, the Foundation, and the Foundation staff; they have been providing us support everywhere to build the software and make this awesome community. Whenever we need anything, we just ping them, and they provide us all the support and the platform. So thank you, guys; thank you, everyone. I'll pass it now to Kendall. Yeah, so a quick note about the PTG. The event is running next week. It is all virtual, so registration is free. You are encouraged to attend if you have topics to discuss with the project teams. We have about 45 teams signed up to participate at various points throughout the week, and we hope to see you there and get involved. And then I guess Erin will do our final wrap-up.
Thank you so much to Ghanshyam and Kendall for MCing today, and to all the PTLs that were able to present. We did have one team, the Glance team, that wasn't able to join us this morning, so we will be posting their video on YouTube in the next few days. If there are any other projects that would like to join that, we're happy to take your videos and get those uploaded. I did want to do one final plug for our new OpenInfra Live series. Be sure to mark your calendars and join us on Thursdays at 1400 UTC. The website is here; we're still getting it built out, but we do have the next few weeks listed there. Next week we're doing a behind-the-scenes TC meeting, live from the PTG, so I would love to have you join us. Thanks, everyone, for joining. Have a great day. Thanks, everyone, bye. Thank you, bye.