and community updates, and a lot of great topics. My name is Ghanshyam Mann. I'm the chair of the OpenStack Technical Committee and your host for today's episode. As you might know, OpenStack Yoga was officially released yesterday, and a lot of great features and code have been contributed by many contributors in the Yoga release. In today's episode we will be discussing a few of the key features from eight projects, along with a Yoga overview. This is a live episode and we would love to have questions from you all, so feel free to post them in the chat and we'll try to answer them in between, or mostly at the end of the episode.

Before we start, I would like to thank all the supporting members of the Foundation. Without their help and support for our Foundation, which is building our community to write software, even this OpenInfra Live episode would not be possible. So here's thanks to all these member companies.

Before we start, I would like to give you a little overview of OpenStack Yoga, with stats and a few updates from the Technical Committee, or governance, side. OpenStack Yoga is the 25th on-time release, which covers around 12 and a half years. And yes, it's named Yoga, so OpenStack has completed yoga on time. It has a lot of hardware support features, like SmartNIC and GPU support, cloud-native tooling integration such as Kubernetes and Prometheus, and a lot of retired technical debt to make it more stable and more maintainable. So, a lot of features. In terms of contributor stats, there were 13,500 code changes from 680 developers from 125 organizations and 44 countries, so it's still one of the most active open source projects we have. Huge thanks to all the contributors and project teams, and all the supporting teams like QA, Infra, OpenDev and requirements, and a special thanks to our release team for delivering a continuous on-time release, even for the 25th time. Next slide please.

A few of the highlights from the Technical Committee: we have the new project Skyline, which is a dashboard for OpenStack. This is a very exciting project, and if you'd like to contribute, help out, or try it, it is an official OpenStack project now. Next is our release cadence adjustment. The Technical Committee passed a resolution on adjusting the release cadence: the TC will designate major releases in a tick-tock arrangement, such that every other release will be considered a tick release. Along with the existing upgrade model of upgrading from one cycle to the next, we will support upgrades from one tick release to the next tick release, skipping the one cycle in between. We have the resolution link, and also a mail from Dan Smith with the details, and we will be discussing it at the PTG in the TC + community leaders interaction session. So join us on Monday, the 4th of April, 14:00 to 16:00 UTC, if you'd like to know more details about this change and how it will impact you in terms of project maintenance, or from the user and operator perspective.

Next is our community-wide goal update. One change here is that community-wide goals were previously tied to a release cycle, but now we have decoupled them from the release cycle. Instead, we will select a few community-wide goals as "selected and active" goals, and we will work on those continuously until we complete the work. Currently we have two selected and active goals. One is the migration from oslo.rootwrap to oslo.privsep, both libraries for running commands with elevated privileges; the latter one is more secure and faster. So that work is going on.
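To give a flavor of what that migration means in code, here is a minimal, hypothetical oslo.privsep sketch. The package name, config section and mount helper are invented for illustration; real projects define their own privsep contexts:

```python
from oslo_concurrency import processutils
from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context

# A privsep context: a long-lived privileged helper process that holds
# only the Linux capabilities listed here, instead of running whole
# commands as root through sudo and oslo.rootwrap.
sys_admin_pctxt = priv_context.PrivContext(
    'mypkg',                                # hypothetical package name
    cfg_section='mypkg_sys_admin',          # hypothetical config section
    pypath=__name__ + '.sys_admin_pctxt',
    capabilities=[caps.CAP_SYS_ADMIN],
)


@sys_admin_pctxt.entrypoint
def mount(device, mountpoint):
    # Runs inside the privileged helper; callers simply invoke
    # mount(...) as a regular Python function.
    processutils.execute('mount', device, mountpoint)
```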
And next is our secure RBAC work. There has been a lot of work going on for many cycles, and in this Yoga cycle we have changed a few of the directions in a way that will be more helpful for operators using the defaults. So that is also going on. We also had the Technical Committee election: we have three newly elected TC members, Arne, Brian and Slawek, and thanks to them for joining us. A few of the other updates: the TC tag framework has been removed, and the Adjutant project needs maintainers and a PTL, so if you're using this project or you would like to maintain it, let us know on the mailing list or on IRC. And OpenStack's next release name is OpenStack Zed.

So we have eight project leaders here to discuss their key features, and the first is from Cinder, Brian Rosmaita. Hi, Brian.

Hi, thanks, Ghanshyam. My name is Brian Rosmaita. I was the Cinder PTL for Yoga, and I have a subtitle on my slide because I couldn't help myself. Okay, so what happened during the cycle? Some good stuff. We got two new microversions in the Block Storage API version 3: 3.67 aligns us better with some of the other APIs and helps with the secure RBAC initiative, and 3.68 is a very long-requested feature. The Nova patches that use it missed feature freeze, but they should be merged in Zed, so that'll become available for everyone. We got new drivers for various technologies: NVMe over TCP, Fibre Channel and iSCSI. We also addressed some technical debt. I particularly want to name Stephen Finucane as a hero of Cinder. He's been a long-time Nova core and sort of an all-round gunslinger, working on various OpenStack projects like Oslo and the documentation and so on, but he helped us migrate away from sqlalchemy-migrate to alembic, which basically brings our database stuff into the 21st century. So that was very helpful, and I've declared him a hero of Cinder. And then upcoming in Zed, we've got four new drivers already submitted, for NVMe over RoCE, NFS, FC and iSCSI, so all the basic backend technologies. We plan to complete phases one and two of the secure RBAC community goal. We plan to do a lot of other stuff too; I'm mentioning this because I'm personally working on it, and because I have some more free time now that we've got a new PTL for Cinder, Rajat Dhasmana, who's been a long-time Cinder core, has actually been a core longer than me, and will bring some new enthusiasm and excitement to the project. So we're looking forward to what's going to be going on in Zed. Next slide please.
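As a quick illustration of how a client opts in to one of the new Block Storage microversions Brian mentioned, here is a minimal sketch using plain HTTP. The endpoint, project ID and token are placeholders; a real client would obtain both from Keystone:

```python
import requests

# Placeholders; in a real deployment these come from the Keystone catalog.
CINDER_ENDPOINT = 'http://controller:8776/v3/PROJECT_ID'
TOKEN = 'KEYSTONE_TOKEN'

resp = requests.get(
    f'{CINDER_ENDPOINT}/volumes/detail',
    headers={
        'X-Auth-Token': TOKEN,
        # Microversions are opt-in per request: without this header the
        # Block Storage API behaves as the base 3.0 version.
        'OpenStack-API-Version': 'volume 3.68',
    },
)
print(resp.status_code, resp.json())
```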
All right, so the bad. This isn't quite bad, but I decided on "the good, the bad and the ugly", so something's gotta be bad. We have a good variety of contributors, but the bad part, you'll notice, is an over-dependence on Red Hat. It's not necessarily bad; there are a lot of good people working at Red Hat, including me, but it's always distressing to see too much reliance on one particular company in an open source project, so I'm just sort of tossing that out there. For the last statistic at the bottom, on the patch sets, compare that to the commits and the lines of code: almost twice as many contributors tried to get stuff in as were actually accepted. That shows you that we've got review bandwidth problems, and our cores are working as hard as they can, so what we really need is more community commitment to the project to kind of speed things up. Next slide please.

All right, so the ugly. There are some known issues in the Yoga release, nothing really horrible, but I figured I should point them out; see the release notes for details. One I wanted to point out is the NVMe over Fabrics issues. These were discovered in the os-brick library after the Yoga release. The connector for NVMe-oF has been refactored over the past few cycles, and some legacy issues were discovered that were probably always there but hadn't been detected, but also some regressions. So we're going to be working on that. The reason I wanted to point that out, though, is that we're making changes so that we can test the connector using the LVM backend in the gate, but we need more and better testing on the actual hardware. So we're looking for people to contribute more tests that can be run by the vendors; even if you don't have hardware, you can help us write tests. One thing we discovered was that some of our tests currently don't go far enough: we may attach and detach something, but the problem doesn't actually show up until you try to reattach or re-detach something. So we need to improve our testing, and that's someplace where there are a lot of possibilities for people to do some work with us.

So, getting involved. One thing we've got is a resource count survey. Gorka has been working on trying to redesign the quota system, which has been a pain point for people, and we're trying to get some data on the sizes of deployments. So if you could take a look at that; I put these tiny URLs on the slide so that you could write them down real quick, and I think Allison's going to toss them in the chat at some point. We've got a resource count survey; if you could take that, please. It was supposed to close yesterday, but I've extended it till Saturday. We just need to get some data before we start discussing things at the PTG. And then the PTG is happening next week, so you can go to our planning etherpad to check that out: tiny.cc/cinder-z. Wednesday is drivers' day, so if you're interested in Cinder drivers, or in helping out with any of the testing I just mentioned, please show up. And for how to contribute and all that kind of stuff, tiny.cc/cinder-info takes you to our contributor page with all the information. Thank you very much.
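As a rough sketch of the "go one step further" testing Brian is asking for, the pattern below runs a second attach/detach cycle instead of stopping after the first. The connector here is a self-contained stub standing in for a real os-brick connector, which would talk to actual storage hardware:

```python
import unittest


class FakeConnector:
    """Stand-in for an os-brick volume connector (real ones touch hardware)."""

    def __init__(self):
        self.attached = False

    def connect_volume(self, connection_info):
        self.attached = True
        return {'path': '/dev/fake'}

    def disconnect_volume(self, connection_info, device):
        self.attached = False


class TestReattachCycle(unittest.TestCase):
    def test_attach_detach_reattach(self):
        conn_info = {'target': 'fake'}   # hypothetical connection properties
        connector = FakeConnector()

        # First cycle: this is where many existing tests stop.
        device = connector.connect_volume(conn_info)
        connector.disconnect_volume(conn_info, device)

        # Second cycle: the legacy issues and regressions mentioned above
        # tend to surface only when a volume is re-attached or re-detached.
        device = connector.connect_volume(conn_info)
        self.assertTrue(connector.attached)
        connector.disconnect_volume(conn_info, device)
        self.assertFalse(connector.attached)


if __name__ == '__main__':
    unittest.main()
```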
Thank you, Brian. Stephen is obviously not just a Cinder hero but an OpenStack one, contributing a lot to many other projects. And yeah, Brian, you mentioned very clearly that testing all these detach and reattach scenarios is very important for us. So thanks, Brian, for all the updates. Next we have Mark Goddard from Kolla. Over to you, Mark.

Hi, sorry about my camera. I actually tried to change it, but I think it's got even worse. I'm in a booth hiding from the noise in the office, so I think the camera is going to have to suffer; apologies for that, but let's carry on. The current PTL, Michał, has had a trip to the dentist, so I'm standing in for him. I'm the previous PTL, so hopefully I know enough to keep you up to date. So what have we been doing in Yoga? Well, because we're a cycle-trailing project, we're not actually released yet, but we're firming up the release, so we've got a pretty good idea of what it's going to look like. The first thing to mention is that the binary images that we produce are now deprecated. We have two types of images: source images, which are built from tarballs or Git repository source code, and binary images, which are built from distribution packages, RPMs or debs. So we're dropping the binary images, and this is just to try to reduce our support matrix a bit, because we've got various distributions, and with these two types of images it becomes quite hard to support the project; we're just trying to get that support matrix down. These will be removed in the Zed release, so we recommend that you switch to source-based images in Yoga, or in Zed at the very latest, when you'll be forced to. It's a bit of a change in terms of how the images are built, particularly if you're customizing things, so if you have any questions or concerns about this, please do come to us in the upstream community, whether that's through IRC or the openstack-discuss mailing list, or join us at the PTG next week to talk about it.

So the next point... sorry, that was just the first item. The next point is the Ansible collection, which is new in Yoga. It's a way of sharing Ansible content between the Kolla projects, so mostly between Kolla Ansible and Kayobe. It's starting small, but we're hoping to make more use of it in the future. We've added support for the Venus log management service. We've got two new host OS distributions supported, Rocky Linux 8 and openEuler 20.03. We changed the default RabbitMQ policy to use non-mirrored transient queues, because we were using a slightly strange setup of mirrored transient queues, which RabbitMQ claims to be not ideal, let's say, and which could lead to some issues. So that will come in with Yoga; we also backported it, but made it optional there, defaulting to the previous behavior. We've got better support for Horizon custom themes, so you no longer have to rebuild your Horizon image to get a theme in there; you can specify it in your Kolla Ansible configuration. The Ironic deployment now defaults to iPXE, which is in line with the changes in the Ironic project. We've added a libvirt exporter image for Prometheus, so we can now get metrics about libvirt into our Prometheus monitoring. And finally, Zun now has support for using Cinder Ceph volumes. Next slide, please.

On the Kayobe side, we've got support for Rocky Linux as a host OS. We've got support for building and using multiple different overcloud disk images, so if you wanted to have, say, your controllers using a different image to your compute nodes, that is now possible, and certainly much easier than it was before. We've got support for deploying libvirt as a daemon on the host rather than in a container, and the reason we went down this path is to unlock mixing of host OS distribution and container distribution. That might mean we could do something like having Rocky Linux on the host and CentOS Stream in the containers, or perhaps even something more extreme than that. We've got improved support for configuring apt repositories on Ubuntu, and we've made some improvements to the proxy configuration. Next slide, please.

So we're still in the planning stages for Zed, of course, but the things that we're lining up include the dropping of the binary images that I talked about before. There's a bit of an open question around whether to continue supporting CentOS Stream 9 or whether to move to Rocky Linux 9, and that extends to both the host OS and the containers that we support, so we'll certainly be interested to hear people's opinions on that; we'll be discussing it at the PTG, of course. And then there are a few features that we've had in the pipeline for a little while now, and we really ought to get them over the line. So first is secure RBAC.
We've got a few patches around that at the moment, but we're just kind of going to keep an eye on how things progress in Keystone before landing them. There's support for systemd and Podman: we're going to start running the Docker containers as systemd units, and then, on top of that, add support for Podman as an alternative to Docker. We've had support for Let's Encrypt in progress for some time now; it's been through a few iterations, and it would be really nice to get it landed in Zed. And then finally, we'd like to support OpenSearch as an alternative to Elasticsearch, as we're basically stuck at the last open source version of Elasticsearch that was available, so it's starting to get a little bit crusty now. It's time to move to OpenSearch, I think. And of course, please do join us at the PTG to discuss these topics and more. We look forward to seeing you there. Thank you.

Thank you, Mark, for all the updates. Really good to see all the new distro support in Kolla. And on secure RBAC we definitely have a lot of open questions, and hopefully we'll be discussing and figuring those out at the PTG. Thanks to you and the Kolla team for all your work. Great. So next we have Maysa Macedo from Kuryr. Over to you, Maysa.

Thank you. So hello, everyone. I'm Maysa Macedo, Kuryr PTL. During the Yoga cycle we focused mainly on two fronts in Kuryr: improving the debugging and the management of resources. One of the ways that we used to improve the debugging of a cluster, meaning a Kubernetes cluster which is using Kuryr, is to provide Kuryr events as Kubernetes events. So if the user or the operator wants to know what is happening with some specific Kubernetes resource, for example if it's taking long to move to a running state or anything in that sense, it's possible to use the kubectl command, by describing the pod resource, the service resource or any other resource that Kuryr supports, and also by looking at the kubectl events. This is a new improvement that we have on the debugging side of things.
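To illustrate the debugging workflow Maysa describes, here is a small sketch that reads the Kubernetes events for a pod using the official Python client. The namespace and pod name are hypothetical; in practice, `kubectl describe pod my-pod` surfaces the same Kuryr-emitted events:

```python
from kubernetes import client, config

# Assumes a kubeconfig for a cluster whose networking is provided by Kuryr.
config.load_kube_config()
core_v1 = client.CoreV1Api()

# Kuryr now records Kubernetes events against the resources it wires up,
# so Neutron-side progress and errors show up in ordinary event queries.
events = core_v1.list_namespaced_event(
    'default',
    field_selector='involvedObject.name=my-pod')  # hypothetical pod name

for event in events.items:
    print(event.type, event.reason, event.message)
```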
Aside from that, we also focused a lot on reducing the Neutron resources that Kuryr uses when you deploy a cluster with it, because basically, when you create any pod, service or namespace, Kuryr will create either a load balancer for the service, or ports for the pods, or security groups for the network policies. So we analyzed this and discovered certain fronts where we could reduce the amount of resources that we consume from OpenStack and reduce the workload that we put especially on Neutron. For example, for pods that are completed and have finished their task, the ports are no longer kept around, and many other improvements in that sense were included. Aside from that, we tried to focus on bringing Kubernetes scale to OpenStack with Neutron and with Octavia. We improved the way that Kuryr does bulk port creation so that too many ports are not created at the same time, restricting the number of bulk port creations happening in parallel. This way the cluster can scale in a more stable manner. Next slide, please.

For Zed, as a plan, we want to keep working on the scalability improvements, especially with Neutron, to improve bulk port creation operations happening at the same time, multiple subnets being created at the same time and attached to a router, and any of these operations. This is something that we are closely working on with the Neutron team, and we want to keep going on that front. Aside from that, we have had many improvements in the reconciliation of Kubernetes resources with OpenStack resources, because Kuryr maps each Kubernetes resource to an OpenStack one. So whenever an operator decides, by any chance, to remove one of the OpenStack resources that Kuryr manages, Kuryr will try to reconcile that resource and basically recreate it, to make sure that the Kubernetes resource is properly wired. This is already in progress for the load balancers, and we will continue it for the other resources. And aside from that, we want to improve on the health checks front for the Kuryr controller and the CNI, to make them more stable and to ensure that the container is not restarted when it could still be considered alive. That's all, and I really hope to see you all in the PTG sessions for Kuryr, which will be happening on Monday and Tuesday. If you have any topics that you would like to discuss with us, please do join us there; it would be great to see you. Thank you.

Thanks, Maysa, for all the updates. Really good to see all this tight integration with Kubernetes, and your team's work. Thank you. So next we have, from Neutron, Lajos Katona. Over to you.

Yeah, hi everybody, it's Lajos. So I tried to collect a few interesting things from the Yoga cycle that I think should be really good to hear about. We have done a lot of things, and I won't go through everything, just a few features which should be interesting for everybody. One is the node-local virtual IP address. That's really cool stuff: you can have virtual IPs which your VMs can share, and which can be accessed only on the same physical node. So it's really useful for load balancing between services, or similar use cases. Another big and really interesting feature is the off-path SmartNIC DPU support with OVN. This was a really multi-project effort, because it involved not just Neutron but also Nova, and there was even work in OVN, which is outside of OpenStack, so it was really a cross-project feature and effort. With this, you can now manage SmartNICs which have their own operating system and OVS running on them; you can manage them from Neutron, and you can schedule your VMs to use them with the help of Nova. Next slide please.

Yeah, another OVN feature: router gateway IP QoS, which we actually already had in previous releases, but now you can use that feature and its API with OVN as well. We keep a list of gaps between OVN and OVS deployments, for example, and this was on that gap list; now there are a few more things we have removed from the gap list, so you can use OVN for more use cases, and this was one of those features. There was another feature in this cycle for quality of service, and it was again a cross-project activity between Nova and Neutron. In previous cycles we worked to make quality of service and scheduling available for VMs which use a minimum bandwidth guarantee; now you can define quality of service with minimum packets per second, and you can schedule your VMs to hosts which have enough packet-per-second capacity for that VM. So it's really cool stuff: with this, you now have even more granularity to schedule your VMs only to hosts where you have enough capacity and bandwidth.
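To make the guaranteed packet rate idea concrete, here is a hedged sketch against the Neutron QoS REST API. The endpoint and token are placeholders, and the exact resource and field names should be double-checked against the Neutron API reference for your release; `min_kpps` is expressed in kilo-packets per second:

```python
import requests

# Placeholders; a real client would discover these via Keystone.
NEUTRON = 'http://controller:9696/v2.0'
HEADERS = {'X-Auth-Token': 'KEYSTONE_TOKEN'}

# Create a QoS policy to hold the rule.
policy = requests.post(
    f'{NEUTRON}/qos/policies',
    headers=HEADERS,
    json={'policy': {'name': 'guaranteed-pps'}},
).json()['policy']

# Attach a minimum packet rate rule; ports using this policy should only
# be scheduled to hosts with enough packets-per-second capacity.
rule = requests.post(
    f"{NEUTRON}/qos/policies/{policy['id']}/minimum_packet_rate_rules",
    headers=HEADERS,
    json={'minimum_packet_rate_rule': {'min_kpps': 100, 'direction': 'any'}},
).json()
print(rule)
```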
Yeah, actually that's it for the coolest stuff in Yoga and what we have done for Yoga. I would like to mention a few things that we plan, or that we see on the horizon, for the Z cycle. There are a few things which we couldn't finish during the previous cycle. One is again a QoS feature: the previous one was the minimum guaranteed packets per second, but there is another one, the packet rate limit. You could say it's an analog of the bandwidth-limit QoS rule type, but for packets per second. And while the previous feature, which I mentioned for Yoga, was a scheduling feature, this one, the packet rate limit, is a data plane enforcement feature: there will be no scheduling guarantee, but the packet rate will actually be limited on the data plane. There is a plan to work on the distributed metadata feature, which will help avoid the many agents which we have for Neutron. If we finish that, there will be no need for the metadata agent, because the OVS agent will be responsible for handling the metadata traffic between your VM and the metadata service. So it will perhaps make the scaling and the deployment of your OpenStack easier.

The next thing is more of a project-internal thing, but it can be really interesting for people who use not just core Neutron features but other networking things, like Firewall as a Service. FWaaS was retired a few cycles ago because there was nobody to maintain it, and I'm really happy to announce that we now have a team who have started to work on Firewall as a Service. They have started to maintain it, they have started to fix the CI jobs, and they would like to add new features; it seems that there will be OVN compatibility, so we will be able to use Firewall as a Service with OVN as well. It's really good to see that there are new contributors to OpenStack and to Neutron. And there is another quality of service item; QoS has been a really hot topic over the last few cycles. That is to make the previously mentioned guaranteed minimum bandwidth feature work with OVN, because it currently works only with the OVS and SR-IOV backends. With that, we will have one less gap on the OVN gap list, so it's really good to see that we will have more backends which provide the full feature list of Neutron. Yeah, that's it from me on what we can expect, and I hope that we will meet during the PTG next week. We will have sessions from Monday, 14:00 or 13:00 UTC, I can't remember, sorry. Yeah, thank you very much. See you next week.

Thanks, Lajos, for all the updates. The SmartNIC support is definitely something many operators and users are looking forward to, and it's also good to see the FWaaS plan for adding OVN support. Thanks to you and the whole Neutron team for all the work and planning you are doing for the Zed cycle. So next we have, from Ironic, Iury Gregory, and you'll be providing all the updates from the Ironic project.

Sure, thanks. Hello everyone, my name is Iury Gregory. I've been the Ironic PTL since the Yoga cycle, and here are some updates on what we did during Yoga. Our goal, basically every cycle, is to try to improve the experience for our end users, and we were able to achieve some cool features that I hope they will enjoy. First of all, the default deploy boot mode has changed from legacy BIOS to UEFI. So if you are an operator and you only want to deploy nodes using legacy BIOS, after upgrading to Yoga you will need to apply some changes to keep that working, since the default is now UEFI.
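For operators affected by that change, the usual options are the `[deploy] default_boot_mode` setting in ironic.conf or pinning individual nodes. Below is a hedged openstacksdk sketch of the per-node approach; the cloud and node names are hypothetical, and the exact capability syntax should be confirmed against the Ironic documentation for your release:

```python
import openstack

conn = openstack.connect(cloud='mycloud')     # hypothetical clouds.yaml entry
node = conn.baremetal.find_node('compute-0')  # hypothetical node name

# Pin this node to legacy BIOS via the boot_mode capability, overriding
# the new UEFI default. Note: this replaces the whole properties dict,
# so merge with the node's existing properties in real use.
conn.baremetal.update_node(
    node,
    properties={'capabilities': 'boot_mode:bios'},
)
```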
Booting the final instance via the network, as opposed to the local boot loader, is now deprecated; this doesn't cover the boot-from-volume case or the ramdisk deploy interface. We also have a new parameter in the instance_info field that can be used to distinguish between partition and whole-disk images, whereas before we had to set the ramdisk and kernel parameters for that. If you want to take a look at more of the features that are available, we have all the details in our release notes. Next slide please.

For the Zed PTG, and the plans that we have to discuss: we'll have sessions on Monday, Tuesday, Wednesday and Thursday, most of them from 14:00 to 17:00 UTC, and another one from 21:00 to 22:00 UTC so we can cover APAC time zones. We will discuss Ironic safeguards and custom timeouts per deploy step, which is something the community really wants and is interested in, and many more topics. We are also looking forward to feedback from the community: we have an Ironic community feedback form, and we will be discussing the results during the PTG, so if you have some time, please take a look at the form; it will be in the chat. I hope you will be able to join the PTG next week, and if you have topics that you'd like to discuss, feel free to tell us; we still have some open slots in our agenda, so please do so. And if you have any questions, feel free to reach out to us via the openstack-discuss mailing list with the [ironic] tag in the subject, or the #openstack-ironic IRC channel on the OFTC network. Thank you very much.

Thank you, Iury, for all the updates. I think Ironic is one of the busiest projects at the PTG; you'll be discussing a lot of things there. Great work, and thank you for all the updates, to you and the Ironic team. So next we have Manila, and we have Goutham Pacha Ravi with us. Over to you, Goutham.

Thanks, Ghanshyam. Hello, everyone. My name is Goutham Pacha Ravi, and I was the PTL of the OpenStack Manila project for the Yoga cycle. We had a pretty productive release cycle with the Yoga release. So huge props to our documentation and review contributors, to our Outreachy and college interns, to the wider OpenStack community, especially the Foundation and the release, QA and infrastructure teams, who, and I can't say this enough, have helped us out a lot through this release, and, as usual, to our project maintainers.

Now for some of the highlights of the work we did in the Yoga release. There were some significant improvements to share networks in this release: users can now manipulate in-use share networks, and by that I mean those that have active NAS servers exporting shared file systems off of them. There's also no longer a limitation of having just one subnet per availability zone within a share network. So if the cloud administrator has the infrastructure to support it, Manila users can plug multiple subnets into their NAS servers, allowing for multipath access to the data. Deletion of shares is another area where we've made improvements. It used to be an irrecoverable activity, and we had users from some large deployments ask us for a way to soft-delete resources and keep them around for a configurable amount of time. So we introduced that in this release, and this configurable amount of time defaults to seven days.
Within that time, shares can be recovered, or they can be purged permanently; after the time expires, the service automatically deletes the shares placed in this recycle bin. Further, we made use of the affinity-based scheduler hints that we introduced in the Wallaby cycle, this time for cloud administrators to enable directed placement of shares and share replicas onto specific hosts. And these scheduler hints (we can move to the next slide, Allison) have granular RBAC to prevent manipulation over the lifetime of these resources. As the resources get migrated, automatically or with cloud administrator intervention, the scheduler hints are applied at that time as well. They are now treated as share metadata that cannot be deleted or manipulated by end users.

There were also several driver improvements committed during this release, some important ones being the Container and NetApp drivers adding support for manipulating share network subnets. And we began the work to phase out the wider use of oslo.rootwrap and migrate many calls to oslo.privsep; this is work we expect to complete in the Z cycle. Alongside this, our Outreachy interns and student interns from Northeastern University contributed Manila API support to the OpenStack SDK and strove towards achieving feature parity with the Manila CLI in the OpenStack client.

So what are we going to do for Z? Well, firstly, we have a new PTL. Carlos da Silva has been a core contributor for several releases, and he'll be taking over; he'll be attempting to herd the cats, boil the ocean and what have you, while generally keeping it fun. Besides that, in terms of features, we're looking at improving the metadata APIs, specifically adding support for metadata on share snapshots, export locations, share replicas, share groups and other user-facing API resources. We're chasing FIPS compliance: a lot of work was done to achieve FIPS compatibility in the last release, and we're looking to complete that work towards compliance in the Z cycle. We're also looking to adopt improvements that were made in the Ceph community to make the NFS gateway to CephFS natively more scalable and robust, and we're looking to enhance service recovery and polling operations within the share manager service. We're also looking to deprecate the Python Manila client shell in favor of the OpenStack client; a lot of work was done to achieve feature parity here, and this work should conclude in the Z cycle, at least that's our hope. There's certainly a lot more that our enthusiastic contributors are looking to get done, so if you'd like to help us have a productive Z release, or if you want to influence our plans, do join us at the upcoming Z, or as we call it, Zorilla, cycle project team gathering next week. That's it from me. Thank you.

Thank you, Goutham. That integration into the OpenStack SDK is one of the long-pending pieces of work in OpenStack, and Manila finishing it is one of the great things. And secure RBAC is definitely something we are trying to integrate into almost all of the projects in the Z cycle, so it's good to see that in your plan. Thank you. So next we have, from Octavia, Gregory Thiemonge. Over to you.

Thank you, Ghanshyam. So yeah, I'm going to give you an update on what is new in Octavia Yoga. For the highlights: Octavia now complies with FIPS. FIPS is a standard from the US government that defines default settings related to the security of cryptographic modules.
In Octavia, it impacts the different parameters of the API that can be used in an HTTPS or TLS-terminated listener. For instance, when FIPS is enabled, Octavia denies the creation of a TLS listener that uses weak encryption algorithms or legacy TLS protocols such as TLS 1.0 or 1.1. FIPS is only supported on CentOS-based load balancers. Another cool feature in Octavia Yoga is the Prometheus exporter. This feature allows the user to create a new endpoint in a load balancer that can be used to collect statistics and metrics about the front end and the back end of the load balancer. We have more than 150 metrics exposed by this exporter, and a user can then use a tool like Grafana to display a dashboard with those metrics. If you want to test it, Michael Johnson from the Octavia team created and published an Octavia dashboard on the Grafana website. Please note that the Prometheus exporter works only with TCP-based load balancers, so HTTP and HTTPS; the feature has not yet been implemented for UDP and SCTP load balancers.
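To picture how that exporter is consumed, here is a tiny sketch that scrapes such an endpoint. The VIP address and port are hypothetical (the endpoint is created like any other Octavia listener, using the new PROMETHEUS protocol), and in practice you would point a Prometheus server at it rather than a script:

```python
import requests

# Hypothetical VIP and port of a listener created with the PROMETHEUS
# protocol on an existing load balancer.
METRICS_URL = 'http://203.0.113.10:8088/metrics'

resp = requests.get(METRICS_URL)
for line in resp.text.splitlines():
    # Prometheus exposition format: '#' lines are comments/metadata,
    # the rest are 'metric_name{labels} value' samples.
    if not line.startswith('#'):
        print(line)
```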
And the last highlight is the improvement to the persistence feature provided by Taskflow in the amphora v2 driver. Basically, the amphora driver is responsible for creating and configuring the resources created by Octavia for a load balancer. For instance, when a user requests the creation of a load balancer, the amphora driver creates a VM, plugs different Neutron networks into the VM, and also configures HAProxy and keepalived in the VM. This persistence feature helps the amphora driver be more resilient to failures by storing in a database the internal state of each task that is performed by Octavia. So if a controller crashes or loses its network connectivity while it is creating or updating a load balancer, Taskflow will reschedule the task on another controller, which will resume the work. Please note that this feature is not enabled by default in Octavia. Next slide, please.

For the Z release, our plans are, first, the multi-VIP feature. In Octavia, the VIP port, or the front-end port of the load balancer, is attached to only one Neutron subnet, and a Neutron subnet is either IPv4 or IPv6. It means that if a user wants to have both IPv4 and IPv6 addresses on the front end of their load balancer, they need to create two load balancers. This feature extends the Octavia API by allowing a user to specify multiple additional subnets from the same network when creating a load balancer, so it enables dual-stack load balancers. Then, the failover stops threshold. A failover is the ability of Octavia to recreate a load balancer if Octavia detects that the load balancer is unresponsive or returns an inconsistent status. In case of major outages, for instance a full rack going down, Octavia may detect a lot of failures in the load balancers, and thus it will trigger many failovers at the same time. Those failovers will create a lot of new load balancers and a lot of virtual machines, and as a result they will increase the load on the cloud. So this feature, the failover stops threshold, is an optional mechanism that will prevent Octavia from recreating load balancers if the number of load balancers in error exceeds the threshold. With it, Octavia will be more tolerant of database issues or network connectivity issues, for instance in the case of edge deployments where the load balancers are running on another site. And then the last item, on the amphora v2 driver: we will have some discussion on enabling the persistence by default. This is a long-standing topic for the Octavia team. The amphora v2 driver was introduced in Ussuri; it was enabled by default, but without persistence, in Xena; and now the next step is to enable it, with persistence, by default indeed. And that's all I have for Octavia.

Thanks, Gregory. It's nice to see the Octavia Prometheus endpoint support and the lot of metrics we can pull out from it; that's always helpful. Thanks for all the updates. Next, we have Nova, and Sylvain Bauza from Nova. Over to you.

Hello, thanks. So as you can hear, I'm Sylvain, I'm working at Red Hat, and I've been the Nova PTL since Yoga. So let's discuss what we have in Yoga, in this release. Next slide please, yeah. So I just provided the cycle highlights, as you can see, but just in case, let me tell you that we had several blueprints implemented during the Yoga release. I was also super happy to see new contributors providing new changes, so it looks to me that in Nova we have new contributors; thanks, if you are out there. And if you want to work on Nova, by the way, we will be super happy to see you.

As you can see, there are multiple new supports, and some of them are actually cross-project support features, I would say. We already discussed that with, for example, Nova and Cinder, but let me try to explain them as well. As you see, we also modified some policies in Nova for secure RBAC. Even though we were supporting secure RBAC, we saw that we did not have good role defaults before, so what we did during Yoga was to provide new policies. You can use them, but if you want to, you need to opt in to them; for the moment they are not the default, and you need to modify your options to use them. There is also a new filter, and then I will discuss unified limits as the last one. Another feature that we are supporting is SmartNICs, as Lajos already discussed. Basically, what's possible now is to use those SmartNICs, and Nova will support them; if you want to know exactly how to use this new hardware, you can look at the release notes. Also, what's definitely nice is that we now have experimental tooling for emulated architectures. What that means is that, for example, if you want to test, say, the ppc64 architecture for guests, you can do it. We don't test these in the CI, so of course it shouldn't be treated as production-grade support, but if you want to test it, it's possible; we have ways to modify options to say, for example, I just want to use that for some instances. That's possible for AArch64, and it's also possible for MIPS. Basically, you can use that.

And the last one (it was the first one on the slide, but I'm discussing it last): maybe you already know about unified limits. It's a new Keystone feature: now you can define limits in Keystone, and then Nova will look at them. So it's now possible to use those unified limits, but for the moment this is experimental. So what we would like, if for example you are an operator, is for you to test it, because we provided this as experimental support, and it would be super nice if, as an operator, you could help us by testing those unified limits. The problem that we have is that we don't know about the performance: we know that it works, but we don't know about the performance. So if operators can help us there, that would be nice.
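For operators willing to try it, here is a hedged sketch of seeding unified limits in Keystone with openstacksdk. The cloud and project names are hypothetical, and both the SDK calls and the resource names should be checked against your release, since Nova's consumption of these limits is experimental:

```python
import openstack

conn = openstack.connect(cloud='mycloud')     # hypothetical clouds.yaml entry
nova = conn.identity.find_service('nova')     # the compute service entry

# Registered limit: the default for every project (10 instances here).
conn.identity.create_registered_limit(
    service_id=nova.id,
    resource_name='servers',
    default_limit=10,
)

# Project limit: an override for one specific project (20 instances).
project = conn.identity.find_project('demo')  # hypothetical project name
conn.identity.create_limit(
    service_id=nova.id,
    project_id=project.id,
    resource_name='servers',
    resource_limit=20,
)
```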
Basically, there are also two other features that I didn't cover: we also have Lightbits driver support, and we have VMware FCD support. Again, look at the release notes and you will see them. Last point: I'm super happy to say that we haven't found any regression during Yoga. We only had one RC, one release candidate. That means that even while delivering new features and new bug fixes, we had a really smooth release, and we haven't seen any regression. So next slide please.

Okay, so as you know, we don't have priorities for the next cycle, but I have started to lay out the PTG discussions that we'll have. First, Ghanshyam already started to explain the new release cadence, the tick and tock releases; we'll try to understand and discuss how to use that with Nova. What would it mean, for example, for Compute? What would it mean for upgrades? That's basically it. Another feature that we could be discussing is how to support Manila shares by attaching them to an instance. For the moment, we have a spec that was accepted for Yoga, but hopefully it could be implemented by Zed. Another discussion we'll have is about how to support external CPU power management, meaning that maybe some folks have their own ways to manage the power of CPUs, and Nova could perhaps use that. We need to discuss that at the PTG; it's a bit early to explain more about it, so if you're wondering what that means, just come by the PTG, and we would be super happy. Again, if you're an operator, it would be super nice to have your thoughts. Another point is the secure RBAC policies that we discussed: maybe we should discuss whether we could make them the default. For the moment, as I said, they're opt-in, but maybe they could be the default by Zed, I don't know; that's probably something we could be discussing during the PTG. One more point is about per-process internal health checks: what we could do is have some health checks for internal services. And the last one is about providing a new API microversion for instance create that allows providing a domain name. But again, come to the PTG; that's why I'm asking. Next slide, please.

Okay. You've seen all that; maybe you have questions, maybe you have concerns, or you want to know more about it. That's why I'm noting here that we will have the PTG sessions on Tuesday, Wednesday, Thursday and Friday; you can see the UTC times. Again, feedback is more than welcome, and operators are welcome as well; I would like to hear their thoughts. You can see all the topics that we have for the PTG on the etherpad shown on the slide. So thanks; I definitely appreciated being able to explain Yoga. That's it for me.

Thanks, Sylvain; a lot of things in one cycle. Yeah, secure RBAC is one of the things we have completed in Yoga as far as the new direction goes, and the SmartNIC integration, along with Neutron, is definitely a great piece of cross-project work. Thank you for all the updates. So those were a few updates from eight projects, and we are definitely not limited to just those.
We have around 50 projects, and there are links to the release highlights from those projects, to the complete release notes, and of course to the documentation. So if you would like to know about any other project's highlights, or the detailed features they have delivered, please visit those links. Next slide please.

And yes, as many of our project leaders have mentioned, we are going to have the Project Teams Gathering from next week, April 4 to 8. It's a virtual event and free to register and join. You can register at the link, the schedule is up, and all the etherpads from the different projects are up as well, where you will see all the topics we are going to discuss. You can always add topics to those if there is anything you would like to discuss with any of the projects.

And yes, everyone is excited about the Berlin Summit. We are back to a face-to-face summit, June 7 to 9, and that's definitely one of the great things we have been hoping for after all these pandemic times. We have a schedule featuring global users including Bloomberg, China Mobile, BMW, Volvo and Kekoi, and a lot of great talks and sessions from different companies, different areas and different speakers. The schedule is up, so you can check it out, and sponsorships are available; don't delay on that. Also, you can register before the price increases. You can go to openinfra.dev/summit and get all the details about sponsorship, registration, the schedule, everything.

And we have the next OpenInfra Live episodes: next week will be the PTG, but after that, on April 14, one of our exciting sessions is coming up, a large-scale deep dive, and we have guests from Yahoo who will be talking about their use cases on this topic. And don't delay: if you have any topics to submit for a future OpenInfra Live episode, you can submit them on the ideas.openinfra.live site, and we'll be excited to feature you in one of the future episodes. So thank you to all our great speakers, to the Foundation, and to all the supporting members of the Foundation, who are helping us put on this great show every week. And we'll see you on April 14, Thursday, 14:00 UTC. Thank you, everyone.