Thanks for joining us and welcome to OpenInfra Live, the OpenInfra Foundation's hour-long interactive show sharing production case studies, open source demos, industry conversations and the latest updates from the global open infrastructure community. We are live here on Thursdays at 1400 UTC, streaming on YouTube, LinkedIn and Facebook. My name is Kendall Nelson from the OpenInfra Foundation and I'll be your host for today's episode. Like I mentioned, we're streaming live, so we'll be saving some time at the end of the episode for Q&A. If you have any questions throughout the episode on anybody's topics, please feel free to drop them in the comments section throughout the show and we'll answer as many as we can at the end. First, I want to give a huge thanks to the OpenInfra Foundation members who make the OpenInfra Live series, and so much more like today's release that we'll be talking about, possible. Today we're gonna be talking about OpenStack Zed, which is the 26th version and was actually released only yesterday. So we have some awesome guests today that I will introduce in a moment, but first, data. Who doesn't love data, right? The 26th release, OpenStack Zed, introduces 15,500 changes. We really couldn't do it without our amazing community: 710 contributors this release from 140 different organizations, which is awesome diversity, and even better is the 44 countries represented by those 710 contributors across the 27 weeks that we worked on this release. So I'm super, super excited to share everything that happened in Zed with you today, but obviously we love data, so let's keep going. Over the last 12 years and 26 releases we've seen insane amounts of growth in OpenStack usage.
So right now, we just pulled data from our most recent user survey, which shows that there are over 40 million cores in production. That's 166% growth just since 2020, let alone over the 12 years that we've been working together as a community. Since 2012, our community has merged over 576,000 changes from over 8,900 contributors. It's crazy how global, and how enormous, our community is. And we couldn't do it without our sponsors, like I said before, so we can call out global and growing users of OpenStack like Bloomberg and Walmart, CERN (who doesn't love CERN? I love a good science experiment), OVHcloud, LINE, Workday, China Unicom, China Mobile; the list goes on and on. So many different kinds of users from all different industries, so it's very cool to see all the applications and uses of OpenStack. Moving on, we have an updated map here. OpenStack is obviously a conglomeration of a lot of services for different purposes; it's a very big, sometimes complicated project. So we have updated our OpenStack map to remove some projects that have retired, and we have actually added one of the projects that we'll be talking about later today, the Venus project. We're hopeful that we'll be able to add more in the future, but we do our best to keep this up to date so people can have a nice overview of what an OpenStack cloud could look like and all the different options of services available. First up today, we have the Manila project, and we have the PTL, actually, Carlos Silva, to tell us what happened in Manila during Zed. Take it away. Thank you, Kendall. Yeah, hello, I'm Carlos, the current PTL for OpenStack Manila, the OpenStack shared file system as a service. Today I'll walk you through the highlights of Manila during the Zed cycle, and I'll share some of our plans for the 2023.1, also known as Antelope, cycle.
So starting with some highlights from Zed, the first one to mention is that Manila reached feature parity between the native client and the OpenStack client, and now users and administrators are able to use openstack share commands to communicate with Manila. The only missing bit is the API microversion auto-negotiation, and this means that to use this release of the OpenStack client with an older Shared File Systems API service, users will need to set the API version in their environments. This can be done by configuring the cloud config, specifying the shared file systems API version, or via the shell environment using the OS_SHARE_API_VERSION variable, or even via a CLI override. After this last bit is covered, we intend to add a deprecation warning to the native Manila client commands. And last on this topic, I would like to thank all of the incredible Manila contributors for this achievement. It's a long time since we first started this effort, and it's great to see the results we managed to achieve. Another feature available to be consumed after the Zed release is the metadata API for share snapshots. While creating these snapshots, it's now possible to specify metadata. This is a recently introduced feature in Manila, but the workflow might already be known to Manila consumers: it works in a similar way to Manila share metadata, and administrators are able to update snapshots with metadata keys and values, delete the metadata, and filter the snapshots that contain specific metadata. And we have also improved scalability with the CephFS NFS driver. The CephFS NFS driver received a couple of updates in order to enhance scalability. The NFS cluster protocol helper has been added to allow users to consume and export CephFS shares over a clustered NFS gateway. This presents many advantages, since the operator no longer needs to maintain their own instances of NFS Ganesha apart from the Ceph cluster.
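To make the version pinning just described concrete, here is what a clouds.yaml entry might look like. This is a sketch: the exact config key name is an assumption based on the OpenStack SDK's cloud config conventions, and the endpoint and credentials are placeholders, so verify against your SDK release before relying on it.

```yaml
# clouds.yaml (sketch): pinning the Shared File Systems API
# microversion for the OpenStack client, as described above.
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com/v3   # placeholder endpoint
      username: demo
      password: secret
      project_name: demo
    # Assumed SDK config key; check your openstacksdk documentation.
    shared_file_system_api_version: "2.69"
```

Alternatively, the same pin can come from the shell environment (`export OS_SHARE_API_VERSION=2.69`) or from a per-command CLI override, as mentioned in the talk.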
For this, we now communicate with the Ceph manager using the NFS module. For new driver additions, we have a new driver that was added to Manila, and operators are now able to use their MacroSAN storage systems through the recently added MacroSAN driver in Manila. If you want to check the availability of features, please check the support matrix that's linked on the slides. Over the Zed cycle, we have also seen some replication enhancements for the NetApp driver: the NetApp ONTAP driver for Manila is now doing more accurate checks to determine whether a replica is in sync with its parent share or not. Please check the release notes for more information on this. Another big update for Manila was that the RBAC defaults of the Shared File Systems as a Service (Manila) APIs have been updated to remove system scope personas. This has been done in concert with other OpenStack services, and in reaction to operator feedback that the use of system scope introduces backward incompatibilities in existing workflows. The new defaults support the use of scopes; however, no RBAC rule by default includes the system scope. At this time, we do not recommend the use of system scope personas to interact with the Shared File Systems as a Service API, because it's largely untested. And last but not least on the Zed cycle highlights, we have managed to tackle a bunch of bugs, and there are also some other upgrade bits. You can check all of those updates in the release notes that are linked on this slide. And for 2023.1, also known as Antelope, there are some maintenance updates for us. The first one would be the OpenStack client updates, which is getting the version auto-negotiation working so we can add the deprecation warning to our native client. Also covering features in the Manila UI to get closer to feature parity, since the Manila UI is falling a bit behind the features we can currently provide to consumers.
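The microversion auto-negotiation that Carlos describes as the missing piece boils down to intersecting the client's and server's supported version ranges and picking the highest version in the overlap. This is a minimal, self-contained sketch of that idea; the function names are illustrative, not the real manilaclient or SDK internals.

```python
# Sketch of client-side API microversion negotiation (illustrative
# names, not the actual OpenStack client implementation).

def parse(ver):
    """Parse a "major.minor" microversion string into a comparable tuple."""
    major, minor = ver.split(".")
    return int(major), int(minor)

def negotiate(client_min, client_max, server_min, server_max):
    """Pick the highest microversion both sides support, or None."""
    lo = max(parse(client_min), parse(server_min))
    hi = min(parse(client_max), parse(server_max))
    if lo > hi:
        return None  # no overlap: the user must pin a version manually
    return "%d.%d" % hi

# A new client talking to an older Shared File Systems API service:
print(negotiate("2.0", "2.69", "2.0", "2.42"))  # prints "2.42"
```

When `negotiate` returns None, the client has to fall back to an explicit pin, which is exactly the clouds.yaml / environment-variable workaround described for Zed.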
Adding more coverage to the OpenStack SDK, continuing the migration we have been doing from rootwrap to privsep, and getting more Tempest test coverage for RBAC, since we currently have some testing and a job for the new scope enforcement, but we want to get more coverage for the new policy defaults. If you'd like to hear more about those plans, or even bring up your own, we'll be glad to have you in our PTG sessions. We will also have one operator hour, so feel free to join us, and if you want to add your topics to the PTG planning, the Etherpad is linked. So yeah, that's the updates for Manila. Thank you for having me. Thank you Carlos, yes. It's awesome to see all of the hardware enablement that's happening in Manila, adding new drivers and so on. Speaking of hardware enablement, next up we actually have the Ironic project and the newly elected PTL, Jay Faulkner. Hey, thank you so much, Kendall. And that sounds exciting for Manila; I look forward to seeing that in the world. So first of all, these updates are usually celebratory things, because we got something done, but I have the sad news to share that we're gonna be honoring Ilya Etingof with this OpenStack release of Zed. He was a long-time Ironic contributor, and we've dedicated the release to his memory, to be specific, with Ironic. Ilya put a lot of hard work into making sushy happen, which is the library that Ironic provides for communication with the Redfish hardware standard. If you're using any of those drivers, you owe a debt of thanks to Ilya, and that's why we are gonna honor him with this Zed release. And so, with the sad stuff out of the way, let's talk a little bit about what we've done during the release. First of all, just a few numbers: this is the 18th release of OpenStack containing Ironic. We were first incubated in Icehouse and we're all the way through to Zed.
Ironic had 43 different contributors across all our projects. One thing I wanted to take the opportunity to plug here is our intermediate-cycle bugfix releases. We push a release every two months in the cycle, even though most OpenStack projects don't, because we do have some users who like using Ironic standalone, such as our integration with Metal3. So this is sort of a fun thing: if you're someone who's using, or thinking about using, Ironic standalone, you can also check out one of our bugfix releases and become one of the over 6,000 people who've consumed them since the Zed cycle started. But let's talk about what we've done. The first thing, and this is not an endpoint but a pretty far milestone in a journey we've been making for a while, is that we have support for self-service bare metal as a service. What that means is that Ironic traditionally was an administrative API, used by administrators and by Nova to provision servers, and we're slowly moving it toward being an API that can be exposed to more people and more projects, with some multi-tenant awareness. Along that way, we added the project-scoped manager role to our RBAC model; project-scoped admins and managers can create and delete nodes, which can be tagged with ownership by their project. And when you provision a node as a member, Ironic can be configured to automatically mark that node as leased by that project. These are all important things for reflecting the state of who has what checked out in a multi-tenant Ironic world. This is really exciting, and I can't wait to see what operators are able to do with it in the real world. The next thing on the slide, and I'll say that this is sort of a constant for Ironic, is the work that is never done, which we do in the background to make hardware work seamlessly when provisioning. In this case, a few highlights that I found specifically: we've got some more support for SNMP-controlled PDUs.
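As a sketch of what the self-service model above can look like operationally: Ironic nodes carry owner and lessee fields, and the automatic-lessee behavior Jay describes is a conductor config switch. The option name below is my recollection of the Zed feature, so double-check it against the Zed release notes before using it.

```ini
# ironic.conf (sketch): record which project "checked out" a node
# when a project-scoped member provisions it.
[conductor]
# Assumed option name from the Zed cycle; verify in the release notes.
automatic_lessee = true
```

Node ownership itself is set per node, for example with `openstack baremetal node set <node> --owner <project-id>`, so that project-scoped admins and managers only see and manage their own hardware.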
These are particularly good for development, but we do have some people who use those in production for power control. We've got some security enhancements, mainly certificate validation work and SSL for some of our BMC back ends. Some of that security work also includes making it so that if you change the password on a Redfish BMC, your Ironic connection is now able to figure that out and keep access going. So this is pretty exciting. I just pulled out a few things here, but we're always making the hardware support better, and Ironic really appreciates our hardware partners who help test these features in CI with real hardware and help us develop them. And this is another area where we're constantly working: operator quality of life is always top of mind for Ironic developers. There are a few things here we've done to try to make your life better if you're running OpenStack Ironic. Take our configuration where you can set specific kernel command line options based on your environment: you can now set those per node, and you can actually template in the default values there. So if you've ever had to add a string to make a Linux installer boot, or one of our ramdisk agents boot, you'll see that this is gonna make that sort of tweaking a lot easier. We added a new denial-of-service prevention mechanism, which can be configured to limit concurrent deployments and cleanings, to prevent a disgruntled user, an admin with access, or a malicious user from chewing through all your hardware really quickly. That comes out of the box configured with very high limits. If this is a feature you're interested in, I strongly recommend you go read the release notes and make sure you tune it to a value that makes sense for your environment. Otherwise, the limits are high enough that it probably is not gonna impact your day-to-day use.
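The concurrency limits mentioned above are plain configuration options. A sketch follows, with option names as I recall them from the Zed notes; treat both the names and the values as assumptions and verify them (the shipped defaults are intentionally high, as Jay says).

```ini
# ironic.conf (sketch): cap how many nodes may be deploying or
# cleaning at once, to blunt a runaway or malicious user.
[conductor]
# Assumed option names from the Zed cycle; check the release notes.
max_concurrent_deploy = 50
max_concurrent_clean = 20
```

Values this low would suit a small environment; the point of the feature is that a single user can no longer push the whole fleet into deploy or clean at the same time.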
Another one is our kickstart driver, which allows you, instead of deploying an image, to deploy with a kickstart installer. It's been greatly improved: we're now testing parts of it in CI, and we support deploying directly from a repository instead of having to use an image. So that's pretty exciting as well. And the final thing we're calling out here, and there are a lot of enhancements around this area, I just pulled one out: Ironic cleaning can now be configured to skip devices. This means if you have a persistent bare metal machine that has a lot of data on it, and you don't want that data to be erased, but you wanna make sure you can clear off the operating system and prepare it for the next install, we support that. You can see that we're trying to reach for both sides of the spectrum: the people who need that enhanced support for a single tenant, and the people who need multi-tenant support. We're trying to make it work for everyone. The last thing I'm gonna talk about in terms of highlights is that we've made some improvements for those standalone users I spoke of before. Ironic now supports controlling a dnsmasq DHCP server directly. Before this, even if you were using Ironic standalone, you always needed to deploy Neutron or a DHCP server of your own; now Ironic's happy to manage that if you're running standalone. As I mentioned before, we did release two bugfix releases at the two and four month marks of the Zed cycle, giving standalone users early access to some of the features we've talked about. And we've also taken some time with those kickstart improvements to document how to use them if you're a standalone user. So that's pretty exciting if you're someone who's interested in the hardware provisioning side but is not looking for a whole cloud stack. So finally, we have to talk about the things that we removed and deprecated.
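For the standalone dnsmasq feature just described, the switch is again configuration. The section and value below are an assumption based on how Ironic names its DHCP providers; confirm against the Ironic standalone documentation.

```ini
# ironic.conf (sketch): let standalone Ironic manage DHCP itself via
# dnsmasq instead of requiring a Neutron deployment.
[dhcp]
# Assumed provider name from the Zed feature; verify in the docs.
dhcp_provider = dnsmasq
```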
Sometimes you've got to prune the tree back a little bit to make sure that we're able to support things long term, and I'm just gonna reference a few of these. We had some things that were deprecated, and some things that had been deprecated that have now been removed, so when you're upgrading your Ironic installation, think about this. We've deprecated support for Syslinux-based bootloaders; that includes ISOLINUX and PXELINUX. These are basically becoming unsupported upstream: they haven't been touched since 2019 and they only support BIOS-style booting. What this also means is that some of our support for legacy-style BIOS booting is gonna have to go away with those tools going out of support. And that means that if you want to use legacy-style BIOS booting with virtual media, you're not gonna be able to do that for very long, as we've deprecated that in Zed. Some things that we previously deprecated that we've now removed: we no longer support instance network booting. To be clear about what that is, Ironic used to support a feature where, after you deploy an instance, you're able to continue PXE booting into the operating system you've deployed onto that node. That feature's been removed, and now, when you provision machines with Ironic, they're always gonna be configured for local boot. This was already our recommended and default configuration, so unless you specifically ever turned on or relied on this feature, it's not something you have to worry about. One of the big reasons that feature existed was actually support for trusted boot, but with the removal of instance network booting, we are also removing trusted boot. And again, this isn't to be confused with things like UEFI secure boot or similar technologies; this is a very specific trusted boot implementation. I'll note that some of the folks here have slides on their topics for the PTG or what's coming up next.
With Ironic we haven't quite nailed that down yet, but I expect next cycle we'll continue to focus heavily on operator quality of life, and particularly on making sure our failure scenarios around conductors are improved greatly. But thank you very much, and I hope to see you around if you've got any questions. Thank you so much for all of that information. It's really sad; OpenStack will feel the loss of Ilya, so dedicating the release to him I think was a good gesture. We appreciate all of the effort and work he put in, and we'll miss him for sure. So our next topic, the next awesome OpenStack service, is Nova, and we have Sylvain, the PTL, here to talk about that. Hey, thanks Kendall. So very briefly, thanks for joining us. Next slide, please. Before discussing the features that we implemented for this specific cycle, a few numbers. I guess you'll be interested in knowing how many blueprints we merged. As you can see, we merged six blueprints out of 14 that were accepted. To be clear, the other ones either didn't have new changes from the owners, or we also had some problems with them; basically it was not a review issue, it was more that some changes needed some more time. About the bugs, I also looked at Gerrit for how many bug fixes we merged during this cycle: as you can see, 45. And I also looked at Gerrit for the previous release, which was Yoga, and as you can see, we merged more bug fixes this time, which is actually very good, I think. Also, maybe you don't know, but every week someone from the Nova community tries to look at the new bug reports that are created. At the first week of Zed, we had 28 untriaged bug reports, and thanks to all the folks who are looking at them every week, at the end of the cycle we only had five of them. That doesn't mean that we merged only a few bug fixes; it just means that when you create a bug report against Nova, you can be pretty sure that every week we look at it.
And either we say to you, actually, you need to provide more details, or we say, okay, it's valid. So at least you know whether what you asked of Nova was okay or not. If you find a problem with Nova, just file a bug report and we will look at it. Thanks. As you can see, even if some contributors haven't provided implementations for Zed, we have the same number of contributors, and as I wrote, thanks Nova folks, because thanks to you the metrics were very good for Zed. Next slide, please. So, what we did for Zed. As you can see, one of the features was Nova supporting new virtual IOMMU devices. What you can do now, when you create an instance, is ask via a flavor or an image to have a virtual IOMMU device, and the guest will have it. Also, Windows instances could have better behavior, because now we use new Hyper-V enlightenments. Maybe you already know that in Yoga it became possible to create an instance that was using a vDPA port. Now with Zed, it's possible to live migrate that specific instance, which will hot-unplug the vDPA port, or to suspend the instance, which was not possible before, or to attach or detach the vDPA port. We also merged something which is nice: previously, it was not possible to rebuild a BFV instance. What we call a BFV instance is basically an instance having a volume attached as its root disk. Now you can rebuild it, and Nova will ask Cinder to re-image the volume. You can also now unshelve an instance by passing a new parameter, which is basically a target host. And eventually, it's no longer possible to generate a keypair, because we had problems with some OSes, so we preferred to deprecate and stop supporting keypair generation; you can only import a public key now. Next slide, please.
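A couple of the Zed features above map to simple CLI operations. This is a hedged sketch against a live cloud, so it is not runnable here; the `hw:viommu_model` extra spec and the `--host` unshelve option are my recollection of the Zed interfaces, and the flavor, server, and host names are placeholders, so verify everything against the Nova documentation.

```shell
# Hypothetical flavor asking Nova for a virtual IOMMU device (Zed):
openstack flavor set my-flavor --property hw:viommu_model=auto

# Unshelving a shelved instance onto a specific target host, the new
# parameter mentioned above (requires a recent compute API microversion):
openstack --os-compute-api-version 2.91 server unshelve --host cmp-03 my-server
```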
So we will be discussing, at the PTG which is in two weeks, a few topics. I wrote down a few of them; we will also be discussing other topics that are not here yet. At least what I would like to discuss is some sustainability efforts that we could work on this cycle: for example, whether it would be possible to modify a CPU state, like putting a core offline, and whether it would be possible to integrate Scaphandre. Maybe you saw the Scaphandre project at the Berlin Summit that we had six months ago; it provides power metrics for the host and for the instances. But that's not the only topic we'll be discussing. We really want to help Ironic with the problems they have when their compute services are going down. We also want to continue to work on the next steps of the upstream TC effort about secure RBAC; hopefully we could be starting phase three, but we don't know yet, and we still need to discuss what the next steps will be. Also, just to be clear, since Yoga, maybe you haven't seen it, but we now have PCI devices in inventories in the Placement API. For Antelope, what we'd like is to have the scheduler asking Placement to schedule instances that ask for PCI devices. And we'll also want to discuss a few network-related features, like FQDNs in metadata, mutable MTU for guests, and so on. Also, the PTG is not only about features; the PTG is also for discussing other stuff, as you see. As a PTL, I really want to see how we could help contributors to join us, because we know that Nova is a huge project, but if we can help contributors ramp up, it would definitely be nice. Last point: this is the first PTG that will have some specific time slots with operators, so let me discuss that. Next slide, please. As you see, we have sessions between Tuesday and Friday, two weeks from now.
But if you're an operator and you really want to discuss with the Nova community, you are more than welcome to attend our sessions, because we have two of those, one on Tuesday and one on Wednesday. You can find the logistics in the Etherpad, or you can also look at the PTG schedule. But by the way, Kendall, we discussed the PTG at the end of the presentation. So thanks, folks. If you have questions, just ping me. Yeah, awesome. Thank you so much, Sylvain. We will have time for questions at the end, so if you have any, drop them in your chats. Otherwise, all of the project representatives have tried to include their IRC nicknames, if they have those, for contact, and there's always the public mailing list as well. So we look forward to your questions. Thank you. Next up, we have the Neutron project, and Lajos to tell us about the Zed improvements that happened in Neutron. Yeah, thank you very much, Kendall. So hello, everybody. I'm Lajos Katona. I was the PTL of Neutron in the last two cycles. First of all, I would like to use this opportunity to say thank you to everybody who participated in this release, and of course the last few dozen. We really encourage everybody to participate in our weekly meeting on IRC, or just come and ask on the Neutron channel, or write a mail to the mailing list; we try to answer every question. Or just file a bug, or jump into the PTG, of course. As Sylvain and the guys previously also mentioned, the PTG is a really good opportunity to meet with the team and discuss any questions which you have. Yeah, so let's go and see what interesting things happened in this cycle. I would like to first highlight that we again have a Neutron stadium project back from retirement.
This time it's Neutron FWaaS, the firewall as a service project: a team jumped in and they are willing to maintain it, and they actually would like to make it compatible with OVN. So it's really good news that we have life in these old projects and that they are still useful for users. After that, I would like to highlight two interesting pieces of news from OVN. First, OVN now supports the minimum bandwidth QoS rule and placement allocation for it. So it's now not just SR-IOV and OVS: you can use placement allocation for QoS ports with a QoS policy with OVN too. And from Zed, OVN supports bare metal provisioning with OVN's built-in DHCP server for IPv4, so that's another possibility for your deployments and your users. Some L3 news: we have NDP proxy for L3 routers in Neutron. That's an interesting IPv6 feature, and it can be useful as a replacement for the IPv6 prefix delegation feature in Neutron. Sadly, IPv6 prefix delegation is not maintained, and we actually marked it deprecated in the Zed cycle because of the lack of testing and the lack of a maintained backend for it. So please check NDP proxy if you need an IPv6 feature like that. From Zed, you can use port ranges for the floating IP port forwarding API, which makes your life much easier if you would like to have port forwarding for your floating IPs. For QoS, we have a new rule type, packets per second; that's again a kind of trend, so we now have a new QoS rule type in every cycle. This one, packets per second, is for OVS, so if you use the OVS driver, you can check out this new rule type. Yeah, I would like to mention here a few things which are not really features, but we worked on them anyway and they will be useful in the future. It seems that we finished the SQLAlchemy 2.0 adoption, not just in Neutron but in the stadium projects also, so I hope that will make our projects more future-proof.
And actually, I just would like to mention some sad news here, which perhaps will not really be surprising for you: the Linux Bridge driver is not maintained. We decided at the last PTG to create a new flag in the config, and we mark unsupported, unmaintained features in Neutron with this experimental flag, which is actually a config option. So if you would like to use the Linux Bridge driver, you have to explicitly say in your config: hey, I know that this is experimental, but I want to use it. The reality is that we have no maintainers for it: there was no contributor for the Linux Bridge driver in the last cycles, the CI is lacking resources, and so there is actually nobody to fix Linux Bridge bugs. So it's a good opportunity: if you would like to use this Linux Bridge driver, please come, and we are open to helping you with experience and advice and even resources, but come and try to maintain it if you really need this driver. Yeah, as the show started with data, I also have a few numbers, but I'll try to be short with this: just a few things from the life of the Neutron team. We discussed a little more than 13 RFEs; RFE stands for request for enhancement, which is a lightweight feature proposal. For some, we even requested the author to push a specification if the feature was something more difficult. So if you have anything in mind and it seems much more difficult than a bug, please file an RFE on Launchpad; it's basically a bug with the tag RFE, and we discuss it during the drivers meeting. We have a drivers meeting every week where we discuss these proposals. We reviewed and accepted five specifications, and we have more under review for the coming cycles. We also worked on the secure RBAC thing in this cycle, adopting the defaults based on user feedback, as was also mentioned by Sylvain and Carlos; so it was a job for us too. And we have the never-ending story of CI rationalization and improvement.
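The opt-in that Lajos describes for the unmaintained Linux Bridge driver is a config-side acknowledgement. The section and option names below follow the experimental-flag mechanism he mentions, but they are an assumption on my part; verify them in the Neutron configuration reference for your release.

```ini
# neutron.conf (sketch): explicitly opting in to the experimental,
# unmaintained Linux Bridge driver, as described above.
[experimental]
# Assumed option name; check the Neutron docs before relying on it.
linuxbridge = true
```

Without this acknowledgement, the expectation is that Neutron will refuse to start with the Linux Bridge mechanism driver configured.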
So that's just the background work we are doing when we are not reviewing your features. Yeah, perhaps a few words on what we are planning for the next cycle. As you heard, we will have the PTG in two weeks. Please come; you can find the Etherpad of Neutron if you visit the OpenInfra PTG site. Please check the Etherpad and add any lines if you would like to discuss something, or just jump in and we can discuss your questions. Actually, we have a few interesting topics already which can be useful for the next cycles. It seems that Neutron will work on the quota class implementation, which is already done, more or less, in Nova and Cinder, so we will build on their experience to have this feature in Neutron too. There will be discussions on DNS improvements; that's been an interesting topic over the last cycles, and it's useful for everybody: not just for big deployments, but for small deployments and edge sites also. And actually we will discuss, or we plan to work on, the migration from python-neutronclient. We deprecated the CLI part of python-neutronclient and we use the python OpenStack client, but the Python bindings are still there in python-neutronclient, and our plan is to move away from that totally, deprecate it also in the future, and use the OpenStack SDK instead. A lot of projects still use the bindings from python-neutronclient, like Heat, Horizon, I think Nova also, and perhaps others, and we would like to start, at least in this cycle, to move these projects over and make them use the OpenStack SDK. Yeah, that's it from me. Thanks for your attention. Lots of work to be done, but lots of things got accomplished too. It's awesome to see everything that Neutron did during Zed and will continue doing during Antelope. Thank you so much for joining us today. Bye. So next up we have the Skyline project, which is actually a newer service in OpenStack. It's a modern dashboard that we're working on getting in place to eventually replace Horizon.
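To make the neutronclient-to-SDK migration concrete, here is what the change of bindings looks like for a simple network listing. It needs a configured cloud (clouds.yaml) to actually run, so treat it as a shape sketch rather than a runnable sample; `mycloud` is a placeholder cloud name.

```python
# Old style: python-neutronclient bindings (being phased out):
#   from neutronclient.v2_0 import client
#   neutron = client.Client(...)
#   neutron.list_networks()

# New style: the same call through the OpenStack SDK.
import openstack

conn = openstack.connect(cloud="mycloud")  # reads clouds.yaml
for net in conn.network.networks():
    print(net.id, net.name)
```

The SDK proxy layer (`conn.network`, `conn.compute`, and so on) is what projects like Heat and Horizon would move to, which is why retiring the old bindings is a cross-project effort.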
But at this point it's listed as an emerging technology by the Technical Committee, so it's not quite ready to use in production yet, but it is already full of features and ready for testing, feedback, and more. I'll let Wen Sheng dive into everything that they accomplished in Zed. Okay, could you go back to the last page? Okay. I'm from China; my name is Wen Sheng, and thank you all. Now Skyline has its first official version released in Zed, which is very exciting. At the very beginning, Skyline was very, very strange to OpenStack: it uses asyncio, uses sub-modules, uses multiple modules in one repo, uses Poetry rather than pip, and it also didn't use the Oslo libraries. It's hard to say whether that's good or bad; it's just different, and different means not simple. So we reflected and reflected, and finally it looks like other OpenStack modules. Okay, so it grew up from a young nine-colored deer to an adult one. Next page. Okay, thanks. This is the new logo; thanks for the design. In Zed, we support OpenStack Train and later. The required OpenStack modules for integration are Keystone, Nova, Neutron, and Glance; without these, you cannot use Skyline. It also supports optional OpenStack modules like Cinder, Octavia, Manila, Ironic, Heat, Zun, and Magnum, which you can enable or disable with the configuration. We also support integration with the Prometheus API, we support OpenID SSO logins, and we support other system configuration through the APIs for Skyline. Skyline is separated into two parts, a console and an API server. The console is written in React, simple React without Node.js at runtime; it's very simple. And the API server is built with FastAPI. We removed the adapter layers that Horizon has, to make it more simple and more lightweight. Okay, next page, thanks. And for the plans: thanks very much to our developers and all the users; we especially thank the contributors who contributed the Zun and Magnum modules. In the next release, we will work on the Kolla-Ansible integration.
Maybe we'll also have a Helm chart; maybe, maybe not. And we want to do some refactoring of the Skyline console to make it easier to develop support for other OpenStack modules, like Sahara or others. We will also do some stress testing to compare with Horizon. Horizon is very, very successful, and Skyline is still moving forward. And I think the more users and the more contributors we have, the better Skyline can become. So welcome to all contributors and welcome to all users; every comment is a help to us. Thank you all, that's all from me. Thank you. Thank you so much. Yeah, it's really exciting to see this new dashboard, and we're excited for when it gets moved out of the emerging technology phase; it'll be really, really exciting. So if you have any feedback after you go and play around with the Skyline dashboard, please share it with the team. I'm sure they would love all the feedback that they can get. Awesome. So next up today we have the Venus project, which is kind of your one-stop shop for logging. It's also a new service in OpenStack, but it has been fully released, and it was one of the projects that we added to the OpenStack map back at the very beginning of our show today, if you remember that. So I will let Liye take it away for Venus and Zed. Thank you, and hello everyone. I'm Pang Liye, and I'm from Inspur, China. It's a pleasure to introduce the Venus project. In order to retrieve, store, and analyze logs on the OpenStack platform, we developed the OpenStack log management module Venus, which provides a one-stop solution for log collection, cleaning, indexing, analysis, and alarms, along with report generation and other requirements, to help operators and maintainers quickly retrieve and locate problems. Zed is the first release of Venus. The highlights of Venus in Zed are as follows. It supports deployment with DevStack and Kolla-Ansible, and when it is deployed, it can be accessed and used from the Horizon project.
We added a Horizon display plugin for Venus, named venus-dashboard; you can choose whether to deploy it or not. It supports multi-dimensional retrieval of the logs of OpenStack components, such as by host, service, tenant, request ID, and so on. We added some operations and maintenance pages, such as one for the rotation time of the Elasticsearch data; the background tasks will be executed automatically according to the settings configured on the page. We added the Venus API documents to the OpenStack documentation so that users can better integrate with the Venus project. We have also developed some other features, such as a page that shows statistics for some typical errors, like MariaDB connection errors, RabbitMQ connection errors, and others. Next page: the highlights for Antelope. In Antelope, we will focus more on error log query and analysis and scenario-based log analysis: extracting the error log pattern for each module and forming them into templates, so that a prompt can be shown when these errors occur. We will also work on log anomaly detection: rule-based algorithms, such as regular expressions, keyword matching, and Aho-Corasick automata, will be used to automatically query the cloud platform's error logs. That's all. Welcome to Venus, and I'm looking forward to your contributions to the project. My spoken English is poor, so if you have any question about Venus, please email me at any time. That's all, thank you. Wow, yeah, that was a lot of information. Lots of things happening in Venus, and obviously everybody is going to be moving on to the next release, Antelope, so there's a lot of work still to be done. Thank you so much, Liye, for sharing all these updates today. So unfortunately, our next presenter wasn't able to be here today, so I'll just give a little bit of an overview, and we will be sharing these slides after the show as well, so any links that people provided in their slides, you'll be able to access, and you can look over everything we covered here today.
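The rule-based matching the Venus team described (regular expressions, keywords, Aho-Corasick automata) can be illustrated with a toy classifier. The category names and patterns below are made up for illustration; they are not Venus's actual rules.

```python
import re

# Hypothetical rules mapping an error category to a regex. Venus's real rules
# for MariaDB/RabbitMQ connection errors will differ; these are examples only.
ERROR_RULES = [
    ("mariadb_connection", re.compile(r"(?i)can't connect to mysql|mariadb.*connection")),
    ("rabbitmq_connection", re.compile(r"(?i)amqp.*connection|rabbitmq.*unreachable")),
]

def classify(line: str) -> str:
    """Return the first matching error category for a log line."""
    for category, pattern in ERROR_RULES:
        if pattern.search(line):
            return category
    return "unmatched"

print(classify("ERROR oslo.db: Can't connect to MySQL server on 'db:3306'"))
# → mariadb_connection
```

For large keyword sets, an Aho-Corasick automaton does what this loop of regexes does but in a single pass over each log line, which is why it suits bulk log scanning.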
So the Interoperability Working Group sets a base set of requirements, capabilities, code, and tests that have to pass for a cloud to be OpenStack-compatible. This is something that a lot of our vendor companies are really interested in. But the number one ask from the group is that they need help, because this is like an add-on to the regular OpenStack services, kind of a different sort of group of people, and there are a lot of things that need to be maintained. So if you're interested in getting involved, please contact Martin Kopec. He wasn't able to be here today, but we appreciate everything he put together for us, and like I said before, the slides will be available so you'll be able to read through everything we just quickly tapped through. But as just about everybody mentioned here today, the Project Teams Gathering is happening in less than two weeks now. It's crazy. It's a virtual event and it's free to attend. I really want to encourage all operators watching this stream to attend, particularly those specific operator sessions that Sylvain mentioned during his Nova update. For anybody involved in any project, or in OpenStack in general, there are definitely going to be things that we would love your feedback on, and you can influence the future of OpenStack. So come and represent your use case, give feedback, and let's forge ahead into Antelope; we look forward to seeing you all there. So I think we're just about out of time today. I don't know that we got a whole lot of questions, but everybody is available offline on the project mailing lists; otherwise, all of the PTLs are available. Please go check out the Zed release. We all worked so hard on it and we're very proud of what we've accomplished. If you have an idea for a future episode of Open Infra Live, we definitely want to hear from you. Submit your ideas at ideas.openinfra.live, and maybe we'll see you here on a future show.
Thanks again to today's guests and we'll see you all in the next Open Infra Live. Thank you everyone.