Hello, everyone. Thanks for joining us today and welcome to OpenInfra Live. OpenInfra Live is an interactive show sharing production case studies, open source demos, industry conversations, and the latest updates from the global open infrastructure community, which is our focus here today. This show is made possible by our valued OpenInfra Members, so we thank them for all of their support. My name is Kendall Nelson and I will be your host for today's show. We're streaming live on YouTube and LinkedIn, which gives us the opportunity to answer your questions throughout the show. So please feel free to drop whatever questions or comments you have into whichever platform you happen to be watching us on today and we'll answer as many as we can. Now, as you hopefully know, a couple of weeks ago we held yet another successful Project Teams Gathering, and we thought about doing something to keep folks in the loop who weren't able to join. So without further ado, let's jump right in. First up, we have Steve Geary here to represent StarlingX and what they discussed at the PTG. Welcome, Steve. Thanks, Kendall. Happy to be here. I've been a member of the StarlingX community for about three years and I'm really happy to be here to give an update on what we discussed at the PTG. I know I personally don't always take the time to understand what other projects are all about, so I wanted to spend a few moments to frame up StarlingX for you. If you could go to the next slide, there we are. Now we're synced up. The project's objective is to maintain a deployment-ready, scalable, and highly reliable distributed edge software infrastructure platform that's easy to deploy, has low-touch manageability, rapid response to events, and fast recovery for bare metal, VM, and container workloads. That is a long title, but it does sum up that we are a software platform targeted at distributed and edge environments. Telco, or telecommunications, is the beachhead use case that we like to think has hardened StarlingX for other potential use cases, and some of those new use case possibilities are captured here on the slide. So in the limited time I have on the next slide, I'll only be able to touch on some highlights of the PTG. I'll focus quickly on some use case discussion, release and project updates, and finally wrap up with a summary of the security initiative discussion. Looking at the use cases, Optum and Wind River both have commercial offerings of StarlingX, and both shared insights with the rest of the community into some of the commercially deployed and under-development use cases that they're familiar with. And one member, Maju Rupani, who recently joined the community at the Vancouver Summit, is exploring controlling forest fires as a potential StarlingX use case. So great ideas come out of communities, and those of us who think we know all the answers continue to be impressed with the new ideas the community keeps coming up with. So it's pretty interesting to hear. Maju's notion is to use StarlingX to deploy and manage distributed remote servers and sensors in forests to capture data and provide early fire detection. Edge sensors would be set up in a grid with 10-kilometer spacing, form a long-range wireless network, and transmit back to servers in the StarlingX cloud. So look for a StarlingX blog post from Maju with more information and potentially a proposal soon.
So looking at some of the highlights: historically, community infrastructure was hosted by CENGN, Canada's Centre of Excellence in Next Generation Networks. CENGN was unable to continue hosting, and Wind River stepped up and volunteered to host the community infrastructure. The existing infrastructure and services were successfully moved and are serving the community, thanks to significant initial work and ongoing support from multiple project teams. But work is now underway in the new hosting environment to enable more graceful and diverse participation in build, test, and other community contributions. OpenStack services were updated to OpenStack Antelope in release 9.0, and changes were made in the StarlingX build infrastructure for both the StarlingX platform and OpenStack manifests that simplify this and future StarlingX OpenStack service up-versioning. I want to give special recognition to the documentation team for outstanding work and project contributions, highlighted on the next slide. The team made great strides in improving accuracy, usefulness, and usability for a broad and hopefully growing audience. They shared a lot of their contributions and updates during the PTG, and they improved the documentation for a better StarlingX user experience. They focused on improvements to CLI instructions, establishing and using basic terminology, and tab usage for installation scenarios. The contributor's guide was another area of focus, and it was updated with details on commonly used contribution processes. Historically, contributing to StarlingX has been challenging for new users; those starting out frequently defaulted to sending a post to the email lists, where the community helped guide them to a reasonable starting point. Having a well-maintained contributor's guide is a significant achievement for the entire project. How do I install StarlingX is a frequently heard question, and the inclusion of community-created installation videos is another fantastic improvement from the documentation team. And I'd be remiss if I didn't take the opportunity to acknowledge the team's effort to improve the quality of their own project content: the documentation team fixed some 60 documentation defects as well. All right, so now if we take a look at release and project status, the remaining three milestone dates for the StarlingX 9.0 release were changed. Milestone three, the feature freeze point, is forecasted for January 2024. The RC1 milestone, when final test begins and code changes are limited to defect fixes, is now forecasted for February 2024, and the final release milestone, when binaries, build artifacts, and documentation are posted, is now forecasted for March 2024. Three features, container infrastructure, hardware support, and distributed cloud, dropped content from the 9.0 release; I want to draw your attention to that. Container infrastructure dropped the Armada removal, Kubernetes control plane scalability work, and Kata container support for Debian. Hardware support dropped Marvell's Octeon NIC accelerator, and distributed cloud dropped long-latency support between the system controller and subclouds, and subcloud install and restore to the previous release. All remaining 9.0 content is either complete, and there are some 17 items there, or still in progress and forecasted to be in the release, and there are some nine items there. All right, so that's the update for the 9.0 release.
And if we look a little bit to the future, let's take a look at some highlights of future work being discussed and planned. Work includes improving build performance and eliminating the need for monolithic builds; Kubernetes upgrade optimizations for multi-node, vertical pod scaling, and implementing a more traditional HA control plane for scalability; adding kernel support for Arm and updating to new kernel version 6.6; enhancing the upgrade and backup-and-restore experiences and simplifying management methods and installing services; OpenStack service up-versioning to OpenStack Bobcat; in-place, non-disruptive N+1 application up-versioning; continuing PTP improvements by the networking team; and certificate management improvements, signature validation support, and encryption for application and platform services. And finally, just a quick look at the security initiative discussion. The discussion explored how to advance any existing security-related initiatives, or new proposals, I should say. The group identified both host- and container-related candidates to focus on for potential collaborations. However, no clear critical mass of interest in starting points was identified during the PTG, so more discussion is needed and expected. Of course, new ideas, collaborators, and contributors are always welcome and appreciated. Thank you for the opportunity to share the StarlingX PTG summary with you. Thank you for coming. We really appreciate StarlingX being represented, and I know it's your first time here, so welcome; we hope to see you back again. Thank you. Yeah, it doesn't look like we have any questions about StarlingX at this point, but if anybody in the audience does have a question, feel free to drop it in the comment section of wherever you're watching our stream and we can circle back to it towards the end of the episode. So thank you, Steve. Thank you. And next up, we will be hearing about the Technical Committee discussions from Jay Faulkner. Welcome, Jay. Yeah, so I'm Jay Faulkner. I'm the chair of the OpenStack Technical Committee for this cycle, and I just wanted to share a little bit of our discussion from the virtual PTG. We had a few overarching topics, and I've combined several hours of discussion into a couple of slides, so I'm not doing any of this justice. We keep full notes on all of this if you're interested. But the first topic was elections and leadership. We found that we have very low overall participation in elections out of the hundreds of people who've contributed to OpenStack and are eligible to vote for the Technical Committee or for their project leaders. Only a very small percentage of those people are voting, and we're trying to figure out if that's because they haven't joined the OpenInfra Foundation, which is a requirement, or because they haven't signed up to receive the ballot. But in general, if you're watching this and you've contributed code to OpenStack and you've not been voting in our elections, you have that right, and kind of a duty, to help direct the direction of the project. So if you think you should be able to vote in OpenStack elections and you've not been able to yet, please do reach out to someone.
You know, reach out to me personally if you need to and we'll help you figure out what the problem is. In a related way, there's been a consistent need for some projects to have leaders appointed, and in OpenStack we have literally over a hundred different deliverables that might be brought into a given release. So this is not necessarily a systemic problem, but when you have five or ten different small projects each release that might only have one or two contributors, and maybe they haven't elected a leader using our processes, that is a significant tax on the leadership structure, and maybe it's a red flag that there's not interest. So again, I would say if there are OpenStack projects that you're interested in, that you're invested in, that your environment is working on, take a look at how those projects are being run and invest a little bit of time in helping to keep them running smoothly. The other related thing is we talked a little bit about some shadowing programs that have been run by the Open Infrastructure Foundation in collaboration with one of our SIGs in OpenStack. And what we found is that there are a lot of mentors willing to do mentorship: Technical Committee members and PTLs who've offered to be shadowed by people who want to learn how to take on leadership responsibilities in OpenStack. But there have been very few people who've come forward with the time and the willingness to participate in that leadership. And that's something, again, that I would say: if you're invested in OpenStack, if you're a decision maker at a company that invests in OpenStack, make sure you're investing time in getting junior leadership elevated inside of the open source groups, so that we can get new blood into these leadership positions and ensure that the project will be healthy for years and years to come. So this also led into some chat about communication and Technical Committee workflows. There was a bit of a concern. You might see us talking about PTG or vPTG; well, the V is meaningful because it's virtual. It's indicating that we used to always do these in person. We would try to get all the contributors into a room together and we would talk about issues. If there was a concern that I, as an Ironic contributor, might have had with something in Nova, I could literally walk over to the next room, tap someone on the shoulder, and pull them into that conversation. And in a virtual environment, that sort of cross-project collaboration is kind of lessened just by the design of the venue. So we've talked a little bit about how to rekindle that sort of cross-project communication. And one of the hiccups we found with that is that synchronous video meetings are generally less accessible, both to people who might not speak English as a first language and to people who might be in a time zone that's not represented by a majority of OpenStack contributors, so it might get less favorable scheduling. So in general, those are just tough problems that you have to deal with in an international community. And the honest answer is that we don't necessarily have a good solution for them, but oftentimes acknowledging them is a good path toward making a solution, because if we're all aware that these issues exist, we can try to work around them as we go.
And the final one is something that I really just wanted to make sure we say here out loud. There was a misunderstanding; you know, we all use inexact language sometimes, like saying the Technical Committee had decided to do something in a meeting. But really, the Technical Committee's decisions are made the way governance in OpenStack is made: just like code changes are made with commits to code repos, Technical Committee policies and governance are committed via commits to governance repos. So this is a neat thing, if you've never had to interact with our governance before, that you may not know. We have these meetings, we have these in-person PTG sessions, but those are generally just to rapidly gain context and talk about things; official decisions are always made in code review in Gerrit. So if you want to stay up to date on things that might be changing in the Technical Committee, but you might not have time to go to the meetings or you might not want to keep up at that level, just track the reviews on the governance repository and the team config repository. These are actually going to dictate the shape of OpenStack and are going to reflect the governance, and so hopefully that might help guide a little bit as to where things go. So the other thing that I wanted to talk about a little bit, and this is not necessarily something that came out of the vPTG, is something we've been working on for a while that we're about to press the button on, so I wanted to use this opportunity to talk to the group about our new branch support policies. For a number of years, OpenStack has had the concept of extended maintenance. What that meant is that after a release, let's say Zed for example, Zed was maintained for 18 months. The project teams would backport bug fixes and security fixes to it and issue releases as needed, and that was considered maintained. Now, after that 18 months, the way it used to be is we would switch that over to extended maintenance, which was just a label in a document somewhere; the branch still remained stable/zed in that case. And essentially the branch stayed in that state with a best-effort sort of support for an indefinite period of time, which represented a bit of a problem, because it was difficult for operators or users to really know how well supported something was from the outside looking in. From that perspective, it's all just a stable/ branch, and extended maintenance on one project might not mean the same thing as it does on another project, because how far back that support goes might vary. So we wanted to change this; we wanted to make it more clear what's going on. And this is something that we expect to roll out likely sometime around the new year, within the next month or two: we're changing this older concept of extended maintenance to unmaintained branches. And so after that 18-month maintenance period is over, if it's a non-SLURP release, meaning it's not a release that we allow you to skip a release over when upgrading, then it's only going to be supported for those 18 months. After that 18 months is over, we're going to mark it as end of life and that branch goes away.
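To make the lifecycle easier to follow, here's a toy Python sketch of the rules as just described; the release names and SLURP flags below are purely illustrative, not a statement of which real releases are SLURP, and the unmaintained arm for SLURP releases is described next:

```python
MAINTAINED_MONTHS = 18  # the maintenance window described above


def branch_fate(release: str, is_slurp: bool) -> str:
    """Toy model of the proposed branch lifecycle; names are illustrative."""
    if not is_slurp:
        # Non-SLURP: maintained ~18 months, then EOL and the branch is deleted.
        return (f"stable/{release}: {MAINTAINED_MONTHS} months maintained, "
                f"then EOL (branch removed)")
    # SLURP: eligible for the opt-in unmaintained mode described next, with an
    # end-of-maintenance (EOM) tag when the stable branch is torn down.
    return (f"stable/{release}: {MAINTAINED_MONTHS} months maintained, then EOM "
            f"tag and unmaintained/{release} while volunteers keep reviewing")


# Illustrative calls only.
print(branch_fate("aaa", is_slurp=False))
print(branch_fate("bbb", is_slurp=True))
```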
If it's one of our SLURP releases, one of those every-other releases that you can skip a release over, then those are going to be eligible to go into our longer unmaintained branch mode. And what that allows is that projects, or users or operators of those projects, can opt in to having an unmaintained branch created for those branches. We're going to tear down the stable branch, put a tag on it to say EOM, to say it's end of maintenance, create that unmaintained branch, and then that unmaintained branch will stay open as long as there are volunteers who are willing to review and land changes for it. There are going to be no promises as to what support means; it's literally just that the branch is going to be there, open, and there will be someone there who is willing to merge patches. And we hope that by calling it an unmaintained branch, instead of extended maintenance with stable in the name and things like that, we better communicate what that means to you as our users. And the other big change that comes with this is that it used to be that backporting these extended maintenance changes was very much a project-by-project activity. Now, with part of this change, by default at least, we're going to have an OpenStack-wide group with the policies to land changes in these unmaintained branches. I'll note that in the short term, the good folks at Ericsson, among other people in our community, have volunteered to keep releases back to Victoria alive in an unmaintained fashion for now. So this will not have a significant impact in the short term on what branches are in support, but you will see those names changing over from stable/victoria, for instance, to unmaintained/victoria in the next couple of months. Lots of information that was really helpful, Jay. To ask my own question, since I don't see anything from the audience here just yet, kind of building off the last bullet point you were talking about, having an OpenStack-wide group: what do you think about switching to the maintainer model for all of OpenStack, rather than continuing to maintain project-specific leadership? Because, like you were saying on the previous slide as well, it's hard to find people to stand up and be a PTL for every individual project. So, this was actually a pretty hot topic at the vPTG. I will preface this by saying you're getting a little bit of Jay's opinion; I'm going to try to give you a layout of the whole landscape, but you're definitely going to see this through my lens. There are a couple of things here. First of all, this sort of periodic check-in that the elections and leadership processes provide acts as a project health check. It gives the Technical Committee a chance to say, if there's a project that has such little activity that it hasn't got a leader elected, then it gives us a chance to go look and say: is this project being maintained? Is it supporting the latest Python releases? Is its CI passing properly? And that's a healthy checkpoint to have. Agreed. So that's one downside. The other is that there's kind of a baked-in assumption that there is some group of universal maintainers, that if we said this is a universal group then this will all get covered, and I don't necessarily think that's the case.
We're already working in a situation where we have a very low number of people maintaining some OpenStack projects, and in the case of these projects which might have lower levels of participation, really the solution to getting those projects well maintained is not for the existing contributors to spread themselves more thinly or gain more power over those projects. It's for the people who are invested in those projects, and who want those projects to succeed, to really invest time in them and get them going. Because what we're seeing is that as we maybe take action to mark a project as inactive, as a signal to say this project may be on its way out, that it may not have the activity it needs to remain in OpenStack, people will stand up and say, wait a second, I'm using that. And so I'm trying to use this opportunity on OpenInfra Live to say: this might be coming down the path for a project that you're using, if you're not paying attention, if you're not involved in the community and taking that time to make sure that the projects you're using are healthy and investing that time. So that's kind of my answer. There have definitely been ideas like that tossed around; there's some support for it in the Technical Committee and some support against it. But I think the core idea is that if you're using one of these projects that's not part of the core OpenStack projects, that's maybe one of the ones on the side that aren't used as frequently, and that's something you're invested in, you're building a business on, you're building an infrastructure on, then maybe put 10% of an engineer's time at it as well. And we'd be happy to help onboard those folks and get them going; like I said, even reach out to me personally if that's something you want to do and you're not sure where to start. Yeah, awesome. Good answer. I know it's a tough one and it will probably continue to be a heated conversation anytime it comes up. But yeah, awesome. Well, next up we have Ironic, with Jay Faulkner. So I get to change hats here. I'm also the project team lead for Ironic this cycle, and we're going to talk a little bit about what we discussed at the vPTG. I will note we sort of gave our good news in the release-notes version of this talk, the OpenInfra Live session a couple of months ago when we were talking about all the cool stuff we released in Bobcat. This is mostly deprecations, so if you want to hear more about the cool new stuff, go watch our Bobcat release episode. We're a little careful about promising what things are coming, so we only like to announce when they're done, but we do have some things that we talked about at the PTG that are useful for you to know are coming down the pipe if you're a user of Ironic. First of all is sort of an apology: as someone who's been a leader and a senior member of the Ironic team for a while, I personally feel pretty responsible for this. We've not done a good job of prioritizing bug triage at all on the team. We acknowledged that as part of the PTG and created a bug deputy role. We're rotating that among the cores, something that many other OpenStack projects have done to success, to make sure that we're actually laying eyes on bugs when they're being filed. If you've had a bug that was filed in the past and it wasn't responded to, we're sorry about that. We know that's not ideal, but we're going to try to fix that. And just trusting that folks will look at the bugs is not the best way to do it.
So we're going about it in a more systematic way, so that hopefully it doesn't get lost in the future. You've probably seen this if you've watched these updates before, but we are in the process of migrating the separate ironic-inspector service into the ironic service, taking away that extra need to run an additional service. One thing that's exciting, though, is that even though we don't expect it to be 100% migrated, we're going to be at a point where we believe we'll be able to flip on some of that functionality, so that users will be able to get partial inspection functionality without running a separate inspector as early as next release. Keep your eye on the release notes and on our documentation when that release comes, as well as on OpenInfra Live, where me or someone else will probably be talking about it if that's the case. But we're aiming for that and it looks good. It actually looks like Metal3 might be able to utilize that as well, which is exciting. And the last one is we're putting Metalsmith into maintenance mode. If you're not familiar with it, Metalsmith was a client written for OpenStack Ironic, sort of with a TripleO use case in mind, to make it easier to spawn single bare metal machines using the Ironic API directly without other tools. This is a good service; it still works; the client is there. We just looked at it, evaluated it, and said: this is functionality that needs to be in the primary Ironic services, in our clients and our APIs. It doesn't need to be a special-purpose client. So we're going to be working toward taking those features in Metalsmith, and the lessons that we learned from writing that client, and applying those to creating new APIs for Ironic and new syntax in our clients, to make it easier to use and easier to deploy bare metal directly with Ironic without any additional pieces involved. I would not expect this to be a fast-moving change, but more an acknowledgement of a directional change: we do want to make deployments easy to do directly in Ironic. So one other thing that we want to talk about is kind of an update to our drivers, and I'm careful about the language here, because a lot of times when you talk about something being deprecated, it's about support going away, or about a community falling aside, or something. This is not really the case; this is almost like a victory. This is a situation where, when Ironic first started out, almost all deployments of it were using a driver that talked to IPMI, which is a very bad protocol for BMCs. It doesn't work very well, it's unreliable, and we fought against it for several years. One of the ways out of that was implementing some of these vendor drivers, like the original iLO driver, or the original iDRAC (now iDRAC WSMAN) driver, where we were talking to these advanced BMCs using their own language and their own APIs to get more advanced features, like piloting the initial support for virtual media in Ironic. But through the good work of the hardware vendors, the DMTF, and the Ironic contributors and vendors who've been working together to standardize this, we've got Redfish now. Redfish is a great standard for BMCs that has advanced features and is easy to use as a programmer.
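To give a flavor of what "easy to use as a programmer" means here: Redfish is just HTTPS plus JSON, so a minimal sketch like the following can walk the systems a BMC manages. The paths come from the DMTF standard; the BMC address and credentials are illustrative.

```python
import requests

BMC = "https://bmc.example.com"  # illustrative BMC address
AUTH = ("admin", "password")     # illustrative credentials

# List the systems this BMC manages; verify=False only because BMCs
# commonly ship with self-signed certificates.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems["Members"]:
    # Each collection member carries a link to the full system resource.
    system = requests.get(f"{BMC}{member['@odata.id']}",
                          auth=AUTH, verify=False).json()
    print(system["Name"], system.get("PowerState"))
```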
It's great stuff, but what this means is that some of these vendor-specific drivers are maybe not the best choice anymore. And so what we've decided to do is mark many of these vendor-specific drivers as deprecated, for technical purposes in the Ironic code, so they'll print a message on startup if you're using them; but mainly that's because we want to make sure that operators know that the Redfish driver is the one to use for new servers. So let's go down the list here. If you have an iLO-based server, a server from HPE that has an iLO-based BMC, and that BMC is an iLO 5 or older, then you can continue to use the iLO driver even if we mark it as deprecated. We're going to leave that driver in Ironic as long as we need to, until those iLO 5 based servers go end of life. We do want to thank the team up at HPE; they've worked with us for several years on making sure that iLO driver's good, and they've expressed a desire to continue validating it in Ironic. So I wouldn't expect it to stop working; we're just reflecting that it's no longer the preferred choice for brand new hardware. Because if you've got a brand new iLO machine and you're going to be running the next release of Ironic, well, you're going to want to use the Redfish driver for those iLO 6 and newer servers, which is very exciting. Similarly, if you're using a Dell server with an iDRAC in it, and you've got an iDRAC 6 or older, you're still going to have to use the iDRAC WSMAN driver, formerly known as simply the iDRAC driver before we added the WSMAN and Redfish variants. And again, we are going to mark it as deprecated, and it might print a message on startup, but we're not going to remove that driver at all until after, it's our belief, all of the hardware that uses it has gone end of life. Again, thank you to the folks at Dell who've helped in developing that driver over the years, testing it and keeping it working, and for developing the iDRAC Redfish driver, which is what you would use in this case; it's our Redfish-based driver with a couple of Dell-specific tweaks to make it work properly for iDRAC-based servers. And then these other two are drivers that are honestly maybe a more direct deprecation; in the iLO and iDRAC cases, this is simply a phasing out, moving from the old standard to the new open standard, and that's pretty exciting. In this case, the XClarity driver was originally written to talk to cluster manager software sold by Lenovo. This doesn't really fit in Ironic's model; we are designed to talk directly to servers and not to cluster managers. In this case, we're going to schedule it for removal, and our expectation is that you should be able to point our Redfish driver at most Lenovo servers and have them work. We certainly have not tested them to the extent some of the others have been, but if there are any bugs there, feel free to reach out to us and we can try to get them fixed. Similarly, the ibmc driver is being scheduled for removal. This does talk to Huawei iBMC BMCs, but due to just sort of global situations, the team that worked on the ibmc driver has been separated from the open source OpenStack Ironic team for a couple of years now, and that driver's not really gotten any additional work. It is our understanding that newer Huawei hardware should work fine with Redfish, but again, we don't have access to the hardware and we haven't tested it.
If you have access and you have any issues with it, feel free to file a bug; we'll be happy to fix it for you. But again, the new standard's pretty great. Thank you very much to the DMTF, the hardware vendors who've worked on that, and the Ironic contributors who've worked on pushing that, because this is a slide that I think every Ironic contributor has dreamed of being up here for 10-plus years: having an open standard that's going to provide the fully featured Ironic experience without needing to talk to vendor-specific protocols. But the one other thing I want to mention, as is standard practice for Ironic, is that we always publish detailed, edited notes from our PTG, and our plans for the cycle, in our documentation. That's the long URL if you're scared of the tiny URL or QR codes; just go to specs.openstack.org, hit bare metal specifications, and then you should see a project plan link, and that's what this is. So you can go here and look and you'll see the full curated list of all this, including the details on the driver deprecations that I just spoke about. So thank you. Thank you for covering both Ironic and the Technical Committee; there was a lot of content there, and thank you for all of what you do for both of them as a chair and a PTL and all of the other things. It's a very long list. Yeah, awesome. I do want to remind the audience that if you have any questions about any of what we've covered so far, please drop them in the comment section of LinkedIn or YouTube, where we're streaming; we would love to hear from you. Otherwise I'll just keep coming up with my own questions as we go. Next up we have the Neutron project, represented by Brian Haley. Hi Kendall. Hi, I'm Brian Haley. I am the Neutron PTL for the Caracal cycle. I just wanted to highlight some of the work we did during Bobcat and talk about some of the things we plan to complete in the Caracal cycle. But first I wanted to thank all the other core reviewers for all their support as I took on the role of PTL this cycle. While I've been a long-time contributor, since I think the Diablo cycle, I haven't had the time to take on the PTL role until now, and I'm happy to have a great team of developers behind me helping me every step of the way. So let's get to it. So first, our highlights: we were able to complete all of our SQLAlchemy 2.0 work this cycle. It's something that started being discussed in OpenStack in about August of 2021, I believe, and it took about two years and over 30 patches to complete the work in Neutron, thanks to the work of Rodolfo, the former PTL. So thank you to all the reviewers and all the contributors for that. Second, there is another multi-cycle effort, led by another contributor, Slawek Kaplonski; I think I counted over about 100 patches in Neutron from a large number of contributors. We were also able to start on Secure RBAC, getting phase one completed last cycle in Bobcat, and we have started work on phase two, which is the service role. In addition, we changed our default gating values to enforce these new defaults from Bobcat forward, so that now we can't merge code that will break the new RBAC. The next item is OVN expansion. OVN support was added to Neutron in the Train cycle and made the default backend shortly thereafter. It's taken a number of cycles, but support for OVN has been expanding into the other Stadium projects under the Neutron umbrella. Last cycle, both Firewall-as-a-Service and Tap-as-a-Service completed OVN support; VPN-as-a-Service just missed the Bobcat window.
But it will hopefully be merged soon in the Caracal cycle. Our next large item was led by Lajos. He has been leading the deprecation of the Neutron client library code and replacing it with the OSC/SDK equivalents. Last cycle, a number of projects were updated, including FWaaS, VPNaaS, SFC, and BGP VPN. That work is ongoing, and hopefully this cycle we will be able to complete the Horizon migration to the new code. And then finally, I'll explain SLURP. SLURP is the Skip Level Upgrade Release Process. It allows us to skip basically one cycle in between and do an upgrade, say, from Antelope to Caracal. We worked pretty hard on making our gate support it over the past cycle, both with DevStack changes and Neutron changes, and I think on day one we had a patch to switch our voting jobs to support SLURP and enforce it. That way we won't introduce any code that breaks it. So that's it for our highlights. I'll try to discuss some of the plans we have for the next cycle. So one of the biggest things on our roadmap that we've been working on for a couple of cycles now is OVN active-active L3 gateways and multi-homing. This is a feature that some of our telco customers had been asking for, and it allows Neutron routers to have multiple upstream external gateways. It adds support for multiple equal-cost multi-path default routes without the need to install additional static routes, as was previously required of operators. It also adds bidirectional forwarding detection in order to detect dead gateways and remove their corresponding routes. Work on this actually started in the Antelope cycle, and the plan is to complete the work in Caracal. It's been led by two engineers from Canonical, Frode and Dmitrii; I didn't put their names here, but thank you guys for all that work. In addition, something that we work on every cycle is shrinking the gap between ML2/OVS and OVN. This cycle we plan on getting one more item off that list, which is support for IPv6 metadata, which leaves us with only six remaining documented gaps. So if anyone has any more gaps they think exist, please contact us and we'll add them to the document. And then finally on this list: during the PTG we had a Nova cross-project meeting, and we really focused on a single topic this cycle, which is how we can make the optional multiple port binding extension mandatory. As OpenStack has matured, live migration has become a required feature for clouds, with tenants using it to move a VM across an AZ, or operators using it for evacuation when trying to take a node offline for upgrade. Unfortunately, it's not always possible to tell if it's enabled, and not all the network drivers outside of the core Neutron umbrella have enabled it. So we're working on updating this to be non-optional in a future cycle, and we'll work with the Nova team on that. And then I'll just add one thing I didn't have on this slide: it might not seem like we have a lot of future work, but one of the things the Neutron team does is work very dynamically; every week we have meetings to discuss RFEs. So I encourage anyone who wants to contribute, if they haven't seen something on this list that excites them, to reach out to us; every Friday we meet to discuss new things that you can add. So thank you for letting me share our Neutron PTG update. Yeah, thank you for joining us. I didn't realize that you'd been involved since Diablo; that's longer than I've been around.
So congratulations on being PTL and leading the project, and having so many excellent people behind you getting work done. Yeah, I don't see any questions still. Man, I'm going to have to think up some doozies for the end. Thank you, Brian. Next up we have Nova, and here to talk about Nova today we have Sylvain Bauza. Hey, nice to meet you all. So let's discuss what the Nova community talked about at the previous virtual PTG. As a reminder, you can see the previous OpenInfra Live episode about what we merged for the Bobcat cycle. Also as a reminder, what I will tell you is not exactly a promise that we'll actually have it in the cycle. It's more like saying, okay, this is a project priority and we really would like to have it, and basically we'll put as much effort as we can into making it happen. But time will tell. So what did we discuss at the PTG? First, in case you don't know, and for us this is pretty crucial: during the virtual PTG, you as the operators have the possibility to discuss with the project teams. With Nova in particular, we had one specific hour for discussing with the operators, which we named the operator hour, where we got a bunch of operators joining and basically telling their stories, in particular their pain points or feature requests. It was actually a very productive meetup between contributors and users of Nova. During that specific hour, we discussed two specific topics. One is the fact that when you're a public cloud, in general you have a huge number of flavors, and sometimes it's pretty tricky for your users to know exactly which specific flavor to use. So we discussed potential solutions we might pursue. The problem with the operator hour is that in general we have good ideas, but we need hands. Eventually we came up with some potential solutions around filtering flavors based on query strings. I won't explain that here because it's not yet proposed. But basically, if you're a public cloud, you're being impacted by the flavor explosion, and you would like to see this merged, and you have some time and know what to do and how to do it, reach out to me and I will explain the exact process for working on that. I think we probably have a solution; we just need hands. Another nice feature we discussed: in Nova we have a concept of Nova cells that helps you group your computes into some kind of logical set, and that's used mostly for scaling, because then you shard your RPC bus, and that's nice. The idea is that in general some people want to move compute nodes from some regions or environments to other environments. One way to tackle that case could be to let Nova cells be movable between environments. Again, that's probably a solution we identified, but we basically need hands on it. No promises again, but if you think it's a story you like and you would like to contribute to it, again, reach out to me. We also discussed with other projects at the PTG, mostly with Neutron and Cinder. Brian already explained a few minutes ago the issue we had with optional Neutron extensions, particularly with live migrations. So that's why during Caracal, particularly for multiple port binding, we'll require the Neutron backends to support multiple port bindings.
And for the backends that can't do it, we won't support them. With the Cinder team, we also discussed two different things, well, more than that, but I just briefly wanted to talk about those two main things. So we discussed, for example, the attachment issues that we were having. We had made some efforts before on cleaning up orphaned attachments, there were some residues left, and we discussed the different things that we could do. I know that Rajat will talk about that in a second as well. And we also discussed some new features that may come, for example, "I would like to encrypt my NFS volumes"; we discussed that specific feature. Next slide, please. So apart from those specific topics, we also discussed some specific Nova bits. One important effort that will be stressed this cycle is the current virtual GPU support that Nova has and how to improve it. We heard stories from operators about bugs and about new GPU support, for example the Ampere architecture from Nvidia. We are currently working on supporting those new GPU architectures in our project, and we would also like to work on the possibility for Nova to live migrate instances having GPUs between computes. There is an open spec being proposed that's not yet accepted, but I really hope to make some efforts this cycle, because that's a huge priority for us. We also discussed Nova supporting a new API endpoint that would be called, say, /health, that would give the operators the possibility to know the state of the RPC bus and the DB requests. There is a spec that got approved; if you look at the link below the slide, which will be shared in the chat, you will see the spec and the description of the feature for the health check. We will also hopefully be working this cycle on a new concept where, for example, you have different PCI devices that are related: say you have a PCI card that provides both a network PCI device plus some computing device, and you would like to group those two PCI devices into one set and make sure that those two PCI devices go to the same, for example, NUMA node. That specific spec, which you can find in the list of approved specs in the link below, is approved, and I just hope that we can get to some implementation during this cycle. Another discussion we had at the virtual PTG, sorry, that's still the same slide, was soft affinity and soft anti-affinity for instances across availability zones. The design discussion is still occurring, so please keep an eye on that. And one more discussion, where the spec is currently on review, is about making sure, for security reasons, that if your token expires, then your VNC remote consoles expire as well. Voila, those are basically the main feature bits that we discussed at the virtual PTG. Next slide, please. At the PTG we also discussed a few policy things. One was about the stable branches. Jay talked about the unmaintained status for the stable branches. That hasn't yet been fully accepted, but we are preparing for it, meaning that we really want to get rid of our stable branches older than Zed.
We also discussed the state of our virt drivers for which we don't have proper third-party CI to make sure that we don't regress every time we merge a change. For us, it is very crucial that we can test our changes against all of our drivers. For that specific reason, we decided to deprecate the support for VMware and Hyper-V. For Hyper-V, we decided to remove the support in Caracal. For VMware, what's interesting is that a company came up and said they were a user of the VMware virt driver: "we actually are using the vmwareapi driver; for us, this is crucial, how can we help?" Basically, we stopped saying, okay, we're going to remove support for vmwareapi, and we proposed to that specific company, because they had resources, that they could help us. So now the state is that, on a weekly basis, contributors from that company are reporting to us their efforts on providing a third-party CI for vmwareapi. If that becomes a thing, that could mean the vmwareapi virt driver might no longer be experimental some day. I think this is really a good story to tell: if you are a user and you really care about a feature, we are listening to you and we can try to find a way to help you. Two other things very quickly, because time is flying: we are trying to find a good way for our contributors and cores to find bugs and features to look at, and we may improve our CI to have some checks done by Python tools, just to ease our lives. That's basically it for me. Awesome, thank you so much, Sylvain. I do have a really good question, but I'm going to hold it to the end because we have a lot more to get through and we're getting close to the top of the hour. So let's jump right into Cinder. Today we have Rajat joining us to talk about what Cinder discussed at the PTG. Thanks, Kendall. So yeah, I'm the current PTL of Cinder, and I will discuss some of the highlights from the last Caracal virtual PTG. So as Sylvain mentioned, we have some attachment cleanup issues. These generally show up in operations like live migration, cold migration, and shelve/unshelve. What happens is that the volume might be stuck in the reserved state, or there might be duplicate entries in the attachments on the Cinder side, and basically we end up with inconsistencies and we are unable to use either the volume or the instance. So we discussed potential solutions for this case with the Nova team, and the solution we came up with was that whenever the Nova compute service initializes, it will check for these inconsistencies and clean them up. That will possibly be implemented in the upcoming Caracal cycle. Next, one of the topics we discussed was the image metadata inconsistency across Cinder, Nova, and Glance. So basically, image metadata is the extra properties of an image, and currently Glance doesn't have much of a limit on it; I think it allows up to 64K in size. But Cinder and Nova have some limitations. For Cinder, we have API validation, so whenever we try to create a volume from an image and a metadata field exceeds 255 characters, we reject the request, because Cinder doesn't accept those values. And I think it's similar with Nova: either it rejects the request or it truncates the characters. So the basic discussion was around keeping the behavior consistent, whether it's 255 characters or 64K, all around OpenStack.
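As a concrete illustration of the kind of check this implies today, here's a hedged sketch using openstacksdk that looks for oversized image properties before attempting a volume-from-image create. The cloud name and image name are illustrative, and the 255-character limit reflects the discussion above, not a published contract.

```python
import openstack

MAX_VALUE_LEN = 255  # the stricter Cinder/Nova-style limit discussed above

conn = openstack.connect(cloud="mycloud")   # illustrative cloud name
image = conn.image.find_image("my-image")   # illustrative image name

# Extra image properties ride along as a dict on the SDK image object.
too_long = {key for key, value in (image.properties or {}).items()
            if isinstance(value, str) and len(value) > MAX_VALUE_LEN}
if too_long:
    print("Properties that may be rejected or truncated:",
          ", ".join(sorted(too_long)))
else:
    volume = conn.block_storage.create_volume(size=10, image_id=image.id)
    print("Created volume", volume.id)
```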
And there is a mailing list thread going on regarding that discussion as well. So just be aware: if you're creating instances from images or creating volumes from images, make sure to check the size of the metadata value fields. Next up we had the new quota system discussion. This is not a new topic; it has been discussed before, but this is kind of a revival of it, and we are again planning to work on it. So basically, Cinder has a history of issues with quotas: they go out of sync from time to time, for example if the Cinder service fails during an operation, because everything is stored in the database and sometimes the reservations or the quotas don't get cleaned up properly when an operation fails. So yeah, we have been receiving issues related to that. What we decided to do was introduce a new quota driver that will do dynamic resource counting. So instead of storing everything in the database, we will dynamically count all the resources: if there are 10 volumes, we just go to the volumes table and count them. That gives us an accurate view of the deployment, of the resources that we have. One issue noted with that approach is that with a higher number of resources, this dynamic counting could take more time and might hit performance. So we also plan to introduce another quota driver that stores things in the database. It will be better than the existing one we have, because we will be rewriting it, so it will avoid a lot of the bugs and issues there. But basically, the plan is to implement the dynamic mechanism in this cycle, and then, depending on how much time it takes, introduce the stored mechanism after that. So those are the two proposals to address the quota system issue. Next slide, please. Yeah, another discussion we had is regarding the OSC and SDK work. In the Antelope cycle, we were able to get parity between the OpenStack Client and the Cinder client: all the commands that existed in the Cinder client were also there in the OpenStack Client. After that, in the last Bobcat cycle, we focused on getting SDK support for certain APIs, like the whole attachment stack (attachment create, update, list, and so on) and transfers, and some of that effort we were able to get merged. This cycle we will be continuing with the same, and we have some interns and volunteers who are happy to help in this effort. So basically we'll be continuing to add more API support to the SDK, and finally migrate all the commands in OSC to point to the SDK. So where they currently call the Cinder client, they will be calling the SDK. From a user point of view, you can go ahead and just use OSC, because we have, I think, all the support; if there is any support missing, do log a bug for it, but from our standpoint we have all the support, so you can just use OSC and you won't see any difference. The later efforts are mostly on the backend side of things. Lastly, there was a discussion around SQLAlchemy 2.0. We want to get support in for the new SQLAlchemy version, and there were some things blocking it, like the Alembic work, and I think there was reader/writer context work as well. But basically, all of that work in Cinder is mostly done; there were around 10 patches remaining, but I think right now all of them have merged as well. So Cinder is good to go on the SQLAlchemy 2.0 effort, and yeah, we don't have any issues blocking it at this point.
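For readers wondering what the SQLAlchemy 2.0 work actually changes in code terms, the heart of it is moving from the legacy query style to the 2.0 select() style. A self-contained toy example follows; the Volume model here is a stand-in, not Cinder's real schema.

```python
from sqlalchemy import String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class Volume(Base):  # toy stand-in for a real Cinder table
    __tablename__ = "volumes"
    id: Mapped[str] = mapped_column(String, primary_key=True)
    project_id: Mapped[str] = mapped_column(String)


engine = create_engine("sqlite://")  # throwaway in-memory database
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Legacy 1.x style (what the migration moves away from):
    #   session.query(Volume).filter_by(project_id="demo").all()
    # 2.0 style:
    volumes = session.scalars(
        select(Volume).where(Volume.project_id == "demo")
    ).all()
    print(len(volumes))
```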
That was pretty much it from the Cinder standpoint. Awesome. Thank you so much for having all the good discussions and then sharing them with us. I know not everybody was represented here today, but we really appreciate you taking the time, and everyone else who has spoken so far, to share all of the awesome discussions that you had at the last PTG. We still have two more projects to get through. So quickly, sorry James: today, to talk about Sunbeam, we have James Page. Okay, hi, thanks Kendall. Okay, so as we're quite a new OpenStack project, I just wanted to take the opportunity to introduce the project itself, but also to talk a little bit about the history and the origins of the Sunbeam project. So coming to the next slide, please. And this actually follows my involvement in OpenStack over the years quite closely. I've been a previous PTL of the OpenStack Charms project, which has been a successful deployment and operational tooling project for OpenStack. It's been around for 10 or 12 years, almost as long as OpenStack itself. It's built strongly on top of OpenStack using an application orchestration and modeling tool called Juju; it integrates with a machine provisioning system called MAAS and some container technology in the form of LXD, and is very much based on Ubuntu. And it has been proven in environments from tens to hundreds and potentially thousands of servers in some of the larger deployments that we've seen using the OpenStack Charms project. If we can flip to the next slide, please. Where we've really struggled with the OpenStack Charms project is making it smaller. Putting it onto a single machine, or onto a very small micro cluster of machines, has been a real challenge. And we had a little bit of a punt at doing this in the form of a project called MicroStack. Next slide, please. So MicroStack was built on some of the same foundations as the OpenStack Charms project: again, a solid core of OpenStack, based on Ubuntu, and using the snap packaging format to produce very much an image-based delivery of OpenStack onto smaller-footprint devices. But this presented some challenges in its own right, in that although we had some of the same core shared components (so next slide, please), we had a lot of different operational practice in terms of how we deploy and manage OpenStack through both of these solutions. So same core components, same core technology, but very different principles and practice in terms of deployment and operation. So alongside this story of success and failure in deployment, we've got some other things going on in the broader open source community around us. So next slide, please. And that really revolved around Kubernetes appearing as a different way to think about application deployment and management over time, and Juju then growing support for deploying charms, which are the encapsulation of each component of a deployment, to Kubernetes as a substrate for running applications on. And that's kind of why we're doing Sunbeam. So next slide, please. Sunbeam is a kind of reboot, a refactoring, taking the best practices that we've learned from the OpenStack Charms project and the new deployment and management practices that Kubernetes as a platform provides us, to build something that has a smaller footprint, is more repeatable from a deployment perspective, and is easier and more lightweight to manage over the duration of its lifetime. And that is what the Sunbeam project is about.
So we've been incubating this in the OpenStack Charms project for about 18 months to two years, and we finally applied for independent project status around the end of quarter one, quarter two this year, achieving that just before the summit in June, where we did our initial launch. If we could flip to the next slide, please. This is the high-level overview of the architecture of Sunbeam. It shows a multi-node deployment where we have some Kubernetes workers which are running the control plane of the cloud: all of those stateful services, databases, and messaging that support the control plane of an OpenStack cloud. We place those all into the Kubernetes substrate to leverage the rolling updates, image-based workflows, StatefulSets, all of the nice features that we're able to get from Kubernetes to help manage those parts of the OpenStack control plane over their lifecycle. But we're still able to place things directly onto bare metal machines, and things like the OpenStack hypervisor and compute components, storage components, and OVN and Open vSwitch for the software-defined networking all sit directly on the bare metal still, and we're able to integrate both machine-based deployment components and Kubernetes-based deployment components into a single set of reusable components. And that is what Sunbeam as a project encapsulates. So that's my very, very quick elevator pitch as to why we're doing Sunbeam and what it is. We did do our first vPTG participation this cycle. We had quite an in-depth review of our June launch and our route to becoming a formal project. We had some challenges; we've very much been walking the edge of the possible in terms of some of the stuff that we've been working on in the last year, but we made a reasonable, I guess, 1.0, as we would call it, for our June launch, and got that in front of quite a lot of the community at the summit in June. We spent quite a lot of time talking about how we manage the repositories associated with all the charms that form Sunbeam. In the past, in the OpenStack Charms project and for the start of the Sunbeam project, we've taken an approach of having a single Git repository per component. So we end up with, actually, a lot of repositories to manage for an actual OpenStack deployment; I think the OpenStack Charms project, for example, has nearly 60 or 65 repositories that they manage for the various different components. That's actually quite a lot of overhead in terms of managing the configuration for Zuul, managing the branching strategy, and managing all the individual components. So we spent quite a lot of time talking about the concept of a monorepo for the Sunbeam charms: a single repository that contains all of the charm code for all of the individual components of the OpenStack control plane, and the data plane as well. And we worked that through; we got quite a lot of feedback from people who've worked both on the OpenStack Charms project and on the core team working on Sunbeam, and we've since pivoted and just completed the work to move to a monorepo approach. It gives us a much more integrated and lower-maintenance approach to managing the set of charms that form Sunbeam, and it will actually reduce our testing and gating footprint quite significantly, as we do more integrated testing of changes rather than independent testing of changes for the individual charms. And that kind of dovetails into full integration testing in the gate.
We've been able to kind of do partial functional testing but we've never been able to do a full integration test and that's something that we can now feel we can achieve with a Mono repo and by reducing our overall testing resource footprint in our check and gate capabilities. We've also spent quite a lot of time thinking about how we manage the OCI resources that we deploy for the control plane and how we feed in updates to those OCI resources into the kind of testing pipelines that we know new published container works with existing published charm and how we manage that through a testing and release pipeline to get that to end users. And we've also spent again, quite a bit of time about reducing the footprint of Sunbeam as much as possible. We were a bit uncomfortable on our 1.0 release about the disk footprint we were consuming. Quite a lot of OCI containers coming down and being deployed into Kubernetes. And we've taken a couple of approaches there. We've had a short-term focus on reducing the number of different containers that we use which has worked quite effectively and longer term we'll be looking at reducing the size of the individual containers even further so that we really get that down to a smaller focused footprint as possible. We also spent quite a bit of time having had some, quite a bit of initial interaction with first-time users who were trying out Sunbeam and seeing how it worked and asking lots of questions and debugging problems they found, whatever it might be, on how we help those users help themselves as much as possible. And that kind of dovetails into A, working more on our documentation to make sure people understand the concepts and secondly improving our kind of bug reporting and triage process and how we engage with the broader community of potential users out there. Anyway, I think that's all I needed to say so I'll hand back over to Kendall now. So thank you. Well, don't leave just yet because we actually have a question from the audience. So, Laszlo who is watching on YouTube today asked if Sunbeam would be supporting a migration path between traditional juju terms, based deployments and Sunbeam. So the short answer is yes to what's actually a very complicated problem. So yes, the plan is that we will, we will be looking at having a migration path from an OpenStack Charm based deployment to a Sunbeam based deployment. That might not necessarily be 100% in place. We may need to go through a process of for example, evacuating hypervisors to new Sunbeam ones and then taking old OpenStack Charm hypervisors out of process. So although there's some kind of technical code that needs to be written to support that, a lot of that will be around the workflow to actually migrate an OpenStack Charm space cloud to a Sunbeam cloud. That's not like coming in the next six months but probably nine, 12 months out. It's a bigger problem but we've definitely been designing and thinking about Sunbeam with that as a longer term objective. So I don't want to ask anybody on something they can't upgrade from. So that's something I'm really keen on. Well, excellent. Thank you for that. And thank you for the overview as well. I think that was really helpful for the audience to kind of know a little bit of the history and the focus and the goals of why Sunbeam has grown out of Juju Charm. So thank you very much. And we have one more today. And actually, so this last one will be a recording because Carlos de Silva was not able to make it today. 
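To give a feel for the evacuation-style migration path described above, here is a conceptual sketch using openstacksdk. It is not Sunbeam tooling; the cloud name, host names, and exact keyword arguments are assumptions that should be checked against the SDK version in use:

```python
# Conceptual sketch only (not Sunbeam code): drain a charm-managed
# hypervisor with openstacksdk so it can be retired, then rebuilt as
# a Sunbeam node. Check keyword arguments against your SDK version.
import openstack


def drain_hypervisor(conn, old_host: str, new_host: str) -> None:
    # Stop scheduling new instances onto the old host.
    for svc in conn.compute.services(binary="nova-compute", host=old_host):
        conn.compute.disable_service(svc, disabled_reason="moving to Sunbeam")
    # Live-migrate everything across; real tooling would watch each
    # migration for completion and handle failures.
    for server in conn.compute.servers(all_projects=True, host=old_host):
        conn.compute.live_migrate_server(
            server, host=new_host, block_migration="auto")


if __name__ == "__main__":
    conn = openstack.connect(cloud="mycloud")  # assumes a clouds.yaml entry
    drain_hypervisor(conn, "old-charms-node", "new-sunbeam-node")
```

As James notes, most of the real work is in the surrounding workflow rather than in code like this.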
Well, excellent. Thank you for that, and thank you for the overview as well. I think that was really helpful for the audience to know a little bit of the history and the focus and the goals of why Sunbeam has grown out of the Juju charms work. So thank you very much. And we have one more today. Actually, this last one will be a recording, because Carlos de Silva was not able to make it today. So he took the time to record a summary of what Manila talked about at the PTG that just happened. So we'll play that, and that's our last one, I think. Hello, everyone. I'm Carlos, the OpenStack Manila PTL, and today I would like to go through a brief summary of what we discussed during the Manila PTG: talk about some features we're targeting for the cycle, some things we have been doing for a couple of releases where we have updates to share related to discussions that happened during the PTG, and also talk about some tech debt and documentation enhancements that we discussed during the PTG too. So it was a very productive PTG, and we had people from Red Hat, NetApp, CERN, SAP, HPC Cloud, and a couple of other companies as well. We also had an operator hour, where we gave operators the opportunity to talk about features they would like to see in Manila, or suggestions for us, things that could come in the future. So it was a moment to connect with the operators. Thanks to those that participated in both the PTG and the operator hour; it was really great. And yeah, I'm happy to share some updates with you now. So the first thing I would like to give an update on is human readable export locations. This is a feature that had a spec, and the spec was merged a couple of cycles ago. Export locations, in essence, can be difficult to memorize, as some of the backends might use the share UUID. There was a spec documenting a change for this, which pretty much consists of allowing users to configure the export path to something that will be easier for them to memorize and use as the mount path when mounting their shares. So NetApp is picking up the work, and we also discussed an additional use case, which is the possibility of allowing export location updates. We agreed that this could be very disruptive: if you change the mount path of something that is already mounted by someone, you could pretty much remove their access or take down someone's connection. So we wouldn't like to do that unless there is a very, very strong use case. This could be something that we discuss again in the future, and the spec will be updated with additional details. For share backups, the generic implementation of share backups was implemented in the previous cycle, and we discussed a share-manager-driven, driver-assisted share backup driver implementation, as well as the data service backup driver interface. CERN had some interesting thoughts to share. They would like to introduce a new driver for their own backup solution, and they found out that the backup driver interface calls back into the data service and that the driver interface needs some changes. So they are planning to work on those items during this cycle. A couple more features we are looking forward to getting implemented: one of them is Barbican integration for managing at-rest encryption keys with Manila. SAP is working to integrate Manila with Barbican, and in summary the proposal consists of having admins create share types with the encryption information; the share would be created using such a share type, and Manila would generate the key with the help of Barbican. Users will also be able to provide their own encryption keys managed by Barbican, and the keys provided to Manila will be sent to the share's backend to encrypt the data. In case the user provided a key and the share type also has the encryption configuration set, then we would use the user's key in favor of the share type one, just because we would respect the user's decision in this. Manila will not control the actual encryption of the data; this would be done only by the backend driver and its proprietary methods. If you'd like to read more about this and share some feedback, please take a look at the proposed patch.
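A conceptual sketch of the key-selection rule just described: the user's own Barbican key wins over the share type's, and Manila only hands a key reference to the backend rather than encrypting anything itself. All names here are hypothetical; the real design lives in the proposed spec:

```python
# Conceptual sketch only: key selection for the proposed Manila/Barbican
# at-rest encryption flow. All names are hypothetical, not Manila code.
from typing import Optional


def pick_key_ref(share_type_specs: dict,
                 user_key_ref: Optional[str],
                 barbican) -> Optional[str]:
    if user_key_ref:
        # The user supplied their own Barbican-managed key: respect
        # their decision, even if the share type would generate one.
        return user_key_ref
    if share_type_specs.get("encryption_at_rest") == "true":
        # Hypothetical extra spec: Manila asks Barbican for a new key.
        return barbican.generate_key_ref()
    return None  # unencrypted share


# Whatever reference is chosen is handed to the backend driver, which
# performs the actual encryption with its own proprietary methods.
```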
For deferred deletion (actually, the next two features I'm mentioning come from bug fixes, or bugs that were reported against Manila, and one of them was deferred deletion): users might want to delete multiple shares or share snapshots and then immediately create multiple shares again. But this deletion could take a while on the backend; if the share is replicated, for example, there is a lot happening, and the deletion could take a while. So automations and users would have to wait until the share is deleted and the quotas are released before they could take further actions with shares or create more shares. The idea is to have a deferred deletion, which would allow automations to proceed with their tasks without waiting for the drivers. At the same time, we agreed that when we identify that it is a deferred deletion, we should release the quotas right away, and then if there is an issue with the deletion we would handle the errors internally; there's a small sketch of this flow after this update. And the last item in the features planned for 2024.1 is an ensure shares API. Basically, we don't currently have a way to recalculate the mount paths of the shares. So, for example, if an IP configuration of a backend or something has changed (in CERN's case, it was because the Ceph Mon IP address had changed), they wanted a way to update the export paths without restarting the manila-share service. They would like the possibility to do that without needing to restart the service. We are coming up with an idea of providing new APIs that will allow people to trigger the piece of code that is only called when we restart the service, which is ensure shares. This will recalculate the export paths or mount paths, get the fresh information, and write it into the Manila database. So I'm looking to write a spec for this, and yeah, that's something we can pick up for this next cycle too.
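Circling back to the deferred-deletion idea above, here is a minimal sketch of that flow: release quota immediately, delete in the background, and handle backend errors internally. The names are hypothetical, not Manila's actual internals:

```python
# Minimal sketch of the deferred-deletion idea (names hypothetical, not
# Manila's internals): release quota immediately, delete in background,
# and handle backend errors internally instead of blocking the caller.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass
class Share:
    id: str
    size: int
    status: str = "available"


class QuotaTracker:
    def __init__(self, shares=10, gigabytes=1000):
        self.shares, self.gigabytes = shares, gigabytes

    def release(self, shares: int, gigabytes: int):
        self.shares += shares
        self.gigabytes += gigabytes


executor = ThreadPoolExecutor(max_workers=4)


def delete_share_deferred(share: Share, quota: QuotaTracker, driver):
    share.status = "deferred_deleting"             # hidden from normal listings
    quota.release(shares=1, gigabytes=share.size)  # caller can create again now

    def _backend_delete():
        try:
            driver.delete_share(share)             # may be slow (replicas, etc.)
        except Exception:
            # Errors are retried/handled internally; the user already
            # got their quota back and is never blocked on this.
            pass

    executor.submit(_backend_delete)
```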
For multi-release effort updates, we have some updates for the CephFS NFS drivers. The enhancements for upgrading to a cephadm-deployed NFS Ganesha were merged, and we are continuing to test with the CI and also planning to enable the ingress service with cephadm. We will add a deprecation warning to the in-tree NFS Ganesha manager module that uses D-Bus to communicate with NFS Ganesha; the deprecation will begin this cycle with the warning, and the module will actually be removed in the next .1 cycle, so that we respect the tick-tock deprecation cadence. For SQLAlchemy 2.0, Stephen has been doing an amazing job. We still have some missing bits in the Manila code, and we agreed with him that we could provide some help with testing, so we are pretty much trying to help him speed up testing in the Manila repositories; yeah, we've been working with him on that. For multi-release efforts, these are some topics: share transfers is a feature that we started working on, but we don't have an assignee. So if you'd like to learn more about that, please ping us; we'll be happy to walk you through it. It's a very nice feature to work on. The SDK is something that we have in progress. We have been doing a lot of work with university interns, and they've been doing an amazing job; we are hopeful that in the near future we'll get full coverage. We already have a couple of the APIs that we needed for interacting with other services, but there are still some bits missing. For OSC, it's pretty much a matter of adding more functional tests, like negative functional tests, and some testing that we should do, but we've reached parity, and we've even already added a deprecation warning to the Manila native client. As for the metadata spec, it needs some updates, but we are doing some work on export location metadata this cycle. So that's another feature we plan to work on. And last but not least, some documentation changes we are planning. The first of them is called HallCasts, an idea we have been cooking for a while, which consists of short videos to help newcomers: short videos on how to set up DevStack, what Manila is, what the structure is, code walkthroughs, things like that which can be really useful for people, like running unit tests, and things that we have been asked about repeatedly by interns or by newcomers starting with Manila. So we want to condense that into the YouTube channel and make short videos of 5 to 10 minutes, so that we have that information handy. The other documentation item would be new contributor docs, which would be pretty much a checklist for new contributors to ensure that they are complying with what the core team, or what the Manila team, expects from them when submitting new features: timelines, what we expect in terms of functional tests, unit tests, features, APIs, and everything. This would be kind of like an information hub for all of the other documentation we already have for Manila. So yeah, that's all I had. Thank you, everyone. Thank you to everyone that managed to join us during the PTG; it was really great. And thanks for the space to share the OpenStack Manila recap. So that's it.
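As a concrete pointer to the SDK work Carlos mentions, here is a small sketch of consuming Manila through openstacksdk's shared file system proxy. It assumes a clouds.yaml entry named "mycloud", and since coverage is still being filled in, the available calls should be checked against the SDK documentation:

```python
# Small sketch of consuming Manila through openstacksdk's shared file
# system proxy (assumes a clouds.yaml entry named "mycloud"; proxy
# coverage is still being filled in, as noted in the recap above).
import openstack

conn = openstack.connect(cloud="mycloud")

# List shares with details; the OSC plugin covers similar ground.
for share in conn.shared_file_system.shares(details=True):
    print(share.id, share.name, share.size, share.status)
```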
That was probably one of the most convincing recordings I've ever seen; it was so real, it was like he was here. But that was our last update, so that wraps things up for the last PTG of 2023. And we have selected the dates for the next PTG, which will be the first one in 2024. So mark your calendars and get involved: April 8th through 12th. If everybody wants to come back on video: I know we had some people that had to drop, because we've definitely gone over time here, but I really want to thank all of our awesome speakers here today. Thank you, Jay, Sylvain, James, Carlos, Steve, Brian, Rajat, and Carlos, even though he wasn't here. I know I said it, but thank you so much for joining us today and taking the time to share everything that you discussed at the PTG. And a special thanks to our excellent audience out there for joining the stream and participating in our show. I think we are pretty much out of time, but I want to make sure that you all know to tune in next week for another episode of OpenInfra Live. Our large scale ops deep dive series is back, and this time they're inviting special guest Dan Paik from Samsung SDS to discuss their deployments and operations. So you definitely don't want to miss out. Also, don't forget: if you have an idea for a future episode, we want to hear from you. Submit your ideas at ideas.openinfra.live, and maybe we will see you on a future show. Thanks again to today's guests, and we'll see you on our next episode of OpenInfra Live.