Thank you for joining us and welcome to OpenInfra Live, the OpenInfra Foundation's hour-long interactive show sharing production case studies, open source demos, industry conversations, and the latest updates from the global open infrastructure community. We are live here most Thursdays at 14:00 UTC, streaming on YouTube and LinkedIn. My name is Kendall Nelson. I am a senior upstream developer advocate at the Open Infrastructure Foundation, and I'll be your host for today. As I mentioned, we're streaming live, and we'll be saving time at the end of the episode for any questions you have throughout the episode as we introduce all of our excellent speakers today. So if you have questions, please drop them in the comment section throughout the show, wherever you happen to be watching us, and we will answer as many as we can. Today is all about Antelope, the 27th release of OpenStack. We're back at the start of the alphabet; it's crazy to think that we've had 26 on-time releases and now we're back where we started. But as always, we couldn't have accomplished this without the help of our member companies and our worldwide community, and as a result we have an excellent lineup of community members here today, many from our member companies, to talk about some of the new features and fixes that were released just yesterday as part of the 2023.1 Antelope release. First up, we have Sylvain here to talk about updates to Nova.

Hey Kendall, thanks. Hello, I'm Sylvain, the Nova PTL, and also the Placement PTL, since Placement is maintained by the Nova team. So let's discuss first what we did for Antelope, and then we'll discuss the next release. Can I have the next slide, please? Okay. For Antelope we had six specs, or six blueprints, that were merged, out of 18 that had been accepted. To be honest, we only landed six not because we didn't have time to review them, but more because priorities differed between contributors, so unfortunately we didn't have time to merge more of them. On the bug side, 27 bug fixes were merged, maybe more, if I think about all the changes that went in for bugs. One important fix in particular was for a CVE we had. If you don't know about that CVE, please look it up or ask us, and make sure you have the fix in place if you are affected. For Nova, that fix is only available upstream down to stable/Ussuri, but when I talked with most of the people who build specific products on top of Nova, I think they should be able to carry the fix in their own products. We had fewer contributors this cycle than last cycle, where I think we had 56, so that's maybe something we could discuss at the next virtual PTG, which I'll explain in a moment. But anyway, thanks to the Nova team for all the work. I know it's difficult to work on both the project and the other things you have on your plate, so I'm definitely happy. One thing operators also need to know: this is our first of what we call SLURP releases, which means that, since Antelope is the first SLURP release, you can upgrade your compute services directly from Yoga.
That means that, for example, you can upgrade your services to Antelope while continuing to run Yoga computes, and that should work. I say "should work" because we tested that it works, but it's still more of an experimental feature, so please test it. Anyway, we'll discuss SLURP releases more later. Next slide, please.

What we actually delivered for Antelope was a set of different features. First, nothing is really changing in terms of use cases for PCI devices, but the scheduler and the Placement API now track and verify PCI devices. That's better because we had a few problems with the previous PCI device scheduling, and with this new feature it helps us, for example, to have proper allocations (sorry if you don't know what an allocation is), and it also lays the groundwork for new PCI device features. Another feature we created is power management for dedicated CPUs: if you have dedicated CPUs, with Antelope you can ask Nova to power down the physical CPUs used by an instance when that instance is deleted or stopped, or alternatively to modify the power governor for those CPUs. That's new in Antelope, and as a reminder it only works for dedicated CPUs. Another feature: we don't support renaming a compute service, but even though we don't support it, sometimes it happens that computes get renamed, and then you have problems. So we created a feature that persists the compute node UUID and verifies it, and if the name has changed we say, sorry, it won't work, because if we start that compute anyway, you will have even more problems. Another feature: for SPICE consoles, you now have new compression settings. One important change accepted and completed for Antelope is that the new roles and new policies are now enabled by default, which means access is no longer just about admins and end users; you will see the new policies, and I won't explain them all now. And two other features: you can provide an instance hostname as an FQDN if you use that specific microversion, and when you evacuate an instance with the new microversion, the instance won't automatically be restarted. We did that because sometimes operators prefer not to automatically restart the instance, for example because they have other services to run before the instance itself should run. So with that specific microversion, the instance stays stopped.
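For readers following along at home, here is a minimal sketch of what enabling the dedicated-CPU power management Sylvain described might look like. The option names below reflect the libvirt-driver options added in Antelope, but verify them, the CPU range, and your service name against the Nova configuration reference for your release before relying on this:

    # Dedicated CPUs must already be configured on the compute node
    # ("4-15" is a hypothetical pinned-CPU range):
    crudini --set /etc/nova/nova.conf compute cpu_dedicated_set "4-15"
    # Enable power management of those dedicated CPUs:
    crudini --set /etc/nova/nova.conf libvirt cpu_power_management True
    # "cpu_state" offlines idle dedicated cores; "governor" changes their
    # governor instead of powering them down:
    crudini --set /etc/nova/nova.conf libvirt cpu_power_management_strategy cpu_state
    # Restart the compute service (unit name varies by distribution):
    systemctl restart nova-compute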
Next slide, please. Now that we've discussed the Antelope features, as a reminder, next week we'll have a virtual PTG, and the Nova team will be holding its discussions there. I sent an email about it to the mailing list, which you can look up, but I'll explain it now. If you want to talk with the Nova contributors, we are more than happy to discuss with you all, for example about your bugs, the use cases you have, or the features you miss. That's one simple way: just come and talk with us. You will see we have multiple topics to discuss; the topics are listed on the left. But you also have a dedicated operator hour where you can come and talk with us. As a reminder, that's next Tuesday at 3 p.m. UTC, and remember that Europe switches to daylight saving time next weekend, so keep that in mind. But yes, 3 p.m. UTC next Tuesday, you will be able to come and talk with the Nova community. I think that's basically it for me. Thanks, all.

Thank you so much for sharing all of those updates and telling us about the hardware enablement and all the bug fixes. We really appreciate everything that you and the Nova team do for OpenStack, so thank you so much. If anybody in the audience has questions, please make sure to drop them in the comment section of wherever you're watching this, and we'll get to them at the end. Next up, we have Carlos da Silva to talk about Manila.

Thanks, Kendall. Hello, everyone. I'm Carlos, the OpenStack Manila PTL, and I would like to share with you today some of the great things that happened in Manila during the Antelope release cycle, and also some of the good things we intend to do for the Bobcat cycle. First, congratulations to everyone on this amazing release; I think it's great to see all of the work coming together. So let's get started with some of the good things we've done. What's new in 2023.1, also known as Antelope? The first major feature we worked on during this release is share transfers, and it is already available. You can use it for transferring shares between projects; the behavior and purpose are quite like Cinder's. You start a transfer of a share from one project, and then accept the share transfer in the destination project. There is more documentation on how to use this feature, and it is currently available only if you are using DHSS=False. There are also discussions happening at the next PTG, which is next week, the virtual PTG, where we intend to talk about some enhancements for it and possibly extending it to DHSS=True as well. So if you're interested in that, please join us at the PTG.
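As a quick sketch of the transfer workflow Carlos described, which mirrors Cinder's, something like the following should work. The exact command names here are an assumption; check the python-manilaclient documentation for your release for the precise syntax:

    # In the source project, create the transfer (the share must be on a
    # DHSS=False share type; "my-share" is a hypothetical share name):
    manila share-transfer-create --name demo-transfer my-share
    # Note the returned transfer id and auth_key, then, authenticated against
    # the destination project, accept it:
    manila share-transfer-accept <transfer-id> <auth-key>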
The next big thing I'd like to talk about is metadata for share network subnets. Basically, if you have a deployment using, for example, NetApp, with a network layer that does not provide the traditional segmentation details through the Neutron API, with this release you have the ability to let Manila know about those details. You can set metadata on the share network subnet, and it will be read during the network configuration process for share servers. So when you're creating shares and would like the back end to know some specific networking details, you can pass them through the share network subnet metadata. Another feature introduced: in case you're using CIFS in your deployment and you missed adding a default Active Directory site to your security services, we've got you covered. You can now set a default AD site on a security service while creating it, or when updating it in case it doesn't have share servers yet. On the next slide, we'll see that we also had some RBAC updates. The RBAC work for this release consisted of ensuring that we have enough testing coverage and that everything we have been implementing for role-based access control works correctly. One of the things we like about doing this in the Manila community is that we are able to organize events that bring all of the contributors together, and for this cycle we had a hackathon where lots of contributors from different affiliations focused for a couple of days on writing more test cases and enhancing testing coverage for RBAC. So we had lots of new test cases and new test scenarios merged into Manila's functional tests, and it's amazing work; we couldn't be happier with the outcome. In the next cycle we intend to have a couple more of those events, which is also something we will be discussing at the PTG, so if you are interested, would like to know more, or have some thoughts about it, please join us. We also had several bug fixes and enhancements to the features we have.

We also have some good plans for the 2023.2 release, which is Bobcat. For 2023.2, we intend to introduce share backups for Manila. It would be a kind of generic backup approach for Manila shares: you would be able to use the generic approach, which relies on the Manila data service just as the generic share migration approach does, but if you are using a third-party vendor driver that implements this feature, you would be able to use the native capability of that storage. So that's one of the nice things we intend to merge. The specification for this was accepted during the Antelope cycle, and now we are focusing on the implementation. We also want to add the ability to specify metadata for share export locations. We have been gradually adding metadata to our resources according to need, and we've been reworking the metadata-setting mechanism and APIs to make them more generic, so it's already a multi-cycle effort, and this cycle we want to extend it to share export locations as well. This is one of the projects we are proposing for Outreachy, so we want to have an Outreachy intern working on it. If you are an Outreachy applicant, or you know an applicant interested in working on a nice feature with great mentors in a great community, please let them know about this or get in touch with us. A couple more things: introducing more coverage of the Manila APIs in the OpenStack SDK, so we can have more interaction with other projects and make those APIs available, and CI stability and different approaches to scenario testing, which is also something we are discussing now; next week we have a couple of topics where we will talk about this. As I said, these plans for Bobcat are being discussed at next week's virtual PTG, so if you are an operator or a contributor interested in any of these, please join us. The sessions, time slots, and topic etherpad are already available on the PTG page, which everyone is sharing at this point, so please check it out and join us next week. And that's pretty much it for the Manila updates.

Awesome, thank you. It's really cool hearing about how you ran a focused hackathon dedicated to RBAC; I think that's a really good way to make progress on the community goal, so maybe something other projects can try for Bobcat as well. Thank you so much. Thank you. And next up, we are going to dive into what Cinder accomplished during the Antelope cycle, and with us today to talk about Cinder is Rajat. Thanks, Kendall.
Yeah. So I'm Rajat Dhasmana, the Cinder PTL, and I'm going to share a few updates about Cinder. So, what's new? We have three new volume backend drivers: the first provides support for both the iSCSI and Fibre Channel protocols, and the other two support NVMe over TCP. It's great to see new drivers adopting new technologies, and we have more and more NVMe drivers, which is really good for the project. Apart from that, we added the commands that were missing from the OpenStack client. We had this parity gap between the Cinder client and the OpenStack client, where a few commands were missing from the OpenStack client. We added eight new commands in the last cycle, and now the OpenStack client supports every command that the Cinder client supports. One piece of future work for the next cycle is OpenStack SDK support, which we will be discussing in the Bobcat cycle, but for now you can use the OpenStack CLI for all Cinder operations. Next slide, please.

So, what's changed? We have removed support for creating multi-attach volumes the legacy way. To briefly explain: in the Queens release we added support for creating multi-attach volumes, and the mechanism was that a non-admin user could pass a multi-attach parameter in the volume create request. We soon realized this was not a good idea, because non-admins could create multi-attach volumes and cause data corruption without having the right file system set up. So we delegated that task to admin users: an admin now creates a multi-attach volume type, and a non-admin user uses that type to create the volume. We still supported the legacy way alongside it, but recently a lot of customers were having issues with the legacy path, so in this cycle we finally removed it. If any deployments are still creating multi-attach volumes the legacy way, please switch to the new way, as it is no longer supported after Antelope.
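As a sketch of the admin-managed flow Rajat described, the multiattach capability is carried as an extra spec on a volume type; the property key and value below follow the documented Cinder convention, but double-check against your release's admin guide:

    # As the admin, create a volume type carrying the multiattach extra spec:
    openstack volume type create multiattach-type \
        --property multiattach="<is> True"
    # A non-admin user then creates a shareable volume from that type
    # ("shared-vol" and the 10 GB size are hypothetical):
    openstack volume create --type multiattach-type --size 10 shared-vol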
Another thing: we had one major issue when restoring backups into thin-provisioned volumes. Whenever we restored a backup into a thin volume, it got converted into a thick one, which was really bad because it caused more space consumption. That is now finally fixed: if you restore a backup to a newly created thin volume, its sparseness is preserved. Next slide, please. Okay, so what's fixed? We had a CVE; Sylvain mentioned the same one. I will briefly describe what it is, just for context. If we have a VMDK image file, an attacker can inject a path to host configuration files into it, and when we then convert it to a raw file, the config data can be read out of that raw file. So it was an attack vector in VMDK files, and it doesn't matter whether you are using the VMware driver or not; any VMDK file could be exploited the same way. VMDK has multiple subformats, and there are two subformats we support that don't expose this vulnerability. They are listed on the screen: streamOptimized and monolithicSparse. These are the only two subformats we now support for VMDK; other subformats are rejected as of now. There is a config option to adjust this, but to fix the CVE we had to limit the set of subformats we support. This change was backported all the way to Train. We have already released all the stable branches containing this fix, and we took it back to Train because Train still supported Python 2.7 and I know some deployments are still running old versions. So the fix exists in Train, and if you want to fix your deployment, since we are no longer releasing Train, you can take a look at that patch and make the required changes in your deployment yourself.
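The config option Rajat alludes to is the guard rail the CVE fix introduced. As a sketch, assuming the option name and section match your release (verify against the Cinder configuration reference), pinning it to the two safe subformats looks like this:

    # Restrict accepted VMDK subformats to the two that don't expose the
    # arbitrary-file-read vector described above:
    crudini --set /etc/cinder/cinder.conf DEFAULT vmdk_allowed_types "streamOptimized,monolithicSparse"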
Next slide, please. We have some stats from Stackalytics. As you can see on the screen, we had 70 contributors by reviews, 47 by commits, 53 who filed bugs, and 11 who resolved bugs, with around 150 bugs resolved, which is a good number. We had a good amount of commits and reviews; it's not as high as the last release, where the stats were higher, but it is still a good number, and I'm really proud that we have very active contribution in our project. And moving to the last slide, we have the Bobcat PTG coming up. These are the dates for Cinder: we're meeting March 28 to 31, which is Tuesday to Friday, from 1300 to 1700 UTC. You can find it in the PTG schedule; I have booked the Cactus room for all the days from Tuesday to Friday. We also have operator hours, so any operators who are interested can join us on March 29 from 1400 to 1500 UTC. It would be great to have you there. And finally, we have a contribution guide: if you want to contribute to Cinder, you can go to this link.

Awesome, thank you so much for sharing all of that. I'm really excited to see the OpenStack client reaching parity and the SDK being your focus in the next release. It has been a long time coming trying to get all of the services to parity, and we really appreciate all of the effort you and the Cinder team have made toward that goal. Awesome. Cool. Thank you. So our last presenter for today will be talking about the updates in Ironic during the Antelope release. Take it away, Jay.

Hey, so thanks for coming and watching. I'm just going to talk a little bit about what we got done over the course of the Antelope release with Ironic. We'll start out with some of the numbers, and part of the point of this is that Ironic is a pretty big selection of projects. It's not just the one Ironic service; we provide a whole suite of things that can help you deploy bare metal, test it, and so on. Across all those projects we had 251 commits from 53 different people, with 23 different companies contributing, for over 25 thousand lines changed across 22 repositories. We also fixed two CVEs that were in CI tooling that we use. I'm not going to go into too much detail on those; if you're running VirtualBMC and sushy-tools in production, you should stop anyway, because they are CI tools. And I did want to mention we adopted a project this cycle as well. Some good folks who are in, or adjacent to, the Ironic community have been maintaining VirtualPDU, which emulates a PDU on one side and powers libvirt VMs on and off on the other. It's a great CI tool along the same lines as VirtualBMC and sushy-tools, and we've adopted it, so it's now part of Ironic and being maintained by us. So if you're watching this, even if you don't choose Ironic, if you're working on other bare metal software, come talk to us about our CI tooling; we've got some good stuff there.

Let's move on to what we did last cycle in the actual detail that you care about. Part of the reason I showed the numbers first is that a lot of the work we continue to do on Ironic is stability and CI work, making sure things stay stable and tested, behind-the-scenes things you might not always care about right now. But here are some of the things you can see. One of the big things we added is the ability to export Ironic conductor metrics to Prometheus. We already supported sending hardware metrics to Prometheus about the hardware Ironic is managing, and we supported sending application metrics from the conductor and the API into statsd. What we've done is essentially mash those together, so you're now able to send the application metrics from the Ironic conductor, about the performance of Ironic services, to Prometheus alongside all that hardware information, which hopefully makes it a lot more useful. I will note that we're not currently sending application metrics from the API; that was just a technical issue, and you should be able to capture everything you'd care about on the conductor, as our APIs are pretty thin. In addition to that, on the operability route, we've added support for the service role in our default policies. It's intended for service-to-service communication, and what we've done here is basically finish up our feature support for RBAC. So Ironic should support all the roles you would expect, and you can configure that; we might do more testing in the future, but as far as features go, they're there, and you should be able to use RBAC fully in your Ironic environment, including the service role. The third thing is more of a preview. We did actually add this feature in Ironic, and I'll talk about it, but it's one of those features I don't expect to be immediately useful to you all; rather, it enables us to do better things in the future. That's adding support for shards to our node API endpoint. It permits an operator to set a shard value on a node, and then later to query just a subset of nodes by using shard in the query string. The reason this is valuable is that many of the services that integrate with Ironic, such as some of the networking agents we work with and even the Nova compute service directly, have a scaling ceiling set at about as high as you could scale up a single virtual hypervisor. But because Ironic deals in real bare metal, we can scale much higher than that in a single Ironic installation, and we can't always expect everything around us to adapt to the specialties of bare metal. So what we've done is allow you to chop your nodes up, and then we'll be working in Bobcat, and maybe beyond, on making those various external tools, the Nova compute driver and some of our networking agents, shard-aware, so you can limit them to a subset of your nodes. That's going to greatly increase performance, and our hope is that by the time this entire process is done, the Ironic driver in Nova will look and act a lot more like the other drivers, which should lead to fewer bugs and a more easily understandable high-availability model for people who are already familiar with how Nova works with virtual machines. So that's really exciting, but we're only a step of the way there.
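As a sketch of what the sharding workflow Jay described might look like from the CLI: the node shard field landed in Bare Metal API microversion 1.82, and the command flags below assume a python-ironicclient recent enough to expose it (with older clients you could PATCH the node's shard field through the REST API directly). Treat the flag names and the "rack-a" shard key as assumptions to verify:

    # Request a new enough API microversion:
    export OS_BAREMETAL_API_VERSION=1.82
    # Tag a node with a shard key (the value is free-form; "rack-a" and
    # "node-01" are hypothetical):
    openstack baremetal node set node-01 --shard rack-a
    # Consumers can then operate on just that subset of nodes:
    openstack baremetal node list --shard rack-a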
I've got one more thing that's pretty cool. As part of this, you can see we're trying to increase the performance of the API. Last cycle we took a new approach with the node endpoints and greatly improved their API response times; this cycle it was the port and port group APIs. I say on this slide that it's 20 times faster; we have benchmarks that are higher than that, but I wanted to be safe and say 20 times, which I feel pretty confident you'll get in your environment, and in the real world you're likely to see even better performance gains. So we're happy to provide that performance gain, and I hope it makes your environments run more cleanly as you upgrade. So that's it for our features. Now, I did want to mention that we're also having a virtual Project Teams Gathering, Tuesday and Wednesday, although we have two different sessions on Wednesday, one for North America and Europe contributors and one for APAC. That'll be in the Folsom room; all the other sessions are listed on the same PTG website. We're planning features for Bobcat, and I've got the etherpad up with the schedule and some of the comments. One of the things we can struggle with at these events is getting really good, concrete feedback from operators, so if you're running a relatively modern Ironic, feel free to come by and talk to us about what features you're using, what's working the way you like, and what's not. But even more especially, next slide, we have a session set up just for you: a special edition of the Bare Metal SIG is going to be our operator hour at the PTG. This is specifically for anyone who's operating bare metal provisioning software; it doesn't have to be OpenStack Ironic or Ironic-adjacent. If you're using Metal3, which has Ironic inside, if you're using MAAS, if you're using any bare metal provisioning, come to our operator hour and let's commiserate. We have similar problems, we have similar use cases, and we all want to talk about it, so I want to have you there with us, chatting. That's going to be Wednesday morning at 1300; well, I say morning because of where I am in the world, it might be an afternoon or an evening for you, but Wednesday at 1300 UTC in Folsom. We really do encourage you to come to the Bare Metal SIG and talk to us; that'll be a great place to give your feedback, and hey, if you're running the software, I think we'd all love to meet you anyway. So thanks for listening.

Awesome, thank you for all those updates and all the invites; the detail behind everything is just so cool. I feel like there's something new I learn about Ironic any time I hear somebody talk about it, so very, very cool. All right, well, I would like to invite everybody to come back on screen and see if we have any questions from the audience. I haven't seen any come in yet, but I know I have one or two questions for you all. We're still waiting on Sylvain and, oh hey, perfect, we've got the whole group. Okay, so I have two questions that are kind of unrelated to one another. The first one: obviously the virtual PTG is next week, but we are also having an in-person PTG at the same time that we'll be hosting the OpenInfra Summit in Vancouver, and I was wondering how many of your teams are planning on RSVPing and signing up to meet in person.

I'm happy to start and speak for Ironic. I know we do have some contributors who are going to be there; I will be there, and I know some others. I don't necessarily think we're going to end up having a quorum for making decisions in person, because we just have such a distributed team, so there will probably still be a virtual element. But I always do appreciate the in-person sessions and getting to meet people face to face, and that can be especially helpful for cross-project things. I talked earlier about getting sharding implemented and some of the other things that integrate with Ironic, and I suspect the in-person PTG will be a huge help if we hit any speed bumps
along that implementation path. So that's sort of our approach. That might not be our primary planning venue this year for Ironic, because not everyone is going to be there in person, but I would still love to have you there, and would love to meet you if you're running Ironic; come by and say hello, even if it's not in an official session.

Yeah, to continue on that point, for us it would probably be the same. We won't have quorum, because not everyone can travel, but we started to discuss it, and we'll continue to discuss it during the virtual PTG from another point of view. The first thought we have as a team is to at least make time for these kinds of topic discussions, exactly like we do at virtual PTGs. In general we don't really need to reach a consensus at a virtual PTG; it's more time for setting a kind of direction, and I think the physical PTG will be quite the same: it will help the people who are there to find discussions and to set some directions. That's the first thing. Also, as Jay said, for example about CI: I have some concerns about the upstream CI, and if we can work as a team, as an OpenStack-wide team across multiple projects, on the CI issues we know about, I would love that. Plus, I know that in general it's difficult for operators to understand PTGs, but given that this time we'll have the Forum, the Summit, and the PTG together, I think it's very important to engage with the operators, and maybe have one day between the operators and us to look at their pain points, maybe look at the code, maybe look at some bug fixes. That would also be nice. Those are maybe the three main priorities I would like for the physical PTG.

Yeah, in the Manila case, we also intend to be there. We intend to have some sessions at the PTG, and we thought about this too: we might not take decisions there, but we intend to combine it with the virtual sessions as well, and we think this could be good timing to meet the operators and listen to them. Those would be some of the focuses we intend to have. We know there are a couple of Manila people coming, and it's great to see people in person and meet, definitely.

So, from a Cinder perspective, some people are really interested in going; they still haven't confirmed their status, but apart from the core Cinder team, we also have storage vendors like NetApp and Pure who are going to be there. In Cinder we have these mid-cycles between releases, two per cycle, so I was thinking we could do something like a mid-cycle there and hold some discussions, but it depends on the number of people. But yes, we have interested people going to Vancouver.

Yeah, I think there will be a lot of good opportunities to mix with operators and see new faces that we probably haven't seen, because we've only had one other in-person event as a community since the pandemic. Getting teams back together, rebuilding trust, and being able to look at each other not through a computer will be so good. I'm so excited, and I hope to see you all there. Make sure to get your team signed up by April 2nd, so right after the PTG is the deadline. Any other teams not
represented on our panel today are also invited, and we hope to see you all there in person at the Vancouver Summit. So I think we still have time for some more questions; let's see if we have any. I don't see any from the audience, so get your questions in, and if we don't get to them by the end of the episode, we can definitely circle back and answer them on the mailing lists and other channels, just to make sure everybody knows everything about what happened in Antelope. So my other question was: if there is one thing in particular you would like feedback on from Antelope at the Vancouver Summit, during the Forum, what would that one feature be?

It's a hard question. We already have one question planned for operators, for the operator hour we'll have at the virtual PTG, and I guess we'll ask the same in Vancouver. For us, for Nova, we started to implement unified limits. What unified limits means is that you can define limits in Keystone, and Nova then verifies against those instead of using the quotas we previously had. So that's one open question we have for operators. We said two cycles ago that people can start to use it, but we know it takes time for operators to get onto the latest releases; I'd have to check exactly which release first had unified limits. I think we have a few operators that can start to test it, and that's one of the things I'd like to engage with the operators on.
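For anyone who wants to experiment with the unified limits Sylvain mentioned, a minimal sketch follows, under the usual documented shape: quota limits live in Keystone as registered limits named after resource classes, and Nova is pointed at the unified-limits quota driver. The driver path, resource names, and the default values here should be verified against the Nova unified limits documentation for your release:

    # Register per-project defaults in Keystone (8 vCPUs, 32 GiB RAM are
    # hypothetical values):
    openstack registered limit create --service nova --default-limit 8 class:VCPU
    openstack registered limit create --service nova --default-limit 32768 class:MEMORY_MB
    # Then switch Nova's quota driver to the unified-limits one:
    crudini --set /etc/nova/nova.conf quota driver nova.quota.UnifiedLimitsDriver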
Another thing we did at the Berlin Summit during the Forum, which I would also want to do during the Forum in Vancouver, is engage with operators on their pain points. That's what we generally do. What we did in Berlin is that they told us their pain points, and after the Forum we checked whether we had existing bug reports related to them, because in general some exist and some don't. If there wasn't an existing bug report, we created it, and we tried to fix them after that. So I hope that if we can continue to do it this way, it will help our operators much better than just waiting for them to test.

Who else? For Ironic, it's interesting. I don't necessarily have a specific question I'd like to ask operators, but what I've been seeing as I go out to conferences and talk to people in the OpenStack and Kubernetes communities is that there are a lot of people running Ironic who don't even realize it: people using things like Metal3 who don't know that Ironic is underpinning it, people who might be using a vendor-branded product that's powered by Ironic under the hood. So for me it's less about what I specifically want to ask these operators and more about bringing more of them under the roof here. We're here to support them; we don't care how you consume Ironic, whether you use it as part of the OpenStack integrated release or bolted onto some other thing. We want to hear about those use cases, because that's going to help us determine what to focus on as we move forward. So that's what I think about when I think of what I want to hear from operators: I want to grab those operators who don't necessarily know they're running OpenStack and say, yes you are, welcome to OpenStack, congratulations, this is what your stuff has been running on for a while, and then also see what they like and dislike about it, and either take that feedback to improve, or maybe even talk to one of our partners in another community and help them integrate with us better to resolve those operator issues.

Yeah, to all of you who think OpenStack is dead: you've been using it all the time, you just didn't know. I had more than one of those conversations at the Southern California Linux Expo a couple of weeks ago, literally that whole thing of someone talking about why they would use OpenStack for server provisioning when they use Metal3. I got to tell them all about how Ironic is actually what they're using there, which is exciting and sad at the same time. So I hope we get more folks realize what's actually running in the substrate their environments are built on. Yeah, there's a lot of visibility that can be gained, I think. Awesome.

Yeah, I think we would also focus on listening to the operators. We have taken approaches like preparing questions for previous operator hours, and we have been working on addressing those pain points, but I think we will more likely be focusing on listening to what they have to say, trying to come up with solutions, and hearing about the deployments that are using Manila. So that's pretty much what we would be focusing on. Awesome.

So we have this user survey feedback every year, and at every PTG that lines up with it, we ask people about it. The responses are pretty vague; they say backup needs improvement, or volume create needs to be faster, or something like that. So I would really like to ask for proper details: what exactly is the problem, and how can we tackle that issue? So yes, just a few questions drawn from the survey feedback.

If I can add one last thing about what I would like to discuss with the operators: for me it's important to remind them that the community is not only the developers but also the operators, and nothing can happen if they don't report to us the problems they have. That's one important thing. We know we have Launchpad for filing bugs, and we know some operators sometimes try to fix their problems by themselves, but I would like to explain that whatever happens, the community is there, and engaging with the community can help them more than they think. It's not only about talking every six months or every year; it's about finding a way to have these kinds of productive discussions, not yearly but, I would say, weekly. We started to discuss that in Berlin but didn't have time. I understand that not everyone can work on a project the whole week, that's totally understandable, but if we can have a way for operators to know what Nova is working on at the moment, and a way for them to report the problems they have, that would be awesome.

So basically, across all of the projects represented here, and I'm sure all of the projects in general, we want more feedback from operators, and it doesn't have to be on Antelope either, because, as we've talked about, it's not easy to upgrade right away when the new release comes out. Any feedback on any release, really, to make sure that the issues operators and users are hitting are being addressed, because if we don't know about them, we can't fix them, and we can't make your experience better. So, a formal invitation for all users and operators to please get involved.
You are a part of the community just as much as any developer; you're using the software and interacting with it every day, so we want to make it a better experience for you. So it looks like we did get one question from the audience, yay, and it is for Jay, I believe: persistent coherent memory models, an example being CXL, have the potential to radically simplify and optimize virtualization stacks even when running on legacy hardware; any thoughts or work towards this?

So I don't have specific knowledge of this, to be clear. I was one of the track chairs for hardware enablement, and I can tell you we definitely got talks submitted about CXL. I don't know of any specific movement happening in that direction. This is the sort of thing where your voice needs to be heard; if I had heard this question before going through some of the hardware enablement talks, I might have been more apt to pick one of those talks to make it in. I honestly don't remember whether we picked any of them, but that is something that is likely only going to happen if people show interest in an open source based solution to something like that. Go ahead.

I know, I was just going in the same direction. I think we discussed CXL a couple of times during previous Forums. It's not really something we want to enable in Nova at the moment, but we know the hardware, and if, for example, that person wants to talk with us about the use case they would like to have, I'd be more than happy to hear about it. One way of engaging with us, as we said, and this is probably the best time to say it, is that we have the virtual PTG next week and we have the operator hours. So if someone wants to use some CXL hardware, and I see they're asking about virtualization stacks, I'm more than happy to hear about the use cases and what they would like to have.

Yeah, well, hopefully we answered your question a little bit. If not, we would love to chat with you more about it; as Sylvain mentioned, we have the virtual PTG next week, and we have the Forum and the OpenInfra Summit coming up in June, which I will talk a little bit more about in a moment. I don't think we have any more questions from the audience. Were there any other things that any of you panelists wanted to mention today before we close out? Thanks for moderating this, Kendall, you've done a great job. Thank you. All right, well, thank you so much for coming today. A huge, huge thank you to Sylvain and Carlos and Rajat and Jay for giving us the lowdown on Antelope today, and thank you to the audience for asking some questions; hopefully we'll get more and see more development in the future. So I think we have a couple of slides for you with information about the Summit coming up. Please don't forget to join us June 13th through 15th for the OpenInfra Summit in Vancouver. Registration is currently live, and prices will go up on May 5th, so make sure to register before then. We have a lot of excellent content in store. The CFP for the Forum is actually still open, so if there are topics like CXL or anything else that came up today, please get those submitted; the deadline is around mid-to-late April, so you still have a little bit of time. We have a lot of awesome members joining us there and participating, telling us about their use cases, and we hope to see you there. If you're interested in being a sponsor of the event, please contact Jimmy at openinfra.dev. Then, obviously, we have the Project Teams Gathering; it might have gotten a little fuzzy because we jumped back and forth, but we are having a virtual Project Teams
Gathering next week, March 27th through 31st. Registration is free, and we would love to see you there. We have over 35 teams participating, I think, or maybe exactly 35 teams, everyone from StarlingX and Kata to specific OpenStack services like Nova and Cinder and Ironic and Manila, which you all heard from today, and many, many more. So please register and join us there next week. And then, even more exciting, we will have the in-person PTG during the OpenInfra Summit, and registration for that PTG is included with the Summit, so when prices go up on May 5th, they go up for everything. On April 6th, in two weeks, the week after the PTG, we will have our next OpenInfra Live episode on large-scale operations, a deep dive on Société Générale, so please join us for that. And remember that if you have an idea for a future episode, we want to hear from you; please submit your ideas to ideas.openinfra.live, and hopefully we will see you on a future show here. And finally, I would really like to thank the OpenInfra Foundation members again, because we couldn't host shows like this without you, and we wouldn't have awesome OpenStack releases, and releases of all of our other amazing OpenInfra projects, without you. If your organization would like to join the OpenInfra Foundation, take a look at openinfra.dev/join, and we look forward to seeing you next time on OpenInfra Live. Have a good day, everybody. Bye.