Thanks for joining us today and welcome to OpenInfra Live, the Open Infrastructure Foundation's weekly, hour-long interactive show where we share production case studies, open source demos, industry conversations, and the latest updates from the global Open Infrastructure community. We are live here every Thursday at 14:00 UTC, streaming to YouTube, LinkedIn, and Facebook. My name is Kendall Nelson and I will be your host for the day, here to talk about all things Xena, the 24th on-time OpenStack release. As I mentioned, we're streaming live and we'll be answering questions throughout the show, and we'll try to save some time at the end of the episode for Q&A, so feel free to drop questions into the comment section wherever you happen to be watching us and we'll answer as many as we can.

But before we get started, I've got big news this week that I am so excited to share. Yesterday, we announced that we're going back to Berlin next year for our first in-person Summit since Shanghai, actually. Mark your calendars for June 7th through 9th, 2022, because we hope to see you there. Registration and sponsorships will be available next month, and we're looking forward to hopefully seeing everyone soon. Stay tuned for more details.

Now, let's get going. We have a very packed episode today with lots of people, and we're very happy to have you all here. Kicking things off, we have Ghanshyam Mann doing an overview of Xena. Welcome, Ghanshyam.

Thanks, Kendall. Hi, everyone. My name is Ghanshyam Mann. I'm an OpenStack Technical Committee member and its chair, and I'll be giving a brief overview of the OpenStack Xena release. It's the 24th on-time release, and shipping on time is really a great achievement by all the contributors involved as well as our awesome release team, because we have around 381 deliverables, and taking care of all of their releases on schedule is no small feat. Xena brings powerful hardware support and deeper project integration across OpenStack, and we have a lot of community leaders joining this episode who will talk about the details. If we look at the contribution stats, we had 15,000-plus code changes in Xena from 680 developers across 125 organizations. That's a huge number, even in the pandemic situation, and having support and involvement from 125 organizations is really great for our community; it's why OpenStack remains one of the most active open source projects in the world. So thank you to all the contributors, developers, and especially the organizations helping with every release of this software.

In terms of community-wide goals: what is a community-wide goal? It's a common change we make across all the OpenStack projects. It can be there for code consistency, which helps us maintain the code, or it can be a user-facing feature. In the Xena cycle, with the pandemic and everything, we didn't have a community-wide goal as such, but we did spend some time finishing previous goals, like the contributor guide work, and project teams concentrated on the features they had planned for Xena. For the Yoga cycle, though, we have one important goal defined and selected: consistent and secure default RBAC. You might know that many projects have already finished the secure RBAC defaults, but quite a few are still pending, so we are targeting to finish that work, because once all of the OpenStack services have the new RBAC, it will be much easier for operators to migrate from the old policies to the new ones. We are also discussing, day to day, the challenges operators face in adopting the new RBAC, and those things will keep improving in the next cycle. Whether we select more goals than that is something we'll be discussing at our PTG, so join us if you're interested in helping with or discussing the community goals.
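(As a quick illustration of what that migration looks like from the operator side, here is a minimal sketch only: the two option names below come from oslo.policy, but whether and when to flip them for a given service depends on that service's own RBAC status, and the Nova config path and service name are just examples.)

```bash
# Minimal sketch: opt one service (Nova, as an example) into the new
# RBAC default policies via its oslo.policy options.
cat >> /etc/nova/nova.conf <<'EOF'
[oslo_policy]
# Use the new "secure RBAC" policy defaults instead of the legacy ones
enforce_new_defaults = true
# Also honor token scope (system- vs. project-scoped tokens)
enforce_scope = true
EOF

# Restart the API service so the policy settings take effect
# (the unit name varies by distribution)
systemctl restart openstack-nova-api
```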
A few other things to note from Xena. One is that we have a new project called Venus. It's a unified log management service that collects logs, indexes them, and generates alerts and reports. If you're interested in joining or using this project, find the community on the #openstack-venus IRC channel, or start a discussion on the openstack-discuss mailing list.

Also, the OpenStack IRC network is now OFTC. You might know this already, but if you're wondering where our community went from the old network, Freenode: we have the same channel names, and almost everyone is registered there, so join us on the OFTC network. How to connect and all the settings can be found in our contributor guide.

And one more thing: help needed. We have published and communicated this in many places, like the OpenInfra newsletter and the openstack-discuss mailing list. Our infra services include an ELK stack, that is, Elasticsearch, Logstash, and Kibana, which OpenStack depends on heavily for day-to-day software development, and we need help maintaining it. If you or your company would like to help, please reach out to us on the openstack-discuss mailing list. And with that, I'll hand it back over to Kendall.

Yeah, I can't believe we're already at another release. They just keep sneaking up on us. We'll move forward now with 2021 User Survey information from Allison Price.

Yeah, I'm really excited. We have an early preview of some of the 2021 OpenStack User Survey data. We are planning to publish a full report in November, so stay tuned for a deeper dive into OpenStack trends and what OpenStack users are up to. But what I'm really excited about is that we have seen incredible growth this year. Year over year, we have seen a 66% increase in cores, and we now have 25 million cores of OpenStack in production. That is incredible. Last year I was excited to announce 15 million cores, and this year we're at 25 million. Just a few weeks ago, China Mobile talked on OpenInfra Live about their 6 million cores in production, and we now have seven users running over 1 million cores in production. We're really excited because we're going to be publishing case studies with them, and they'll be featured at an event later this year that I'll talk about in a minute. I just can't get over how incredible this growth is. We've seen over 100% growth among organizations with deployments of all sizes, some with fewer than 10,000 cores and some up to 6 million. So I'm really excited to share more around that in November. For now, I think that with this 24th milestone we can celebrate 25 million cores, which would not be possible without the upstream community's constant work. So thank you all, and I'm really excited to share more information soon.
Another big area of growth is public cloud data centers. There are now 175 OpenStack-powered public cloud data centers around the globe, which is incredible. We have a lot of different members who keep coming on board and supporting us, running OpenStack-powered public clouds of all sizes in every region of the world. It's really exciting to see a footprint that rivals the hyperscalers and really puts OpenStack on the map. And I can guarantee you, we will have an actual map soon that shows where all of these data centers are. We're definitely excited to see this growth and to keep telling this story with the community.

The other piece of the survey I found really interesting is that not only do we have all of these growing clouds that have been around for years, but in the last 18 months, 100 new clouds have come online. So we have brand-new deployments alongside the growing ones, and it shows that, more than ever, OpenStack is very much alive. There are operators all around the world deploying OpenStack at all different kinds of scale; some are just coming online and getting their feet wet, and others are bringing yet another cloud online. It's really cool to see such a wide range of deployments, and getting to share those stories soon is something I'm personally really excited about.

Alongside Xena this week, we've been talking to a lot of press, and we have been getting some really great coverage. One of the things I want to encourage everyone to do is check out some of the articles that have been published. Kendall, who's hosting today, provided an overview of Xena to our press community, and they were also astounded, not only that we have 24 releases now, but by this incredible growth and momentum coming out of the Open Infrastructure community. So I personally just want to thank everyone who puts in all of the work, between building OpenStack, running OpenStack, and supporting the community as an OpenInfra Foundation member. We just had Microsoft join this year, along with dozens of other organizations, and we'll be hearing from a lot of them soon.

Speaking of our members, I do want to say a huge thank you to them. This OpenStack growth is not possible without the support of the OpenInfra Foundation members. We rely on them to continue growing the community and this global footprint of OpenStack in production. So if you and your organization want to be involved in bringing the next 25 million cores of OpenStack into production, please go to openinfra.dev/join. We'd love to talk to you, learn about your OpenStack strategy, and get you plugged into the community.

Later this year, another opportunity for your organization to share what you're doing with Open Infrastructure, and OpenStack specifically, is the Superuser Awards. We will be announcing the winner in November, but the deadline to submit is next week, on October 15th; I think that's a week from tomorrow, sorry, quick math. So if you have a customer or an internal team that is contributing to Open Infrastructure, please submit your organization. I love learning these stories, and the community really loves learning what other organizations are doing. It's a really inspiring way to share what you're doing with Open Infrastructure and to get more folks engaged at the highest level. So if you have any questions, you can reach out.
But nominations are open for one more week. It's a highly coveted award that we've given out since 2014, so I'm excited to welcome the new winner next month. And I keep talking about next month; what we're doing next month is the OpenInfra Live: Keynotes. You're tuned in right now to our weekly show, but on November 17th and 18th we're doing a special edition that will be our big annual event this year. Kendall mentioned we're going back to Berlin next year, but this is going to be the virtual get-together for the entire global Open Infrastructure community. Registration is live and free at openinfra.live, so please join us. We have some great sponsors lined up and some great speakers, including some of those one-million-core users that we're excited to talk about. It's definitely going to be a packed agenda, and I couldn't be more excited to kick it off with the global community. But for now, I think we're ready to get back into Xena, so I'm going to pass it back to Kendall Nelson to run the show.

Thank you, Allison, for all the awesome data. I cannot wait to see more numbers next month. Hearing that there are so many people running million-core installations of OpenStack is just insane. We do have one quick question from the audience regarding Yoga, actually; where is it, I lost it in the list. Ganesh Khadam from LinkedIn asked: how can one get into contributing before the Yoga cycle? So maybe Ghanshyam, as chair of the Technical Committee, did you want to answer this quickly before we jump in?

Yeah, sure. We have a lot of ways you can get involved in the community. One of the quickest: every repository has a contributor guide with details about each team's process, where the team lives, how you can contact them, and what their plans, feature roadmaps, and low-hanging fruit are, so you can get involved. Look for that, or reach out to the First Contact SIG, which can route you to any project you're interested in. Also, starting October 18th we have the PTG; every project has its own virtual room where they discuss what they're going to work on in the Yoga cycle. And this is the best time, because we released yesterday, so we have one week before the PTG where we'll all have a little more bandwidth to help new contributors. So ping us on the #openstack-dev IRC channel and we can get you involved wherever help is needed, whether as a developer, documentation writer, system administrator, or anything else. Many hands make for light work; we would love to have you join our awesome community.

And then maybe someday you can be on an OpenInfra Live episode, just like us. So, Xena; diving back into that. Basically, there were three main themes we saw throughout the Xena release, the first of which was integration among OpenStack projects. Here today to talk about how Manila worked on this is Tom Barron.

Thank you, Kendall. We can show the slide with Goutham's name on it, and I will explain that I'm not Goutham. Viewers will note that I am not Goutham Pacha Ravi. He is our PTL, who is taking a well-deserved break before the Project Teams Gathering starts up and the design work kicks off for the Yoga cycle, and since I used to be PTL, I'm filling in for him. Manila is the shared file system service for OpenStack, and it's a little over half as old as OpenStack itself.
One of the things that was missing, and we'll talk about technical debt later, but one of the things missing in Manila was inclusion in the OpenStack client and the OpenStack SDK. It wasn't done for us when those were developed, and we've had requests for it for several releases now, going back to when I was PTL, but it was hard to get momentum on it. Goutham was very creative this cycle, and as a result we now have support for almost all of the shared file systems API resources in the OpenStack client. So in addition to, say, openstack volume, openstack server, openstack network, and so on, there is now an openstack share set of commands for the shared file system service, and these resources are also available through the OpenStack SDK. This helps Manila integrate with the rest of OpenStack and gives users a unified experience.

I mentioned that Goutham was creative about doing this, and that ties into integrating with OpenStack in another way: we had an Outreachy intern, and we had three senior design students from Boston University sign up with Manila. Rather than just churning this work out with the existing people, a group of us designed how to do the API resources and how to integrate with the client, and our interns did the bulk of the coding. They learned to work with the community, they learned to use tools like Gerrit, and the benefit goes beyond Manila itself, because many of them may go on to work on other parts of OpenStack. We are continuing this in the Yoga cycle: we have a group of interns from Northeastern University joining us, and we're going to finish off the more esoteric resources that were still missing after this cycle.

Another big feature in this release was live migration of what are called share servers, effectively the virtual machines, vendor-supplied or otherwise, that provide share service. They can now be live migrated, and the tricky part of that was moving their network allocations around with them; we can do that seamlessly with Neutron networks as well. There are a bunch of other features we could go into, some of which we'll mention when we talk about technical debt, but I think that's pretty much the summary for now. Oh, I'll just say, in terms of participation: come over to Manila, get on OFTC and join #openstack-manila. We're a very friendly bunch and have a reputation for being a good, easy entrée into OpenStack, even if your ultimate interest is in another area. Thanks; back to you, Kendall.
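(To make the new client surface concrete, here's a minimal sketch of the kind of commands Tom is describing, assuming a cloud with an NFS-capable share back end and a default share type configured; the share name is made up.)

```bash
# Create a 1 GiB NFS share using the new "openstack share" commands
openstack share create NFS 1 --name demo-share

# The usual list/show/delete verbs work like they do for other services
openstack share list
openstack share show demo-share
openstack share delete demo-share
```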
Thanks, Tom. I can definitely agree that Manila is full of awesome people, not just Tom, obviously, but plenty of others as well. I've actually had the privilege of helping Goutham and Victoria Martinez de la Cruz mentor the students Tom mentioned, and it was very cool to see one of them go on to get hired by Red Hat once she graduated from Boston University. An awesome program and an excellent group of contributors over in Manila. Moving on, we have more discussion of integration among OpenStack projects, this time with Glance, and here to talk to us today is Dan Smith.

Glance is full of awesome people too; I feel like I need to say that now. I'm going to talk about Glance's new user quota feature, which uses Keystone unified limits. A quota on a resource requires two elements: a limit, and some measure of consumption of that resource, and usually projects have had to implement both of those elements themselves in order to provide a quota. Keystone now has centralized limit functionality, which lets you store the limit side of that equation in Keystone, in one place, and have the other projects use it as part of their quota enforcement. Keystone does this by storing two pieces of information. One is a registered limit, which is the definition of the limit as a named thing associated with a particular service, with a default resource cap; that cap applies to everybody unless the second piece is established for you, which is a project limit, a per-project, per-service override of that value. The benefits, once services adopt this, are a single interface and a single location, Keystone, in which to define all of those default caps and per-project overrides, plus consistent error messages and enforcement behavior, because all of the services using this functionality go through a library that talks to Keystone to do the enforcement. I'm really here to talk about Glance, but I wanted to point out that Nova is working on migrating to this as well, which gets us toward the goal of unifying this behavior and setup across multiple services.

So I just talked about Keystone, but the point here is that Glance, the image service, has never really had resource quotas. There are some limits you can set in the config, but those are "stop the bleeding" limits that apply to everybody, universally; they're really just meant to provide a backstop to prevent a runaway script from consuming all of the resources. They're not per user or per tenant. In Xena, Glance has implemented per-tenant quotas, and right out of the gate it uses the Keystone limits functionality to store the actual limit. In this arrangement, Keystone stores the upper limit, Glance calculates the amount you've used whenever you try to do something, and then it decides whether to allow or reject the operation based on the registered limit or your project limit stored in Keystone.

These are the limits we've got in Xena for Glance. image_size_total is the limit on the total amount of storage space that all of your active images can consume; this is the most straightforward one you'd imagine. If you're limited to five terabytes, then all of your images must add up to less than five terabytes. image_stage_total is similar, a separate storage-based quota, but only for things that live in the staging area. If you're doing an import to get an image into Glance, it goes first into that staging area, which is much smaller, definitely shared by everybody, and a more precious resource, so there's a separate storage limit you can set on images that live there. image_count_total is just a cap on the number of images you can have; sometimes you want this to make sure people don't snapshot their instances every day for years and end up with 10,000 images even though they haven't hit their storage limit. And the last one, image_count_uploading, is a limit on the total number of images that are inbound to Glance at any given point. Whether they're uploading or in the middle of an import, this is really just a throttle that lets you limit the amount of stuff any particular user can have on its way into Glance at once, whether that's regular image uploads or the total number of snapshot operations happening at any given time.
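(Here's a minimal sketch of the Keystone side of this, along the lines of the screenshots Dan walks through next; the project name and values are illustrative.)

```bash
# Register a default limit on the Glance service that applies to everyone
openstack registered limit create --service glance \
    --default-limit 2 image_count_total

# Override it for one project, raising its cap to five images
openstack limit create --service glance --project demo \
    --resource-limit 5 image_count_total

# Inspect what's defined
openstack registered limit list
openstack limit list
```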
The first chunk of the screenshot here is just listing the registered limits stored in my Keystone, filtered to the image ones. You can see those four registered limits; the service ID is Glance's, and each has a default limit that applies to everybody, except that, in the second chunk, you can see that for one project I have an override of image_count_total: anyone in that project, instead of being limited to the default of two, is limited to five. And the last little strip down there is what it looks like when you run up against this with Glance while trying to do an upload. It shows the standardized error message we get from a service that uses this library and unified approach: a limit message with the name of the resource, how much you've used, how much you've gone over, and how much you're trying to consume. So that's it for me.

Quotas, I know, are not the most glamorous thing in the world, but it's awesome that we're actually standardizing across the projects and making use of Keystone to do so, so thank you very much for sharing all of that. Our next big theme throughout Xena is support for advanced hardware features. We'll start with the Nova project and have Gibi talk about some of the things that happened there this release.

Yes, thank you, Kendall. This will be a common theme across the next couple of project leaders: we worked on integrating Cyborg-managed SmartNICs into OpenStack, and it has a long history. We started integrating Cyborg with Nova back in the Ussuri release, where we supported plugging basically any kind of Cyborg-managed accelerator into VMs via Nova flavors, with limited support for lifecycle operations on those VMs. In later releases, like Wallaby, we added support for more lifecycle operations with accelerators, like shelve and unshelve. But those accelerators could be actual SmartNICs, and there was a need for those SmartNICs to be represented as Neutron ports, because Neutron is our networking project. That required integration among three projects, Nova, Neutron, and Cyborg, to have a representation of the SmartNIC all the way through OpenStack. As I said, at first we supported Nova servers with Cyborg accelerators via a flavor extra spec.
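(As a rough sketch of that flavor-based path: the extra spec key below is the one the Nova/Cyborg integration documents use, while the flavor, image, and device profile names are made up.)

```bash
# Create a flavor and tie it to a Cyborg device profile via an extra spec
openstack flavor create --ram 2048 --disk 20 --vcpus 2 accel-flavor
openstack flavor set accel-flavor \
    --property accel:device_profile=my-fpga-profile

# Servers booted with this flavor get the accelerator attached by Cyborg
openstack server create --flavor accel-flavor --image cirros accel-vm
```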
But when the accelerator is a SmartNIC, we wanted to represent it as an actual Neutron port. For that we needed cooperation with Neutron, to add a new port attribute that stores information about which Cyborg device profile is needed for a given port. Then, of course, in Nova we needed to extend the scheduling and resource accounting code paths: when you create a VM or schedule it to a compute host, Nova needs to detect that one or more of the VM's ports has this device profile set, and then go talk to Cyborg to get the necessary information on how to attach the Cyborg accelerator to the VM. We still have limitations in Xena: you can create servers, restart them, pause them, and many simple operations are possible with Cyborg accelerators and Cyborg SmartNICs, but you still cannot do resize or migration, or attach and detach interfaces, on VMs with SmartNICs. That's what I wanted to share from the Nova perspective on this feature.

It's no easy thing to get three big projects like Nova, Neutron, and Cyborg to work together to land a feature like this, so thank you for all of your hard work, and to all of the project teams that made it happen. Continuing on with Neutron things and more hardware features, next up we have Lajos Katona to talk about what Neutron accomplished in Xena.

Hi, thank you very much. Actually, for the Cyborg integration we had no big job in this cycle; as Gibi mentioned, the bigger thing from the Neutron side happened in Wallaby, and that was the device profile. It means that when you create a port, you can define a new attribute for the port, the device profile, which names the Cyborg accelerator, and that is consumed by Nova, which can do the scheduling based on this information. It's now available in the API and on the CLI as well, so you can use it, enjoy your accelerators, and have scheduling based on them. That was all from Neutron on the accelerator integration, thank you very much.
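(A sketch of the port-based path Gibi and Lajos just described, assuming the admin has already created a Cyborg device profile for the SmartNIC and that your client is new enough to expose the option; the profile, network, flavor, and port names are made up.)

```bash
# Create a Neutron port that carries the new device-profile attribute
openstack port create --network private \
    --device-profile my-smartnic-profile smartnic-port

# Boot a server with that port; Nova sees the device profile on the port,
# asks Cyborg how to attach the SmartNIC, and schedules accordingly
openstack server create --flavor m1.small --image cirros \
    --port smartnic-port smartnic-vm
```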
Short and sweet, but definitely something people are pretty excited about, from what I understand. Moving on to the last part of this theme, Cyborg: we have Xinran Wang to talk about the hardware features developed in Xena. Welcome, Xinran.

Okay, thank you. I'll talk about the Cyborg side. The Cyborg-side implementation is responsible for device discovery and device assignment. The device assignment is pretty similar to generic device assignment, so I'd like to talk more about device discovery, which is a little different from generic discovery. As we know, for a generic PCI device the driver discovers it through sysfs, and some drivers use lspci directly, but for a SmartNIC we have to use a static configuration file, as shown on the slide. The configuration file has different sections for different devices; this example shows that eth2 and eth3 are connected to physnet1 and are loaded with a profile named GTPv1. The NIC driver can read this configuration for discovery and write the device data into the Cyborg database.

Let's take the X710 NIC as an example. It supports the DDP feature, which means the NIC can be loaded with a program providing specific functionality. GTP is a kind of protocol, and the function here means the NIC is loaded with a program that allows packets of the GTPv1 protocol to be parsed and classified on the NIC instead of on the CPU, which accelerates packet processing and frees up CPU resources. If the NIC is loaded with another profile, the function name would be different. I've also pasted some links with details about the DDP feature, and there is a test report about the NIC driver; that report includes all of the operations needed to boot up a program with this NIC. And lastly, the admin should create the device profile, like we do for other PCI devices, so that the Neutron port can use it to select the right NIC. I think that's all from me.
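(Since the exact schema of that static configuration file is driver-specific, here is a purely illustrative sketch of the idea Xinran describes, two interfaces mapped to a physical network and a DDP profile. The file path, section name, and keys below are made up; check the Cyborg driver documentation for the real format.)

```bash
# Illustrative only: a static discovery config for a SmartNIC driver
cat > /etc/cyborg/smartnic_device.conf <<'EOF'
[x710_device_0]
# interfaces managed by this device entry
interfaces = eth2,eth3
# physical network the ports belong to
physical_network = physnet1
# DDP program loaded on the NIC
function = GTPv1
EOF
```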
Xinran, thank you so much for sharing all of that. It's very cool to see how the project integration works hand in hand with the hardware features developed this release, and how some of our services are better together, even though they're amazing on their own. If you're interested in following these discussions further, Cyborg, Neutron, and Nova will all be meeting at the Open Infrastructure Project Teams Gathering in two weeks. So thank you for sharing all of that, and now we'll dive into our third and final theme for Xena: cleaning up the technical debt accumulated during previous releases. First off, we have a discussion of the technical debt cleaned up in Cinder, from Brian Rosmaita.

Hi, I'm Brian Rosmaita. I was the Cinder PTL for Xena, and I'm going to do it again for Yoga, actually. If you could go back to the title slide for just a second; this is a perfect title slide for this segment. Okay, now you can go on; I'll come back to that in a second.

First, what is technical debt? The usual way it's thought about is: shortcuts you take to hit a delivery target. You've got to deliver, like we did for the Xena release yesterday, and the idea is that you'll come back later and make those shortcuts more robust, with more thorough fixes. So why is that bad? Well, there are always new delivery targets to meet, and if things are mostly working, you can address the previous shortcuts later instead of right away. And while you're doing that, you might add new shortcuts, and then you've accumulated more debt. Why is that bad? Because you just keep going, and the accumulated debt gets higher and higher. And the final question is: if that's bad, why don't you just pay it off? Well, here's the thing: if nothing specific is broken, it's hard to get people to pay attention to it. Users expect bug fixes and new features; that's what's important to them, and it's not exciting to say we're going to hold off on a couple of bug fixes and feature deliveries in order to spend some time on stuff that isn't broken yet.

If you go back to the title slide again, sorry to keep doing this to you: I don't know who put the slides together, whether it was Aaron or Kendall or Allison, but this is a genius slide, because if you look at it, it says "technical debt," colon, and then, what? When I saw that, I thought, oh, I'm supposed to add a snappy subtitle here; I'll get back to that later. So that was technical debt I took on. The other interesting thing is how you fix it. There are two ways: we could just edit the slide and get rid of the colon, and then it's a perfectly good slide; that's fast and quick. Or we might say, no, you're supposed to have some kind of snappy subtitle about why it's important and how it makes OpenStack great or something like that, which would take longer to do but might be a better fix. All of those kinds of considerations come into play. Okay, you can go ahead and skip ahead, please.

Just a bit more about technical debt. I want to put in a plug, not just because I work for Red Hat, but because I want to get in this quote from Tremaine Darby. Red Hat has a podcast, and the episode before the most recent one was about technical debt, so if you're curious about this topic and hadn't heard much about it, it's an interesting listen; it's aimed at normal people, not developers, so there's that too. Tremaine Darby, a software engineering manager at Red Hat, made the point that if you're cutting trees, sometimes you have to slow down and sharpen your saw. Even if you're cutting them down as fast as you can, you still have to account for the time to get your tools together and keep them in shape. That's a good point, and it's always hard to get managers to understand it, but that's a different discussion. Next slide, please.

There's also involuntary technical debt, because the dependencies of a code base are not static. A lot of coding these days is coding to APIs, where you use libraries that other people have created and tested; instead of reinventing things, you can reuse code, which is always great, but those things change. As libraries change, you may have to change how you consume them, or they may make different assumptions, or a library may simply stop being maintained, so sometimes this is forced on you. The ecosystem around the code base isn't static either: as everyone has pointed out in the earlier discussions, OpenStack is a community of cooperating projects, so changes in some projects may force other projects to make complementary changes; or not quite force, since we want to collaborate to make the overall project better. Next slide, please.

Now, about the technical debt Cinder addressed in Xena: three major things come to mind right away. We removed version 2 of the Block Storage API, which had been deprecated in Pike; if you know your alphabet, that was a long time ago, so we finally got rid of it. Why is that good? Otherwise we have to fix API bugs in two places, which is always bad; it's better to keep the code base small so everybody can stay focused on it. We also began the transition for database migrations away from a library called sqlalchemy-migrate toward the newer tool, Alembic. We more or less had to do that; it's one of those situations where sqlalchemy-migrate is no longer being maintained, so we need to move to the up-to-date way of doing these things, and we'll continue that work into Yoga. And we had a two-cycle initiative toward the secure and consistent RBAC that Ghanshyam mentioned earlier, which is a community goal for Yoga; we had to split it into two parts, so we completed part one in Xena and we'll get working right away on part two in Yoga. If you're interested in the details, please see the Xena release notes for Cinder, where we discuss these changes and link into the documentation where more is explained, particularly the way we're implementing secure and consistent RBAC in two stages.
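(In practice, the v2 removal mostly matters for anyone still pinning old clients or scripts to the v2 endpoint; a quick sketch of the before and after, using the standard client flags:)

```bash
# Requests pinned to the removed v2 API now fail against a Xena cloud:
#   cinder --os-volume-api-version 2 list
# Use the v3 API (the default for current clients) instead:
cinder --os-volume-api-version 3 list
# or simply the unified client:
openstack volume list
```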
Also, I just want to point out that the horse is the mascot of Cinder, and someone has proposed that at the PTG we give this horse a name. So if you're interested in that, you might want to join the PTG to get your voice heard. Next one, please.

We're an open source project, part of a big community, and everybody listening to this is part of that community too, whether you consume OpenStack or are just interested in it. So in addition to us having technical debt, you do too, particularly around documentation. We try to keep it up to date, but new features aren't always documented as clearly or completely as you'd like, because we're really busy getting them implemented, tested, and ready for release. We do have design documents in the cinder-specs repository that describe new features, and the release notes for each release try to explain what the features are and how to use them, but that material could be turned into genuinely good documentation if anyone's interested. What's nice about that is you don't have to be a hardcore Python coder to do this kind of thing; you just need to be somebody who's interested in the feature. If you see something in the documentation that isn't clear, and you work through it and clarify it, that helps all the other users, and the team is willing to work with you on it. It's just that we're mostly developers, not writers, so we don't spend as much time on this as we would like. Anyway, the technical debt we have belongs to pretty much everybody, so you have an opportunity to help reduce it. That's all I've got, thanks.

Thanks for joining us, Brian. It sounds like documentation in Cinder is an easy place to get started for those of you out there wanting to get involved in OpenStack; the Cinder team is awesome as well, and that's actually where I started. Continuing with the technical debt discussion, we have more Neutron information from Lajos again.

Yeah, thank you very much, and I would like to thank Brian as well for the nearly academic introduction to technical debt. I collected a few things from Neutron's last few cycles, so I'll use the opportunity to advertise some things that happened around Neutron in the last year. It's really true that these things can have a long history. For example, some of the items I listed here, like changing the callback notifications from notify to publish and moving from unstructured kwargs to callback event objects, were started back around the Newton cycle, and it was only in this cycle that we finished them. It was a really long story, and I think that's a bit specific to Neutron, because Neutron supports many drivers and has a really interesting ecosystem of stadium projects, networking projects more or less managed by the Neutron team, and it's quite hard to have every one of those projects stay on the same code base and the same libraries without breaking any of them. Anyway, we finished this, and now we have a unified callback system in Neutron and the whole networking ecosystem.

The next one, the change to the new engine facade for database operations, is the same story: it was started a long time ago, and we finally finished it at the beginning of this cycle, as I remember.
Doing some historical digging, I found the original RFE that proposed changing the DB operations, and I think this sentence from it is really useful for understanding the change: cleaner, less problematic, higher-performance DB transactions, which is what we now have. So it has probably helped make Neutron operations a little faster. We can jump to the next slide, please.

This is related to the previous one: in this cycle we had a lot of performance-related changes as well, which is again a typical technical debt collection. The typical thing is that everybody wants to push their features and fix their bugs, and it just happens that things get slower and slower, and performance drifts from terrible to more terrible over the cycles. We had a lot of fixes, mostly from Oleg, who had the time and the mind to find the root causes of a lot of these things and fix them. I actually compared a few CI job runs from Wallaby and from Xena, and it seems that for some operations, like port listing, subnet creation, and subnet listing, we have 10 to 20 percent better times. So it's really a good thing, I think, that these operations are faster now and things can happen more quickly for operators. We can move to the next one, thanks.

The next one is actually more useful for the community; it's not something users can see. Neutron is a big consumer of community resources, mostly through CI jobs: we have a big ecosystem of drivers, backends, and stadium projects, and it just happens that everybody wants to test their features, their backends, and so on. So it was a big effort to clean up the jobs and reduce the number of job executions, to keep resources as free as possible. But it seems this will be a long-term thing, because I just read the report from the TC that we are still one of the biggest users of infrastructure resources, so it will be a continuous, multi-cycle effort to keep an eye on which jobs we execute, and for which changes.

The next few things are more or less cleanups. We removed some long-deprecated support, and there was the community goal to move from rootwrap to privsep; that was again a multi-project thing, as the Neutron stadium projects had to change as well. And as a last one, perhaps the most visible: the python-neutronclient CLI code has been deprecated for a long time. If you executed neutron net-list or similar commands, you got an ugly deprecation warning, and now it's time to finish this: in the next cycle we will delete the neutron CLI code, because we closed the last remaining gap in the move to the OpenStack client. So if you use the OpenStack client now, you can do everything you previously could with python-neutronclient. That's it for Neutron and what we have done about technical debt, thank you very much.
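(For anyone still on the old CLI, the migration is mostly mechanical; a few illustrative equivalents, with NET and PORT standing in for your own names:)

```bash
# Old python-neutronclient CLI   ->  OpenStack client equivalent
#   neutron net-list             ->  openstack network list
#   neutron router-list          ->  openstack router list
#   neutron port-create NET      ->  openstack port create --network NET PORT
openstack network list
```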
Yeah, thank you for covering all of that. I know technical debt is another not-so-glamorous topic, but I don't know that there is a project anywhere in the world that hasn't accumulated some sort of technical debt, so the fact that we have taken the time to focus on it and actually address those issues is really good, and speaks to the stability and collaboration of our community.

Obviously, these were not the only things that happened in Xena. Other highlights for a variety of projects can be seen at the first link, releases.openstack.org/xena/highlights, and if you want to go even more verbose than that, you can read the complete release notes at releases.openstack.org/xena. If you're interested in diving in, reading about the new features, and deploying Xena, go check out the documentation at docs.openstack.org/xena, and otherwise, yeah, go download the software and get going with it.

Now that we've wrapped up Xena, we need to move forward and start looking at Yoga. So I'll invite all of our speakers today to turn their videos back on and talk a little bit about upcoming plans for discussion at the PTG, or just what's going on for Yoga. Whoever wants to start.

Yeah, I can start a little bit. If people are not aware, on the openstack-discuss mailing list most of the projects have posted their Etherpad links; the Etherpads are where we collect the topics to discuss per project, and you have the PTG schedule as well. In terms of the TC, I can say we have great topics to discuss: we're going to have a cross-community session with Kubernetes, where people from the Kubernetes steering committee will come and we'll discuss cross-community matters; one of the interesting topics we have is about introducing, or at least discussing, the project level in OpenStack; and a few more, like technical debt within the TC itself, in terms of having common and consistent things done across projects.

I can just say, I know that in Glance the secure RBAC work, which is both a feature and technical debt, was a big part of Xena and will be a huge part of Yoga, and it will obviously be discussed at the PTG.

Yeah, besides RBAC, which is also on the agenda of the Nova PTG sessions, we will talk about continuing the integration with Cyborg on the vGPU part, and also about the integration with Manila via virtiofs support in the virt drivers. So many things are on the agenda for next week.

Gibi, thank you for mentioning that. We're very excited about the virtiofs work, which will integrate Manila with Nova better; it will also help us with edge topology situations, and it will help us make Ceph networking to storage more secure. I'm not going to list everything; there are many, many things, and this is a strategic linchpin for us, so if you're interested, come talk to us on the channel or come to the PTG. I'll also mention that a number of people are starting to introduce FIPS compliance, and we'll be talking about that, so that's probably of interest even outside the U.S., since it may matter to your customers.
And in Cinder, as I mentioned earlier, we're going to continue working on the secure RBAC work and complete it. We don't have anything else major planned; we're in a similar situation to Neutron in that we have a lot of third-party drivers whose backends we can't test ourselves, so we rely on the vendors to run all the tests for us. They've been good about that in the past, but they go through peaks and valleys of compliance, and we're hitting a bit of a valley, so we're going to focus on revitalizing that community and getting them running their CI, so we can check all our code on every check-in like we're supposed to.

Awesome. Were there any discussions coming up with regard to Cyborg in Yoga that you wanted to talk about, Xinran?

Yeah, for Yoga we plan to add more device drivers, and we also have a cross-project discussion with the edge computing group during the PTG.

Awesome. There will be lots and lots of good discussions at the PTG, and I think it's not this week but next week; jeez, time flies. Time also flies because we're just about out of time in this episode, so I want to thank all of our awesome speakers and presenters today. We really appreciate you joining us, and thanks also to the audience for the questions you've asked throughout the show. I absolutely cannot wait to see all of you awesome people in Berlin in the upcoming June; maybe this time we'll be able to go to some beer gardens and chow down on Currywurst and all of those good things Berlin has to offer.

Next week we have an awesome episode lined up that we're super excited about: join us as we discuss large-scale OpenStack and Neutron scaling best practices with folks from Exion, Ericsson, Red Hat, StackHPC, VEXXHOST, and Bloomberg. Another very full episode; it will definitely be one you don't want to miss, so make sure to subscribe on your preferred platform. Also remember that if you have an idea for a future episode, we want to hear from you; please submit your suggestions to ideas.openinfra.live, and maybe we'll see you on a future show. So mark your calendars, and we hope you're able to join us next Thursday at 14:00 UTC. Thanks again to today's guests, and see you all next week on OpenInfra Live. Bye!