Sorry about all that. Welcome to the Nova project update. Today I'm going to talk about the new changes and features in Rocky first, then what we're working on in Stein, then some information about how to contribute and get involved, and we'll have a little bit of time for questions at the end.

First, what is Nova? Nova is a compute service. It provides the REST API and components for provisioning servers, both virtual machines and bare metal when used together with Ironic. Nova has been part of OpenStack since the Austin release. We had 255 contributors during the Rocky cycle, and our latest adoption numbers, from the 2018 user survey, are that 94% of production deployments are running Nova.

You may be aware that during Rocky we were piloting a new review process that we called review runways. The basic idea behind runways is that approved blueprints can sign up in a queue, and when a blueprint goes into a runway, the core reviewers all converge on the blueprints occupying the runways. The idea is to bring a little more organization, with everyone looking at the same reviews at the same time, so we're more effective at completing efforts that have had to be re-approved cycle after cycle; you may have experienced things like that. We also moved the spec freeze from milestone 1 to milestone 2 because of runways; the thinking was that since we complete items via runways, we would allow more time for approvals and make room for more blueprints to be approved and go forward. We've reflected on how runways went for Rocky, and overall I think the effect was positive.
It helped complete some blueprints that had experienced repeated re-approvals over previous cycles, efforts like trusted image certificates and moving console token authorizations from the nova-consoleauth service to the database backend. A few stats, which were also posted to the dev mailing list: for Queens, the max number of approved blueprints was 53; for Rocky we approved 72, so we were much more ambitious in the amount of work we took on. Final completed numbers were 42 for Queens and 59 for Rocky, but interestingly the completion percentage was similar, around 80%. We felt that's a pretty good completion rate, considering that sometimes blueprints don't get completed because the developer gets pulled into higher-priority work at their job and can't spend the time on the blueprint; that 20% allows for that. So based on our experience in Rocky, we're going to continue with the runways process in Stein. Same deal: spec freeze at milestone 2, which is January 10th; that's the deadline for approving new specs and blueprints.

Getting into the new features for the Rocky release: we started leveraging a new Neutron port binding API to minimize network downtime during live migrations. Another thing you may be familiar with is that volume-backed servers used to report and consume local storage on compute hosts; that has finally been fixed. New servers will no longer report root disk usage, and existing servers will be healed during move operations. Several nova-network-specific REST APIs were removed; we've been working toward removing nova-network over time, as it has been deprecated forever, so during Rocky we removed the os-fixed-ips, os-floating-ips, os-virtual-interfaces, and os-fping APIs. We also have a new nova-manage db purge
command, and a new option for nova-manage db archive_deleted_rows to purge. What this does is actually delete the information from the shadow tables so that you get full deletion; it used to be that you would archive and rows would just pile up in the shadow tables, and now this will clear that out for you. There's a new option for nova-manage cell_v2 update_cell to disable scheduling to a cell. This is useful when you want to perform maintenance on a cell and take it out of the rotation; you can use this nova-manage command to disable it.

Trusted image certificates is a new feature supported as of Rocky. In Glance you can sign images using certificates, and then on the Nova side you can provide a list of trusted image certificate UUIDs, saying "these are the certificates that I trust." When you go to boot or rebuild a server, Nova will validate the image's certificate against the trusted certs you provided, so you get extra security that way. There's a new nova-manage placement heal_allocations command for caching scheduler users to populate placement ahead of a migration to the filter scheduler; if you're using the caching scheduler, you can use this command to start populating placement with allocations. And the placement service now supports granular RBAC policy rules configuration, so you can use the traditional policy file system to control access to the placement service APIs.

This is one of the things I mentioned earlier that benefited from runways: console token authorizations have moved from the nova-consoleauth service to the database backend, and with this I wanted to show you a picture of how the deployment topology needs to change. Previously, in Queens, you had to run your console proxy globally for the deployment, along with your nova-consoleauth service. In Rocky, because the token authorizations are
stored in the cell databases instead of in the nova-consoleauth service's storage, you will run your console proxies per cell. This helps in a multi-cell deployment anyway, because you don't have one console proxy dealing with all of the traffic for your whole cluster; you have one per cell.

Here's a list of the new microversions we had in Rocky. The first one exposes flavor extra specs in the flavors API GET, POST, and PUT responses. This is a parity thing: there is an extra specs API for flavors where you could get the extra specs, but they weren't included in the other flavor API responses, even though showing server details would show you the embedded flavor's extra specs; this closes the gaps where you weren't able to get extra specs from other flavors API calls. Microversion 2.62 adds the host and hostId to the instance actions GET API, which is useful for correlating failed instance actions with the host on which the failure occurred; it's an operational improvement that came from public cloud operators. As I mentioned earlier, trusted image certificates are now available in microversion 2.63. 2.64 adds policy and rules to the server groups GET and POST APIs, adding the ability to attach more advanced rules to a policy; now you can do things like a max-servers-per-host rule with your anti-affinity policy, and being able to associate rules with a policy opens up the possibility of adding different rules as people come up with new use cases. And finally, 2.65 adds support for aborting live migrations that are in queued or preparing status; previously you could only abort a live migration if it was already running, and now if it's queued up and waiting and you want to cancel it, you can do that.

So, moving on to what we are working on in Stein. If you're familiar
with previous Nova release cycles, you'll know we used to run a priority-setting exercise every cycle. For Rocky we thought that, since we had the runway system, runways represented the priorities at any given moment, so we didn't do a separate priority-setting exercise. But what we found at the PTG, when we discussed it in the Rocky retrospective, was that as a team we didn't really have cohesive clarity on the kinds of user-facing changes we wanted to land for the cycle. Based on that discussion, we decided to combine the two: we'll do review runways, but we'll also have cycle themes. The cycle themes are the user-facing enhancements we've set as goals for the cycle to deliver, so that throughout the cycle, even though we're using runways to focus review priorities, we still keep in mind which user-facing changes we're working toward as a whole. They're documented at this link, and I've listed them here as well.

The first theme is multi-cell operational enhancements. One of the things you may have heard about is handling of down or poorly performing cells. With cells v2, the access pattern of the deployment is such that the Nova API needs to talk directly to cell databases, and this imposes a lot of demand on cell databases that was not the case with cells v1, if you're familiar with that. To become more robust there, the folks at CERN have been driving an effort to add cell resiliency to the code base, so that if a cell is not responding within a certain amount of time, or is down, Nova will gracefully handle that and at least return partial results instead of just failing to respond. Another thing we're working on is cross-cell cold migration. Currently you can only migrate within a cell, so work is being done to be able to migrate
from one cell to another, and this involves figuring out how to move the Neutron ports and the volumes and all of that, so it's a pretty big effort. The second theme is improving the boot-from-volume experience. You may be aware that people have been asking for a long time to be able to specify a volume type when creating a server, and we finally added that. We're adding more robustness to boot from volume because traditionally it hasn't been a fully supported, first-class feature, and we're working to close those gaps. Another part of that is the ability to attach and detach the root volume, and volume-backed server rebuild is the last part of what we're working on for boot from volume. The final theme is enabling compute hosts to upgrade and coexist with nested resource providers, for multiple vGPU types. That may sound kind of nebulous, but the basic idea is that our current vGPU support uses a flat topology in placement, and in order to support multiple vGPU types we need nested resource providers on a compute host, because the compute host is the root resource provider. So we're working on being able to migrate from that flat topology to a nested one, vGPUs in flat land to vGPUs in nested land, and once that's done we'll be able to support multiple vGPU types on a single compute host.

Other improvements we're working on this cycle: the placement extraction. You may have heard that we've extracted the placement code into its own repository; it's going to be its own package, and we've been ensuring that upgrades can go smoothly from a system where placement is integrated in the Nova code base to a topology where it is a separate package and code. The current status in the plan is that we're implementing and
testing the upgrade step in TripleO and OpenStack-Ansible; there are folks working on implementing those upgrade steps, because as part of the placement extraction we want to make sure the upgrade works smoothly in the major deployment tools. Another effort we're working on this cycle is bandwidth-aware scheduling, being able to express that you have network bandwidth requirements when you boot a server; we've landed some of that code already this cycle. Another thing we're looking at doing in Stein is moving to Keystone unified limits and the oslo.limit library for enforcing quota. As you may know, quota limits APIs were added to Keystone semi-recently, and we want to move to that as the more modern way to handle quota limits and enforcement. Another thing we're working on is restoring the ability to set overcommit per aggregate. That accidentally went away in Ocata, when the aggregate core, RAM, and disk filters stopped being honored; I sent an email about that back when it happened, and we've been working to restore that ability with placement, and we have a plan for it now this cycle. Another is adding configuration for the maximum number of volumes allowed to attach to a single server. You might have run into this: there's a limit of 26 on the number of volumes you can attach to a single instance using the libvirt driver, so we're going to make that configurable in nova.conf per compute host, so you can tune it for your own environment and choose what number of volumes is appropriate. There's support for emulated virtual TPM, the Trusted Platform Module; that's a blueprint some folks are working on this cycle. NUMA-aware live migration is something we worked on last cycle, but it didn't land, so I think we're going to work on that again this cycle and try to get it done. And then finally, AMD SEV-encrypted
instances. Currently the memory of virtual machines is stored in the clear; this AMD technology, Secure Encrypted Virtualization, encrypts the memory of the VM, and somebody is working on adding that this cycle.

As for cross-project work we're doing this cycle: we're working with the Cinder team on a new re-image API to support the volume-backed rebuild I talked about earlier. With Neutron, we've been working on the bandwidth-aware scheduling; the folks working on that have a feature demo on Thursday, if you're interested in checking that out. With the Keystone team, we've been working on transitioning Nova to unified limits and oslo.limit; there's a lightning talk on Wednesday about unified limits and oslo.limit if you're interested in learning more about how that works. We've been working with the Ironic team to leverage Ironic conductor groups in Nova to partition nova-compute services for a group of Ironic nodes. Continuing with cross-project work, we've been working with the Cyborg team on trying to design the Nova interaction with Cyborg; we've been actively reviewing a spec for that, which you should check out in the nova-specs repository in Gerrit if you're interested, and there's a related presentation on Thursday about Cyborg and everything they've been working on. And finally, across multiple projects, we've been trying to work out an approach for transfer of ownership of resources. If you've had a situation where you would like to transfer an instance from one tenant to another, that's what this is all about, and it involves all the projects because there's moving Neutron ports, moving volumes, all those sorts of things. There's a forum session about that on Thursday that you should definitely attend if you're interested.

Beyond Stein, these are the longer-term efforts that we may not
get to this cycle but that are on the radar. Accelerator management is another thing we're going to be working on this cycle and beyond. NUMA modeling with placement: currently we have NUMA modeling in Nova, but we're not able to combine things like vGPU with NUMA; we need to work on modeling NUMA in placement in order to do things like that. Affinity modeling in placement: you might be aware that we have a late affinity check for dealing with races during parallel requests for affinity, and that late affinity check can't work with multiple cells and split message queues, so looking forward we really need to be able to model that in placement to handle it properly. Another thing that has come up in the edge computing space is the ability to partition resources in placement. This is the situation where you have a shared placement service with, say, multiple Novas talking to it and putting allocations in it; currently there's no way to tell whether those allocations are my Nova's allocations or somebody else's; you're just going to get all of them if you ask, for example, how many vCPUs am I using. Being able to partition that is going to be required as we go forward with edge use cases. And finally, proper handling of shared storage; this is an old, painful one I'm sure everybody knows about. We have done some of the work to handle this with placement, but there's still a lot more to do, so that is definitely something we're going to be pushing for going forward.

Now, how to give feedback. We're a very interactive community, and we really want people to get involved and talk to us. One way you can do that is to report bugs. If you experience a problem with Nova, open a bug and then ping us; if nobody's responding to your bug, just come into the channel and tell us about it. We definitely want to hear from you. We're very active on the mailing lists, and
traditionally those have been openstack-dev and openstack-operators, but if you've seen the announcement, they're going to combine all the mailing lists soon. On December 3rd, the dev and operators mailing lists stop accepting posts, and it's all going to be openstack-discuss, so make sure you subscribe to that before December 3rd and use the usual nova tag, or the nova and dev tags, depending on what you're posting. So let us know how you're using the compute service, anything that's missing, any barriers to entry; just come talk to us.

Here's a list I've compiled of the Nova-related sessions this week. The cells v2 update already happened, but we've got NFV and HPC pain points tomorrow. There's a session about the boot-from-volume improvements I talked about, so if you're a boot-from-volume user, that's the place to hear what the plans are and give your input, to make sure it's going to go in a direction that works for you. Getting operators' bug fixes upstreamed, I think, would be interesting for anybody using Nova; that discussion is going to be about how to improve the experience for operators connecting with developers to get things fixed, because I think we could always improve there. There's a session about concurrency limits for instance creation; this is about multi-create, I think, and the limits around it, so if you're running a Nova deployment that gets big bursts of instance creates at the same time, that would be an interesting session to go to and participate in. As I mentioned earlier, there's a forum session about changing ownership of resources, where you can weigh in on the direction and the approach that's going to take. And finally there's a forum session with an update on the placement extraction from Nova; if you're interested in learning more details about the status of that, definitely go check that out.
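As a concrete illustration of the trusted image certificates feature from the Rocky microversion list (2.63) mentioned earlier, here is a minimal sketch of the extra field it adds to a server-create request body; the server name, image and flavor references, and certificate UUIDs below are all hypothetical placeholders, not real values:

```python
import json

# Sketch of a POST /servers body at compute API microversion 2.63.
# All names and UUIDs here are hypothetical placeholders.
create_body = {
    "server": {
        "name": "signed-image-server",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
        "flavorRef": "1",
        # New in 2.63: the certificate IDs Nova should trust when
        # validating the signed image at boot or rebuild time.
        "trusted_image_certificates": [
            "4b3ab0b4-7af8-4a83-8a05-4b906b9b0a56",
            "8c5bf53e-1c3e-4e3d-9b2c-0c3a1f2d4e5f",
        ],
    }
}

# The request opts into the microversion via this header.
headers = {"OpenStack-API-Version": "compute 2.63"}

print(json.dumps(create_body["server"]["trusted_image_certificates"]))
```

If the image's signature doesn't validate against one of the listed certificates, the boot or rebuild is rejected, which is the extra security mentioned above.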
At the bottom here I've got a link to all the forum etherpads. In case you're not aware, the forum sessions are very interactive, and the etherpad is a place for everybody in the room to write comments in real time, ask questions, and make notes on any of the discussions going on, so definitely check those out and add your input to the etherpads.

How to contribute: we have contributor documentation in our docs. As I mentioned, we're very active on the mailing lists; sometimes, depending on your time zone, if you come into the channel there may not be much response, and if that happens, send a mail to the dev list with the nova tag and we'll definitely see it there. Chat with us in IRC; our channel is very active, and don't feel intimidated. Sometimes people have told me they feel like they can't jump in because people are talking about other stuff, and I just want to encourage you: just talk. Don't feel intimidated by anything in the channel; we want to hear from you and get to know you, so please chat in the channel and don't feel shy. We have weekly meetings in the OpenStack meeting channel, and we alternate the time to accommodate the EU and US time zones, so that might be interesting to you. That's a good place to get some direction or some eyes on, let's say, a spec or a blueprint you're working on; you can put those in the agenda in the open discussion section. That's a habit we have, for people to bring attention to something they're working on that we haven't looked at yet, or if you haven't received any feedback in a while, that's a good place to bring it up; you can also do that in the channel at any time. Help with bug triage: this is a big one where we very much appreciate help. We have some pretty nice documentation on how to do that, but if you have any questions, definitely
let us know. Bug triage just means taking a look at a bug and asking: does this look valid? How bad is it? Is it high severity or low, an inconvenient problem but workaroundable? It's a big help to us if more people come look at these bugs and help us mark how severe they are. There's a forum session on Thursday about bug triage and how we might improve that process for everyone, so that should be interesting. And finally, we have a Nova project onboarding session after lunch. It's a kind of small-group setting where you can come and ask questions; we'll give you an overview of the architecture and how we do things process-wise, and it's a great place to come ask questions, talk to us, and clear up anything you're confused about regarding contributing.

Other ways to contribute: help with code reviews. This is always a huge help, and sometimes people think, why should I review code if I'm not a core reviewer? But it actually helps a lot. Let's say there's a bug fix you're interested in, that your organization would really like to have: review it, and then let us know, something like "this bug fix is proposed over here, we really need it, could somebody look at it?" That's always helpful; getting more people looking at reviews and talking together about the ones that are interesting or critical is always a help to me, when people let me know about interesting bug fixes I may not have seen yet. Another good way to contribute is helping to clean up the docs. If you're looking at the docs and you see something that doesn't make sense or is missing, let us know; better yet, push a patch to fix it yourself and let us know you've proposed the patch. Try to break things and report bugs: we
want to make sure that Nova is working well for people, so if you find a problem, please report a bug and tell us. Scale testing, finding bottlenecks, proposing backports: if you're running an older version of Nova and you're thinking, that fix landed on master and I'd really like to have it in an older version, you can just propose the backport yourself; that's totally normal. I think that link goes to the documentation on how the stable branch policy works, so take a look at that, and definitely propose your own backports.

I went through that really fast, but if anybody has any questions, I'd be happy to try to answer them now.

[Audience question] Yes, I think that falls into the affinity modeling with placement, unfortunately, because that's going to be a big effort; it's a hard problem to solve. So I would think the earliest we could start working on it is next cycle, the T cycle, but it might be even later. A lot of this is driven by what we hear from operators about what is most important; if we're seeing from this organization and that organization that rack affinity is really at the top of the list, that's how we choose what to work on next. That ties into why it's really important to just let us know which features you most need, because that's how we choose what to do first. Any other questions about anything, contributing, any of the features I mentioned? Okay, if there are no more questions, then thank you for attending, and sorry I was late again.
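As a footnote to that affinity question: the server-group rules support added in Rocky's microversion 2.64, mentioned earlier, is the API-side piece that exists today. A minimal sketch of how the request body changed, with a hypothetical group name and rule value:

```python
# Before microversion 2.64: a server group carries a list of
# policies and no rules.
old_body = {
    "server_group": {
        "name": "db-workers",  # hypothetical group name
        "policies": ["anti-affinity"],
    }
}

# From 2.64 on: a single policy string plus an optional rules dict,
# here relaxing strict anti-affinity to at most 3 members per host.
new_body = {
    "server_group": {
        "name": "db-workers",
        "policy": "anti-affinity",
        "rules": {"max_server_per_host": 3},
    }
}

print(new_body["server_group"]["rules"])
```

The rules dict is the extension point that opens up new rule types as use cases come up, as discussed in the microversion section.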