Yeah, okay. Welcome to the Nova project update. My name is Matt Riedemann, I work for Huawei, and I'm the Nova PTL for the Pike release. This is Dan Smith, who works for Red Hat and is a Nova core, and Jay Pipes, who works for Mirantis and is also a Nova core.

So what does Nova do? Nova is the compute service, hypervisors running VMs; I assume most people here know what Nova is. Nova was founded in the Austin release; besides Swift it was the other original project in OpenStack. In the Ocata release there were 233 contributors submitting patches. I'd be interested to see, with a raise of hands, who contributed a patch in Ocata. Thank you.

The latest user survey adoption numbers say that 98% of clouds in production are using Nova; I think Keystone was the other very high one. I also wanted to share some of the other results from the latest user survey. We were given the opportunity to ask a question of respondents, and the one we picked for Nova was: how important is it to be able to customize Nova in your deployment? That can mean a lot of things: your own network manager, compute manager, or quota driver; are you using a lot of hooks; are you plugging in API extensions? The majority said it's not all that important: most deployments are using stock Nova, maybe with some bug-fix patches carried while waiting for them to land upstream. A smaller but still somewhat substantial group said it's somewhat important, and those are things we sort of expect and support, like people putting in their own scheduler filters. And then a very small 10% said "I heavily customize it; I do a lot of patching and replacing of maybe core components."

This was another interesting one that I wanted to call out. There was a question, apparently compared against the April 2016 survey, about how many cells people are using in their deployments. The majority says they aren't using any, but the thing that stuck out to me was the big jump from the 2016 survey. I have actually asked the foundation whether people answering this question know what they're responding to, because I interpret it as cells v1. Traditionally, when you talk about cells in Nova, it's cells v1, the thing that was added in Grizzly and that some large deployments like CERN, Rackspace, and GoDaddy are using. But upstream, the development community has always said that if you're getting started with Nova you shouldn't use cells v1, because there are a lot of issues and things that just don't work in the API. So I don't know if people are thinking this is regions, or if "one cell" means "I've just got my single deployment", but the way I would interpret it is: are you running the nova-cells service? It's an optional service, so if you're running that thing, you're running cells v1. I'm going to try to work with the foundation to get a little bit of clarity for future surveys and actually define some of these terms, because cells v2 is a thing now, and it's required in Ocata, but most deployments aren't anywhere near Ocata yet; people are just starting to roll up to Newton. So the real question is: are people using actual cells?
Yeah. So I was going to go over a little bit of the Ocata highlights, since that's the last release, and the link to the release notes is at the top if anybody's interested in finding them. I was going to let Jay talk about some of the placement service features in Ocata.

We now have the placement service required for scheduling. If you're not familiar with it, the placement service is a very simple, thin REST API service that's used for storing inventory and allocation information. We introduced it in Newton, and we've slowly been integrating more and more of it into Nova. The big thing in Ocata is that the scheduler is now calling out to the placement service to help it make decisions about where a particular launch request will end up. We're going to continue adding more and more integration pieces between Nova and the placement service in Pike and beyond.

Another big thing in Ocata is cells v2, so Dan will talk about that. We've been working towards getting this newer cells arrangement baked into the core of Nova for a while, and Ocata was the first release where everybody has a record for being a cell, even if they've only got one cell. Previously, if you had cells v1, you had a bunch of extra services you were running and your deployment looked quite different. As of Ocata, everybody starts to have all of the things in their deployment that are attributes of a cells deployment, whether multi-cell or single-cell, which gets everybody unified on the same set of code, gives everybody records in the database describing their cells (even if that's only one), and gives us an upgrade path to splitting out cells in the future without having to rearchitect the deployment. Everybody is configured that way in Ocata. You can't actually create a second cell; I mean you can, but it won't work. Still, it's a major milestone on the journey towards getting everybody to that point.

Because of that, there were several new things you had to do when upgrading to Ocata: creating those records and arranging things in the database to set the stage. So we brought in a new tool, the nova-status command, that is intended to give you a pre-flight check of whether you've basically done all your homework from the last release to be able to roll forward smoothly, in this case to Ocata. That's something we're looking to do on a continual basis every release, so you always have the ability to run the pre-flight check for the next release against your current deployment and hopefully identify all of the bumps before you roll forward. Cells had a lot of those things, so it's what gave us the need for that pre-flight check. Also, the other thing with cells v1 is that you didn't have a lot of the API features, like security groups, floating IPs, and aggregates; cells v2 is a feature-complete API.
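As a rough sketch of what that nova-status pre-flight check looks like in practice (the config file path is just a typical example for your deployment):

    # Run the Ocata pre-flight check against your existing deployment's
    # configuration and databases; each check is reported as Success,
    # Warning, or Failure.
    nova-status --config-file /etc/nova/nova.conf upgrade check

You can run it before and after the upgrade, and the exit code is meant to be non-zero if anything fails, so it's easy to wire into automation.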
Some other highlights from Ocata were quite a few API improvements. One of the big things is that we've had JSON schema validation of the request body for a while, but we didn't have any JSON schema validation of request query parameters. That was added, so between different microversions we can now validate what the actual query parameters are. Another thing was that the sort and filter query parameters were this wild west of "if it's in the data model you can query for it", and the API would give you a 500. So we actually put a whitelist on that and restricted a lot of things, like being able to sort by joined tables or columns in the oslo.db model that you shouldn't know about, which would give you a 500 out of the API. Over time I think we're going to try to shrink that whitelist down to things that actually make sense. Also, the simple-tenant-usage API now supports paging. That was another drain, when you have to pull in everything for simple-tenant-usage, which I think Horizon uses as soon as you log in; it's part of the home page.

Some other random improvements: Dan already talked about the nova-status upgrade check. That came up yesterday in an operators forum session; like every time everybody gets together, there's a session about upgrades and the pain involved with them. We pointed out that this is in Ocata, and again, nobody is really on Ocata yet, but we're very interested in getting feedback on how people think this actually helps them before they start upgrading. It's also idempotent, so you can run it before and after you upgrade to make sure you got everything right, and you'd run it from a virtualenv or a separate container or something like that, so you can run it without having altered your existing deployment. Right, the question was: if you're on Newton, how do you run this code that's in Ocata? The answer is to put it in a virtual environment or a container or something like that.

OSProfiler support: there'd been a patch around for several releases that was never merged, and it finally merged in Ocata. I know people were taking that patch and cherry-picking it out of tree so they could do Rally and profiling testing internally, so that's now part of the actual main codebase.

The vendordata v2 metadata API, I believe, was added in Newton. At the Barcelona summit we identified some gaps in the metadata API, and Michael Still implemented several enhancements to it, like service user tokens, and the ability, which operators asked for, to fail the build if Nova can't actually communicate with your vendordata metadata service when booting an instance, because it could be required for authentication or something. The spec has more details on what the actual improvements were.

There were a lot of feature parity improvements to several virt drivers; notably Hyper-V and Ironic got a lot of feature parity improvements to match up more with Xen and libvirt, and I believe Virtuozzo got live migration support in Ocata, which is why I called them out here.

It's also now possible to use a service token for long-running operations between Nova and Neutron and between Nova and Cinder; we're still working on that for Nova and Glance, for example doing a long-running snapshot of a large image, where the user token would expire during the snapshot. This is disabled by default; there's a config option you can turn on, and then you basically provide service user credentials and Nova will re-authenticate with Keystone using that service token if it's set up. OSIC was working on this, and there were plans to do some scale and endurance testing with it to see how it actually improves long-running live migrations or snapshots and things like that, but OSIC is now no more, so it would be great for us if anybody plays around with this and provides feedback about how it's working for them.
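A minimal sketch of what enabling that service token support might look like in nova.conf, assuming the service_user option group; the credential values are obviously examples:

    # Enable service tokens by giving Nova its own service user credentials
    # (values here are illustrative for a typical Keystone setup).
    cat >> /etc/nova/nova.conf <<'EOF'
    [service_user]
    send_service_user_token = true
    auth_type = password
    auth_url = http://controller:5000/v3
    username = nova
    password = SERVICE_PASSWORD
    project_name = service
    user_domain_name = Default
    project_domain_name = Default
    EOF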
So this is a slide from the foundation, for all of you product and project manager types in the audience; these are, I guess, some of the themes that we're working on for the Pike release. I'm not exactly sure what "modularity" means; I asked and didn't get an answer. We don't know. Maybe. Good question.

So we'll go over some of the new features and enhancements we're working on for Pike. The first one is scheduling and placement, which Jay will talk about. A couple of the big pieces we're working on in the placement API: we've already merged much of the support for traits, which are the qualitative part of the request. Resource classes are the quantitative part: you get, you know, four VCPUs and a gig of memory or whatever. The qualitative part of the request is "do you want SSD disk" or whatever; those are what we're calling traits, and you're now able to decorate a resource provider, which is a thing that provides resources, with a set of traits. We've created a library of standardized traits called os-traits, which people are contributing to, which is very helpful.

Another major piece we're working on in Pike is support for shared resource pools, or shared resource providers. The canonical example of this would be a shared storage pool. Right now, if you use an NFS share for your instance disk storage, the reporting of disk resources is wildly inaccurate: basically the capacity gets multiplied by the number of compute nodes using that shared storage pool, which blows up the perceived amount of disk. Shared resource providers are a way of saying, hey, this is a resource provider that has two terabytes or whatever of disk space, and it shares that disk space with a set of other resource providers, compute nodes for example, via an aggregate association. We're currently working through the patches for that.

The next thing is moving the process of claiming resources from the compute node to either the scheduler or the conductor; we're not entirely sure where we're going to put it. That effort is called claims in the scheduler, and Sylvain, if he's still here, is leading it. This should dramatically reduce the amount of scheduler retries that occur, so that's one of the big enhancements there. I'm also trying to get support for nested resource providers; that's dependent on a number of other things, and it covers things like SR-IOV VFs and NUMA topology, that kind of thing. And then finally, testing for performance, scale, and resiliency of the integration between nova-compute and the placement API, as well as between nova-scheduler and the placement API.
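If you want to kick the tires on the placement API itself, here's a rough sketch of what talking to it looks like; the endpoint URL is just an example, and the traits request assumes you're running code new enough to expose the trait microversions:

    # Get a token and poke the placement REST API directly.
    TOKEN=$(openstack token issue -f value -c id)
    PLACEMENT=http://controller:8778
    # List resource providers (typically one per compute node)
    curl -s -H "X-Auth-Token: $TOKEN" $PLACEMENT/resource_providers | python -m json.tool
    # List the standardized traits the service knows about
    curl -s -H "X-Auth-Token: $TOKEN" \
         -H "OpenStack-API-Version: placement 1.6" \
         $PLACEMENT/traits | python -m json.tool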
Still working on cells, so Dan's going to talk about that. In Ocata everybody is on cells v2, everybody is a cell of one, and Pike is about finally making that a true statement. In Ocata you can't create a second cell and have things work, and that's really just because part of cells v2 is baking into all the components of Nova the understanding that we don't just have one database; you can't just assume that if you go look for an instance it's in one place. Now that we've merged the core bit in Ocata, Pike is about going around to all of the different components and making sure they all get enlightened to the fact that we can have things in multiple places, or talk to components via different message queues, and that's a lot of work. It's something that cells v1 never actually did, and it's why in cells v1 things like flavors don't work, or aggregates, or security groups. So the major cells focus for this cycle is teaching all of those components about this new arrangement, and then of course getting the CI testing that we do in the gate to actually test a multi-cell environment as the standard way all of our jobs run, as much as possible. Obviously with our CI workers we don't have thousand-node CI jobs, so it's slightly synthetic, but at least we partition things such that hopefully everything that needs to cross partitions is tested and working.

Another big thing is transitioning the way we do our quota usage calculation in Nova. Instead of doing all of this accounting in the database, which stays in sync for at least five minutes until it gets out of sync, we're moving to a mode where we count things on the fly. That works a lot better for us once things are in different places where you can't use database constraints to make sure you didn't run out of space or whatever.

So: ensuring that all of our components and APIs are cell-aware, making sure they're hopefully doing the higher-performance, scalable way of striping requests across all of those cells, and then any kind of hardening issues that come up out of developers rolling to Ocata and hitting the new things they need to add for manageability of the records, creating their cell information, updating it, all of that kind of thing; making sure it all works with multiple cells now that we've got everybody on the core in Pike.

I forgot to mention, during the Ocata piece, that we worked quite a bit on documentation, both for placement and for upgrade impacts and things that have changed, in addition to the release notes. We've also done quite a few pages of documentation on upgrading to cells v2, base install scenarios, and upgrade scenarios. We're going to need help with cells v1 to cells v2 migration, but I'll solicit requests for help on that later. We also forgot to mention that placement is getting an API reference, so that's good. Yeah, we're working on the API reference for placement.

There's a question. Yes, in Pike you will be able to create a second cell and expect it to work. It will be a release where people that are currently running cells v1 are probably not going to be able to replace their deployment with a multi-cell cells v2, purely from the standpoint of hitting all of the performance issues and making sure all the components are high-performance in a multi-cell environment, but it's going to be enough for you to run pre-prod that way and help us find correctness issues and scale issues.
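For anybody who wants to experiment with that in a pre-prod environment, the mechanics look roughly like this; the connection strings are placeholders for your second cell's database and message queue:

    # Register a second cell, map any compute hosts that live in it,
    # and list what Nova now knows about its cells.
    nova-manage cell_v2 create_cell --name cell2 \
        --database_connection 'mysql+pymysql://nova:secret@cell2-db/nova_cell2' \
        --transport-url 'rabbit://openstack:secret@cell2-mq:5672/'
    nova-manage cell_v2 discover_hosts --verbose
    nova-manage cell_v2 list_cells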
So Like if you're running pike with Kilo computes you're gonna get some you're gonna get some red flood flags from that thing Another big effort that's work. That's sort of finally making some traction in pike with actual code is Related to volume multi-touch the cinder team provided a set of API's in Okada in a micro version 3.27 that Nova is going to be leveraging in pike To try to abstract a lot. There's there's still a lot of legacy tightly coupled code between Nova and cinder from when cinder was Nova volume and We've been talking about multi-touch since serious time yeah, since seriously like at least mataka and And We probably burned at least one or two releases of how do we shoehorn volume multi-touch into the existing technical debt that we have and we Just realize this is not going to be maintainable and it's just sort of a terrible fit so We've been working at like it's been at least a year we have weekly meetings between the Nova and cinder team To be defining these new API's and how we can put a lot of it so that Nova doesn't have to be Maintaining data that cinder actually should be the source of truth on so things like connection information and connector off the compute and stuff like that so Cinder has this new attachments API that Nova's going to be using To try to clean up a lot of this flow between all the different like swap volume and migrations and just the normal attached flow And we're working on that in pike For multi-attach so we're not so for this one we're not to multi-touch yet This is really that well The original implementation proposed for multi-touch and now this is rebuilding the current things, right? Yeah, it's because Nova was going to have to have a ton of conditionals of Is this a multi-touch type thing and it was going to be really ugly But you do have participation from the people that are interested in actually using multi-attach Right. I mean like yeah oracle oracle is contributed a developer That's really been helping out with keeping the patches moving and we're trying to move a lot of the we're working it really back We decided at the PTG that the way we're going to work on this is Sort of backward so we start with supporting detach and then nothing really ever turns on until you start doing attach in the New flow because everything is keyed off the way that you attach the volume in the first place This is all supported through rolling upgrades because we're using microversions and service versions checks Oh And then for grenade we can do upgrade testing so you can attach a volume the old way on the Ocata side Roll it through to pike and then detach it and make sure that doesn't explode We do need some like nobody's actually there are some gaps and grenade that we need help with but They're at least known known issues Some other improvements are that we're working on for pike. There are a couple specs that actually OSIC owned but now are a need of an owner. I think can it she might be working on these now, but Well, you are now that it's recorded There are a couple API improvements for like controlling live migration timeouts Another one that had been around for a couple of releases. This is actually a very I don't know how many duplicate bugs we've gotten for the second one, but We used to never do validation of projects with the flavor and quota management APIs so when you add flavor access or you Update quota to quota values for a given project. We never validated that project actually existed in Keystone So it wouldn't fail. 
Some other improvements we're working on for Pike: there are a couple of specs that OSIC owned but are now in need of an owner; I think somebody might be working on these now. Well, you are now that it's recorded. There are a couple of API improvements, for example for controlling live migration timeouts. Another one had been around for a couple of releases, and I don't know how many duplicate bugs we've gotten about it: we never used to do validation of projects in the flavor and quota management APIs, so when you added flavor access or updated quota values for a given project, we never validated that the project actually existed in Keystone. It wouldn't fail; it just didn't do what you wanted it to do, and we would get a ton of duplicate bugs about that. That's actually now fixed in Pike.

We're deprecating more: we deprecated some old proxy APIs in Newton along with things related to nova-network, and we're continuing that. There's a lot of API code in Nova and we just keep finding new things, so we're still going through and cleaning house on old things that aren't used. This is really about reduction of technical debt and complexity.

Something else that's been asked for for a while is embedding the flavor in the instance. We've always stored the flavor that you used to boot an instance, but what you might get out of the API for that flavor a year later could be different from what you actually created the instance with. So instead of just giving you the flavor ID, which might now point to something totally different, we're actually going to embed the flavor that was used to create the instance in the server response body.

Another thing is specifying tags when creating a server. You've been able to apply tags to a server that's already created for a while, but this is about applying tags at server create time. It's not a simple change to make, but it's a logical continuation; there's a quick sketch of what that request might look like below.

Then another thing that came up, I think originally at the Tokyo summit: Cinder added support for extending the size of a volume, but Nova never supported extending a volume while it's attached to an instance. That's being worked on now, so Cinder will actually call out to Nova and let us know that, hey, this thing's been extended, and you need to go down to the compute node and toggle the guest.

Another big one, which may seem small but has come up over time around manageability, is in the same vein as the config option cleanup that's been going on for quite a while. It's not sexy or anything, but it's very useful, and OSIC was running this too: actually documenting the policy rules, pointing out for each policy rule which API uses it, with a short description of what it actually means. If you just look at the policy JSON file it's not clear at all; you basically have to go into the code to figure out "if I tweak this policy rule, what might I be breaking?" This is really so operators don't have to dig into the code.
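Here's that sketch of creating a server with tags in the request body; the image and flavor IDs are placeholders, and the microversion is whichever one introduces create-time tags (2.52, if I recall correctly):

    # Create a server and tag it in the same request (placeholder IDs).
    TOKEN=$(openstack token issue -f value -c id)
    curl -s -X POST http://controller:8774/v2.1/servers \
         -H "X-Auth-Token: $TOKEN" \
         -H "Content-Type: application/json" \
         -H "OpenStack-API-Version: compute 2.52" \
         -d '{"server": {"name": "web01",
                         "imageRef": "IMAGE_UUID",
                         "flavorRef": "FLAVOR_ID",
                         "networks": "auto",
                         "tags": ["web", "production"]}}'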
So I was asked to talk about some things that we're going to be working on for Queens. There was a spec for supporting resource tracking and scheduling of virtual GPUs that didn't quite make Pike, but I think we're going to put an effort into getting that cleaned up; we actually just had a session this morning talking about it. There is a spec, and we really need operator input, comments, or even just a thumbs up saying "yes, this is something I need, please move forward with it."

As Jay talked about, there's support for full shared storage and network reporting, and then affinity support in the placement API, so server group affinity and anti-affinity. That ties into this weird thing Dan didn't go into but that we've been talking about, which is up-calls. We want to stop doing up-calls from the compute service up to the control plane, especially when you start getting into a multi-cell cells v2 deployment topology. Today, because of the way the server affinity and anti-affinity group stuff works, there is this last-minute check on the compute node that calls way back up into the API database to ask "is everything actually okay, because if it's not I need to fail," and it's also constantly passing updates from all of the computes to the scheduler to make sure the filter is making the right decision. We would really love to just kill all that stuff, put it in placement, and, since placement is external to the rest of the deployment, let it help us make the right decision when we actually do scheduling.

Another thing that's been coming up is access policy improvements. John Garbutt has a spec about this, and there are a couple of other sessions this week about fixing some of the policy issues and RBAC. One of them is how you distinguish the global, god-mode admin from a project-specific admin; this is sort of the dedicated hosting case. And there are some legacy warts in Nova where even if your policy says non-admins can do a given action, there's hard-coded code down in the database API layer that says "I don't care what your policy says, I'm not going to let you if you're not an admin." This also affects things like a service user locking an instance, for example Trove locking an instance it doesn't actually own; the service user should be able to lock it, but we have to fix up a lot of the policy work for this first. There's a small illustration below of the kind of override you'd try today.

I will be honest: I don't think we're going to get to multi-attach in Pike, because of the amount of work that has to be done to upgrade all of this code to use the new Cinder APIs in Pike. But if we get that done, which will be a pretty major accomplishment even though the end user will have no idea we did any of it, it will set the stage for finally actually supporting multi-attach and testing it out in Queens.

Another thing we had a session about yesterday was this idea of using Cinder as an ephemeral storage backend. Operators are saying, "I don't ever want you using local disk for anything; I have Cinder and I've got a bunch of storage that I bought, and I want to use that for everything." I think we're going to have a spec coming out of this.
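Here's that small illustration: the kind of policy override an operator might try today to let a service role lock servers it doesn't own. The rule value is made up for the example, and the point of the discussion above is that some hard-coded admin checks deeper in Nova can still win regardless of what you put here:

    # Hypothetical override in /etc/nova/policy.json; "role:service" is
    # illustrative, and hard-coded admin checks may still override it today.
    cat > /etc/nova/policy.json <<'EOF'
    {
        "os_compute_api:os-lock-server:lock": "rule:admin_or_owner or role:service"
    }
    EOF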
There were a few different options we talked about yesterday. There are very short-term things we can do which are maybe hackier, and then there are very long-term, more complicated things we want to do but don't really have owners to drive them, so we're going to have to make some decisions about what we do there.

Integration of limits into Keystone is going to be something we look at for Queens. In the Pike release there is now a spec and a concept of, instead of all the different projects like Nova and Cinder storing quota limit information themselves, actually storing that in Keystone, where the project hierarchy is stored. Eventually all the usage information would still be calculated in the projects, but the idea is, at least, let's do limits the same way; it's really about trying to start with something we can all agree on, because we've been talking about hierarchical quota support for a long time and it generally gets bogged down in implementation details and project-specific ways of doing things. We really just want to start small, because this is a massive change for every project that does anything with quotas.

And then cells v2 hardening, which I think is mainly going to be performance and manageability improvements, like Dan was saying, including the fact that right now you can only migrate instances within a cell; we want to look at being able to actually migrate instances across cells in Queens.

So there's a lot of work to do, and we do need help. Some of this is just giving us feedback and answering questions. These questions mainly come up when the user survey committee asks the PTLs; you get one question to ask in the user survey, and it's much easier if you ask a question that can be quantitative or multiple choice instead of "what don't you like about Nova?", because the answer to that is everything, and you're not going to get a great answer. Things I've personally been very interested in: are your users using microversions, anything beyond 2.1? They've been around a while, but most deployments are on Mitaka and just starting to roll to Newton. If operators are keeping track of usage, if anybody is mining data on this, please provide feedback, because it would be good to know. I think we're up to 2.50-something now in Pike; we're doing quite a few microversions each release.

Another good feedback item would be: have you started evaluating or testing the placement service? It was optional in Newton and it's required in Ocata. I know people aren't really at Ocata yet, but if people can be kicking the tires on any of this stuff in, say, a Newton pre-prod sandbox, give us feedback. Same thing with cells v2: have you started evaluating any of that? Same situation, it's optional in Newton, but there is code available that you can start kicking around.

Another thing that's been getting worked on for several releases is adding support for versioned notifications. We've always had notifications, but they were never versioned, so you could maybe add things to the notification payload, but you could never remove anything. It's the same story as with microversions: you need versions so that the consumer can know what's coming in and be able to handle it.
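If you want to try them out, my understanding is that it's just a config switch for which notification format Nova emits (shown here with a heredoc purely for illustration):

    # Emit the new versioned notifications, or use 'both' to keep the
    # legacy ones flowing alongside them while you migrate consumers.
    cat >> /etc/nova/nova.conf <<'EOF'
    [notifications]
    notification_format = versioned
    EOF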
However, I asked this the other day in one of the operator sessions, and I really don't know who's using notifications in general, how they're using them, or if anybody's even using versioned notifications yet. I'm also interested in the performance impact of doing this, because we may be needlessly generating load in Nova just to emit something that nobody's actually consuming.

A developer request we already touched on: we need performance profiling and scale testing. OSIC had large labs to do scale and load testing, and we don't have those anymore. I don't have a thousand nodes in my basement. "Mine's in the shop. Sorry, new tires." So if people are doing this, and I know people are running Rally and OSProfiler on maybe 400 nodes somewhere in pre-prod, please share the results with us. It doesn't need to be super specific; even something like "I don't know what you changed between February and March, but all of a sudden it takes 30 more seconds to boot a server" would be very valuable data, so that we can dig into this kind of stuff, because we don't have scale testing in the gate.

We also need operators that are running cells v1. We called this out in the cells v2 session the other day, and I think GoDaddy and CERN said they would be helping us out; I recognize some faces, so I know some of them are here. This is really about the fact that we don't have Grenade testing for cells v1 in the gate (Grenade is the upgrade testing framework). We're going to need help from people with cells v1 deployments on the migration path to cells v2, and on validating when v2 is a suitable replacement; that's something someone with a large deployment has to do.

So I put in here a bunch of sessions that come after this one, all related to stuff we've talked about, so if you're interested in any of it, come to these other sessions later in the week; some of them are later today and some are just presentations. Jay's giving talks on placement and resource scheduling, Dan has a talk this afternoon on cells v2, and here are a few more. We've only got a couple of minutes left, so let's take questions. If anybody has questions, using the mic would be great, but if you just want to shout it out we can repeat it.
No? All right, if not... Question: I just asked the Neutron guys and now I'll ask you. There is a problem when we have two physical networks on different hosts and there is no interaction between Neutron and Nova; they actually pointed me at you, saying it's really a scheduling problem. We have one network available on host A and one network available on host B, and when we schedule an instance onto network A, the Nova scheduler says "oh, I want host B," and there is no way to take this into account. As far as I hear there's the same thing with Cinder, but that's not my topic. Do you have some kind of inter-project scheduler interaction? Because if something is not available, there is no reason to schedule there; you're just going to get a NoValidHost error.

You want me to answer? Well, the general answer is that there is a series of work currently in progress that adds network group and switch tags to the port binding profile, so that when you do a nova boot and pass in --nic port-id=<whatever>, it will build a set of PCI device requests, and those requests will contain the network group and switch group from the port binding profile. We'll use that to match against information we have about the compute hosts. I'm not saying it's done now, but it's being worked on; I can't remember who is driving it. That's correct.

Also, Carl Baldwin had the spec on routed networks, to be able to use placement for aggregates tied to network pools. It's the shared storage pool problem, but for networks, right? And then you'd only migrate within a given aggregate. Nova would be asking placement; everything feeds its information into placement, so Neutron would be feeding in "I have these two separate pools of things," and when you assign one, you know which one you got, and you only migrate within that thing; you don't send it somewhere else where it's not going to get networking anymore. And there is a sort of overhaul that's part of that, which has been worked on for a while, which is network-aware scheduling, moving some of that cruft out of the compute service and into the conductor service, but OSIC was driving it and it needs love, basically.

So, any other questions? Right, the question was: will placement and the scheduler stay within Nova? Placement is going to be split out. It's been written as a separate service with a separate endpoint, a completely separate thing that we can just lift out. The Nova scheduler will stay in Nova, so this isn't like Gantt, if you're familiar with Gantt from a few years ago. Chris Dent has been looking at what it would look like when we pull the placement service out and what's going to be needed there. Placement is its own entry in the service catalog and can already be used externally by other things; all of the Nova services talk to placement as though it's already external, and there are even separate packages for it. The single issue right now is that it uses the top-level API database in Nova, but that's just configuration.
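You can see that catalog entry for yourself in an Ocata deployment; a quick example of what I mean (output will obviously vary by deployment):

    # Placement is registered as its own service and endpoint in Keystone,
    # which is part of why it can be lifted out of Nova cleanly.
    openstack service list --long | grep placement
    openstack endpoint list --service placement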
Yeah, but the plan is to have placement split out. And the second question: historically, Nova supported automatic network selection if you have one network; there is actually hard-coded behavior where, if there is one network, we know what to do if the user didn't supply a network UUID. But the problem is that if we have many networks, and we want some logic on the operator's side to choose one of them without forcing the process of choosing the proper network onto users, is there any kind of mechanism where we can describe "if a user comes and asks for an instance, use this network," because we as operators know where it should go? The problem is, if users have a configuration file with a hard-coded network name or hard-coded UUID and we change anything in the installation, every user's automation fails, and if they move to a different region it's the same problem: the UUIDs are different. Is there any kind of automatic selection mechanism, or do you have any plans for this?

So, Armando, the old Neutron PTL, has a session this afternoon about "get me a network" enhancements. Is it somehow compatible with existing tools around OpenStack? Because people know two ways to specify a network, by name and by UUID, and that's all; additional tooling is not going to show up in their automation. Right, no, this isn't a separate tool. I think what you're asking for is maybe tagging a network so that, of these four options, this one is the default, take this one. Yes. I would bring that up in the get-me-a-network discussion this afternoon with Armando, because it's really about enhancing that experience, and both of us did a session yesterday about get-me-a-network, so if you want more information on that it's already posted. It's definitely not a separate component; it's interaction between Nova and Neutron. But I know what you're saying: today, if you don't specify a network and you have more than one, it fails, right? Yeah.

Anything else? No? Thanks, everybody.