Anyone else who's coming anyway? Hey, good morning, can everybody hear me? Maybe, possibly? Cool, I'm glad you all survived Stack City last night. I can't really see what's going on out there, so I'm gonna assume the whole room is full of people who passed out here thinking it was their hotel and just haven't left yet. So hey, my name's Kevin, I work for Cisco, and today we're gonna talk about migrating nova-network to Neutron. This is something that we did, and are working towards doing in production as well. We've done it in the labs, and so far it's worked pretty well. Not much tenant downtime, if any at all. But we'll get to that in just a minute. So why would you want to do this? If you're running nova-network, chances are at this point there's still something that you want about it. It's stable, but it lacks a lot of the advanced features that Neutron has, and so a lot of people are trying to move towards that. As a community, we're trying to deprecate it, but it just won't die, it won't go away. I think it was actually deprecated once, and then they un-deprecated it because there were still enough people using it, like way back in Grizzly or Folsom or something back there. It's more difficult to find help when you need it, because again, nobody's running it except you. So you go to ask somebody else and they're like, I don't know, man, I haven't used that in forever. Actually, I sort of came into this project as an aside, and I spent probably two or three weeks just getting back up to speed on it because it had been so long since I'd used it. So that was a lot of fun. Most of the cool things are in Neutron anyway, back to lacking the advanced features, so you really want to be there. I love the idea of green-fielding things and just maybe letting nova-network die on the vine, but unfortunately that's not always an option. And who doesn't love brownfield migrations? So that's another reason why you might want to do this, because it's a lot of fun. So what I'm going for here is, nobody's scenario is gonna be exactly the same. Everybody's got different things set up in different ways. And so this is sort of a talk about what we did, and hopefully it spurs some ideas. I assume if you're here, you're in a boat where this is something you might want to be doing. So to get started, this is what worked for me. I had to determine my scenario: what does my environment look like, and where am I gonna go next? Then I documented all the use cases. Do you have floating IPs, do you not have floating IPs? Do you have a flat tenant network, or per-tenant networks, or something like that? So I defined my beginning state; I looked at everything I had and said, well, this is how this looks. Then I was able to replicate that in the labs and get everything set up exactly how it looks in the eventual production state. And then I defined what I wanted it to look like. So now I knew where I was and where I was going, and the shortest way between two points is a straight line, so we went from A to B. And that was basically developing a bunch of code. So for me, what it looked like was I started with Googling, where every good project starts. When that didn't turn up a lot, because, refer back to: nobody's running nova-network anymore except for you, I asked the ops mailing list and managed to find some obscure GitHub repo where somebody had done something similar.
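To make that "define your beginning state" step a little more concrete, here is a minimal sketch of the kind of inventory I mean, assuming an Icehouse-era nova-network deployment. These are stock commands; the grep pattern is only a guess at which nova.conf options matter in your environment.

```bash
# Capture the nova-network starting state before touching anything.
nova-manage network list        > novanet-networks.txt    # tenant networks, VLAN IDs, projects
nova list --all-tenants         > novanet-instances.txt   # every VM and its fixed IPs
nova secgroup-list              > novanet-secgroups.txt   # security groups for the current tenant
grep -E 'network_manager|vlan|flat|multi_host|fixed_range' /etc/nova/nova.conf \
                                > novanet-nova-conf.txt   # how layer 2/3 is actually wired up
```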
And so I went ahead and looked at that, and it looked like it could do what I wanted. So I forked it, and it became this code here, which is this Icehouse tree. That code was originally set up to work in Juno, possibly Kilo, so it had some issues: there were missing dependencies, some Oslo classes had changed, and so on, so some imports weren't quite right. I had to change it, but it was also set up for a very specific scenario. Again, that's what's gonna happen: if you go and try to do this, your scenario is probably gonna be different than mine. So feel free to take my code, hack it up, fork it, whatever you need to do. Finally, once I'd gotten everything set up and working, I automated it. I have this repo of OpenStack tools that I've been working on since like 2011, so some stuff is really old, but feel free to check that out and look in the Ansible directory. There's a lot of goodies in there, most notably the plays that I'll show you later in the video. So, our specific scenario: we were using Linux bridge for layer 2. We had non-overlapping tenant networks, so every tenant had like a /24 or a /23 or whatever, which is standard for nova-net; you can't have overlapping subnets in nova-net, which is one of the cool features of Neutron you probably want. In this situation we used a hardware gateway for layer 3, which means there's a CSR or whatever else upstream, and in this migration I didn't have to worry about the layer-3 aspect of it. I wasn't trying to move floating IPs around, I wasn't trying to do anything like that. We're working on a procedure, because we do have some situations where there's software layer 3, and that will definitely incur some additional downtime, because if nothing else you have the ARP problem where you're switching an IP from this device to that device, or whatever the case may be. So if the MAC address changes at all, that's gonna be an issue. Something to keep in mind, but for this particular scenario, hardware layer 3 made my life a lot easier. So, a high-level look at the procedure. The first thing you have to do is block client API access. This is very, very important. We still need the APIs for the stuff we're gonna do; I tried to defer to the APIs as much as possible, because I figure Nova and Neutron know how to set things up far better than I do. I don't want to be manipulating databases directly if I don't have to, and praying that I got all of the joins right and everything else correct. But you don't want tenants changing things while you're trying to migrate, so you need to block client API access. That is a caveat I should point out: there's not a control plane outage, but tenants can no longer make API calls during the migration. Their VMs will still work, they still have connectivity between the VMs, upstream connectivity through the hardware gateway all works; they just can't make API calls, can't launch new VMs, delete them, whatever. Then we gather up all the info we need from nova-net. This is because there's the network cache stuff, and if you're in the middle of something and a bit of data expires, you will find yourself in a bad place. So we take all that data, and that's stuff like the instance IDs, the MAC addresses, the IP addresses, all that kind of stuff; we gather all that up. We install and we configure Neutron.
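As a rough illustration of the "block client API access" step, and not how our plays actually do it: one simple approach is an iptables rule in front of the client-facing API ports. The port numbers and the admin subnet here are assumptions; use whatever your endpoints and tooling hosts really are.

```bash
# Fence off the tenant-facing APIs for the duration of the migration, while
# still allowing the hosts doing the migration work to talk to them.
ADMIN_NET=10.0.0.0/24                 # placeholder: wherever your tooling runs from
for port in 8774 9696; do             # nova-api and neutron-server defaults
    iptables -I INPUT -p tcp --dport "$port" ! -s "$ADMIN_NET" -j REJECT
done
# When the migration is done, delete the same rules:
#   iptables -D INPUT -p tcp --dport 8774 ! -s "$ADMIN_NET" -j REJECT
#   iptables -D INPUT -p tcp --dport 9696 ! -s "$ADMIN_NET" -j REJECT
```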
Excuse me, we start Neutron services: you know, fire up the API, et cetera. Then there's a bunch of scripts, which we'll get into, which create the networks, subnets, ports, security groups, all that information we pulled out of nova-net; it basically replicates it in Neutron, which is running in parallel. At this point nova-net is still in charge of everything. In our situation we wanted to make sure the DHCP servers stayed on the same IP addresses. I would recommend that; you don't have to do it. What'll happen normally, when you spin up a DHCP server in Neutron, is that it'll just pick the first address available in the allocation pool and then it'll just work. The problem is that if it's not the same IP, a VM will go to renew, the address won't be there, and eventually it'll do a full release-and-renew cycle. That oftentimes tears down the network stack and brings it back up on the VM, which could cause some problems for you, so I recommend trying to keep it on the same IP address. That's one of the things that we do; we'll talk about that in a minute. We attach the ports, and I'll get into this again too, but that just updates the port binding in the Neutron database. We don't actually want to attach the ports on the compute nodes, but again, I'll talk about that in a minute. We rename the bridges and the interfaces on the computes to use Neutron vernacular: you've got br-whatever and vnet-whatever, and they need to get changed to brq-UUID and tap-UUID and that kind of thing. So the idea we're going for here is that we're gonna yank the tablecloth out and all of the things are still gonna be there, the table's still gonna be there, so when Neutron takes over, everything is as it expects. And then lastly we clean up the nova-net resources, get rid of all those old bridges and whatever else we don't need anymore. So, digging into that; that was the quick high-level overview. We start on the control nodes, and this is kind of logical control-node work; we touch the compute nodes a little bit here, but everything comes out of configuration files. Well, two places: we store some stuff in the database, we create a transient table to do that, and then there's just an ini file of network configuration stuff for the migration. So we create those, and there are scripts in the repo to do all of that; it pulls the information it needs out of the database, out of your nova.conf, out of your neutron.conf, and writes all that information out into a configuration file. Then we generate the network data, and that again is what shoves the stuff into the database. That's a script that's part of the nova-net-to-Neutron code repo that I showed you earlier. Then you install all the Neutron packages, you sync the database, which just puts the schema in place, and you set up iptables. In our case we've got very specific firewall rules, and a lot of this will make sense when I show a video of me actually doing the migration later, so you'll see all this stuff running in the plays. We update the Neutron and the Nova config files on the control nodes and on the compute nodes, and again that goes back to that port-binding stuff: we have to update the compute nodes to use the fake compute driver, otherwise when we do the port attaches in Neutron it will actually attach the ports, which you don't want to happen. That causes trouble for you. Not that I know from experience.
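To make the fake-driver bit concrete, here is a hedged sketch of the temporary nova.conf flip on each compute node. crudini is just one way to edit the file, and the service name varies by distro; the option values are the standard Juno-era ones, but check yours.

```bash
# Temporarily run the compute service with the fake virt driver so that the
# upcoming Neutron port attaches only touch the database, not the live VMs.
crudini --set /etc/nova/nova.conf DEFAULT compute_driver fake.FakeDriver
service nova-compute restart          # or openstack-nova-compute, depending on distro

# ...run the control-plane migration and port attaches from a control node...

# Then flip it back to the real driver (libvirt in our case) and restart again.
crudini --set /etc/nova/nova.conf DEFAULT compute_driver libvirt.LibvirtDriver
service nova-compute restart
```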
So in our case I leveraged our existing automation and config management to do a lot of that: installing Neutron, putting the config files in place, and so on. We have a custom thing called spine that we use, but if you have Puppet, Chef, Ansible, whatever it is that you use, that works too. I also wrote some Ansible plays to do this stuff, but I didn't include them, because one, spine does it for me, and two, they wouldn't really be useful for you anyway since they're specific to our setup. So however you do that, it's sort of a black box: there's this "configure Neutron and get everything stood up properly" mythical unicorn that you have to accomplish, and I did it using our existing stuff. Finally, you run the migrate-control script, the migrate-security-groups script, and the update-DHCP-servers script, and that literally just makes Neutron API calls: it says neutron net-create and does all the things, neutron subnet-create and attaches it properly, neutron port-create, port attach, all that stuff happens. So at this point your database should look how it would look had you been running Neutron the whole time, but nova-net is still in charge. It's worth pointing out that the security-groups and DHCP scripts actually have to manipulate the database directly. I just found that to be the easiest way to do it, and they're really simple calls, it's just "create the things". With the DHCP stuff I couldn't do it at all from the API, and with the security groups it was way less efficient, so it just shoves that stuff into the database and everything seems pretty happy. So when you've done all the things on this page, your control plane is up and running: you can run Neutron commands, you can run net-list, you can run subnet-list, all the things will show up and everything will look happy. Your VMs are still running, because we haven't touched the compute nodes other than changing that config variable; they're still attached to nova-net bridges and all that sort of stuff, and your network is still up and working. Now we move to the compute nodes. Up until this point, everything you've done you can undo without causing really any damage. We've only modified the Neutron database; we haven't done anything to the compute nodes other than changing that fake driver variable. So everything up to this point is undoable. I've had to undo it a lot of different times; I just had a for loop that would go through, list the nets and subnets and all those things, and delete them. Once you start here, though, this is when you're on the dark side of the moon and you can't really come back without causing a lot of trouble. So when we go to do the compute nodes, as I mentioned earlier, the migrate-control script also issues port-attach API commands, and that's why we have to switch to the fake driver. That just lets the Neutron ML2 agent set everything up in the database and make everything connect properly; from its point of view, it actually is the one who attached the ports, so we know it's just going to work. I tried this a lot of different ways, and ultimately I came up with this idea that every compute node needs its own individually generated config file.
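Conceptually, for one tenant network, the API calls those scripts make boil down to something like the following Juno-era neutron CLI. Every value here is a placeholder pulled from the nova-net data you gathered earlier, and the provider attributes assume a VLAN / Linux bridge ML2 setup like ours.

```bash
TENANT_ID=0123456789abcdef0123456789abcdef    # placeholder tenant

# Recreate the network on the same VLAN nova-net was using.
neutron net-create tenant-a-net \
    --tenant-id "$TENANT_ID" \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 241

# Recreate the subnet (DHCP is enabled by default).
neutron subnet-create tenant-a-net 10.2.41.0/24 \
    --tenant-id "$TENANT_ID" \
    --name tenant-a-subnet \
    --gateway 10.2.41.1

# Recreate one instance's port with the MAC and fixed IP it already has,
# so nothing changes from the VM's point of view.
neutron port-create tenant-a-net \
    --tenant-id "$TENANT_ID" \
    --mac-address fa:16:3e:aa:bb:cc \
    --fixed-ip ip_address=10.2.41.15
```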
I'm sure there's another way you could do it where you add a tag in there and the compute node figures out which things it needs; this is just the way I decided to do it. It was easier, and I'm super lazy, so it works great. But what that means is that we basically have to run the generation on a control node, because we need the Neutron config file, which hasn't been created on the computes yet, we need the Nova config file, and we need to pull stuff out of the database. In our situation the compute nodes are using conductor, so we don't have direct access to the database from the compute nodes. So on a control node I run this generation script, and then I push the config files out to each individual compute node, so that everything gets on there happy. I did it all in Ansible, so again, you'll see it when I run the video; it all happens really quickly and automatically. Then we run the migrate-compute script, and this is where the bridges and the VLANs and all those things actually switch over. It literally just issues ip commands, vconfig commands, and brctl commands, and renames everything; that's where you're pulling the rug out. And then finally, the nova-net bridges are still gonna be in place, because the Neutron ML2 agent is what created everything new: the tap devices are gonna be Neutron's, the VLANs are gonna be Neutron's, but you're still gonna have the old bridges, which in the common vernacular are "br" plus the VLAN ID, which is how nova-net names the bridges it creates. Those are still gonna be there with nothing on them, so we can just go through and remove them. I have a script that does it; it's hit and miss, so sometimes I have to go through afterwards. I wanted to make sure I didn't accidentally delete something that needs to be there still, so I made it safe, and it gets the majority of all that old cruft, and then I clean up anything left manually. So that gets us to the point where everything is actually done. Your control plane is under Neutron control; if you're issuing commands, it's going to Neutron, and Neutron is talking to Nova on the compute nodes. Nova is set up to use Neutron as well, so it's gonna be using the Neutron ML2 agent, and at this point everything you do is under Neutron control. You're basically done, so the last procedure is you make yourself a drink, because you deserve it. If you've gone through this you have probably already started drinking, but I highly recommend you continue. All right, demo time. What I'm gonna do here is I have about a 13-minute video. It's unedited, with the exception of one point where our config management is running; that takes like 15 minutes, so I sped it up to take like 45 seconds instead. Otherwise the video is unedited, and it's just me doing the migration. So stop me if you can't see this, hopefully you can, and I'll sort of walk through what's happening here.
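Before the video starts: as a rough, illustrative sketch (not the actual script), the rename step for a single VM on a single network looks something like this, assuming Linux bridge on both sides. The VLAN ID, the UUID prefixes, and the interface names are all placeholders; the real migrate-compute script derives them from the per-compute config file and handles the ordering more carefully.

```bash
OLD_BR=br241                 # bridge as nova-net named it: "br" + VLAN ID
NEW_BR=brq3f8c1d2a-4e        # what the Neutron Linux bridge agent expects: "brq" + start of net UUID
OLD_TAP=vnet0                # the VM's tap device as nova-net left it
NEW_TAP=tap9a2b7c3d-11       # "tap" + start of the Neutron port UUID

# Rename the bridge so the Neutron agent recognises it as its own.
ip link set "$OLD_BR" down
ip link set "$OLD_BR" name "$NEW_BR"
ip link set "$NEW_BR" up

# Rename the VM's tap device; this brief down/up is where the couple of
# dropped pings in the demo come from. The VLAN interface gets similar
# treatment so it matches what the agent would have created itself.
ip link set "$OLD_TAP" down
ip link set "$OLD_TAP" name "$NEW_TAP"
ip link set "$NEW_TAP" up
```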
So you can see here that when we run this, all of the tenants are there. I'm gonna ping one of the VMs here, you know, just give it a couple of pings, and you can see it's up, it's running. At this point this is still nova-net; nothing else has happened, we haven't really started doing anything. I ping another VM, just because I'm thorough. And then I'm gonna jump into this Ubuntu machine, because CirrOS doesn't have screen installed and I just didn't want to deal with trying to get it installed, so I spun up an Ubuntu VM. Excuse me. I'll hop into a screen, and I'm gonna ping 8.8.8.8 from the screen; you can see that it's working there. So this VM is now going up through the hardware gateway and out to the internet, and that's all working. So we'll start it again, I'll detach the screen, and I'm gonna leave that running the whole time; we'll come back to that later, that's a fun little surprise for later. So we'll look at our VMs again so I can get the IP addresses. I also can't type; you never feel more vulnerable than when your screen is on display for everybody and you're typing things wrong. That screen down there is one of our control nodes, and at one point during the control plane migration you'll see that those pings stop, and that's when the VLANs get removed from the control plane. So there is a slight period of time where the VMs won't be able to reach the control plane. The only time I really think that could be an issue is if you have really short DHCP leases, because it might take more than a couple of minutes, depending on how long your config stuff takes to run. So you can see what's happening over here in the Ansible plays. The first thing I do is clean up my Git environment to make sure that I'm pulling the latest stuff, I check out the code, and it runs the create-conf step; you can see over there that Neutron is not running. I generate the network data, and you can see in the third or fourth task down there it's running the generate-network-data stuff; Nova Network is still running, Neutron is not running. I'll clear that screen, and I type dollar signs instead of the ansible command, that's awesome. So this is where our config management stuff is running, and you'll see that those pings down below start speeding up; this is where I sped up the video, it goes from like a hundred to maybe 800 or 900 pings. This is where you would have Puppet or Chef or Ansible or whatever thing you have: it's installing Neutron, it's putting the config files in place, it's giving it all the correct values so it can talk to the database and your message bus and all that goodness. Ah, there we go, hey, it's done. If only it actually really ran that fast. So now, this is what actually instantiates the database: we create it and we stick the schema in place. We stop all of the services; this is because of the way that we do high availability, and basically I'm trying to set things up how my config management system wants to see it, so I stop everything from running. I re-enable iptables; I had to block it earlier so that stuff wouldn't get flushed and restarted. We start the Neutron APIs, we load all of those iptables chains you can see down there, and restart neutron-server, rather. So now you can see neutron-server is running, but there are no endpoints created, and that's what's happening right now: we're populating the Keystone data. As soon as that finishes, you'll see neutron net-list returns, but it's not returning anything, because we haven't actually stuffed any values into the database; we haven't created networks or anything like that yet.
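For reference, the kind of sanity check you can do once neutron-server is answering, before any data has been migrated into it, is just the stock CLI; at this stage the lists should simply come back empty.

```bash
neutron ext-list      # the ML2/provider extensions you expect should be loaded
neutron agent-list    # agents show up here as they get started and check in
neutron net-list      # empty for now -- nothing has been migrated yet
```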
So we do lots of starting and stopping and restarting, and again, that's all just normalizing the system to be how our HA setup wants it. You can see there the VMs are still active and running, everything is happy. Now we update the compute drivers; now we're doing stuff on the compute nodes. We update the compute driver to the fake driver, that's what I was talking about, so we can issue the port attaches. We fix permissions on a directory, and now that playbook is completely done and our control plane is prepped: all the config files are in place, everything's written out to disk, the proper APIs are stopped and started. One of the things you'll see here, and I think I call it out, is that some of the VMs are changing to "no state". Eventually that's going to happen to all of them, and that's not a bad thing. Basically what is happening is there's no longer a valid compute driver: the fake driver doesn't return any information for the VMs that are running, so the state of the VM in the database, as returned by the API, just turns to "no state". That'll turn back to running when we fix it later, so trust me, you're fine here, the VMs are still running, do not panic, don't forget to bring a towel. So now we're actually running the control plane migration. Again, we make sure that the config file is there, and that's just a way to make sure the code is checked out and the config has been created. You can see there, there's the run of migrate-control, there's the run of migrate-security-groups, the run that updates the DHCP servers, and then we restart the DHCP agent. All of that finishes, and at this point you've created all the networks and put the DHCP servers on the proper IP addresses, and then we revert the MHVs, which is what we call compute nodes, and that changes them back to the real driver. So now, going down over here, you can see that when the VLANs got removed those hosts became unreachable, but now the DHCP agent has come up and the namespaces were created. We'll jump into one of the namespaces, and you'll see that I can ping one of the VMs from this namespace, showing there's connectivity now between the control plane and the VMs. At this point your VMs have connectivity back to your DHCP servers, so fear not, you can renew your IP address, which is really important. So at this point, this is where we finish that first slide, the detailed procedure for completing the control plane migration, and everything is back to normal. In a second here I'll go back over, and you'll see that the "no state" is turning back to running; eventually they'll all change back to running as the actual real compute driver starts returning proper states. There you go, they're all back to running. So now we're going to prep the compute node migration. This is where we put all of the Neutron values in the nova.conf for our computes and actually switch the compute nodes over to be talking to Neutron. You can see there, that's where we generated the files on the control node, and then I fetch them all and copy them out to the correct compute nodes, so everything's going to be in place; when we actually run the migration it'll have all the information it needs. So we copy them into place, and then you can see back over there, pings are still running, everything is still up, and your network is pretty happy.
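That namespace check is just the usual ip netns dance; the network UUID and the VM address below are placeholders.

```bash
# Find the qdhcp namespace for a migrated network, then ping a VM from inside
# it to prove the control plane can reach the tenant network again.
ip netns list
ip netns exec qdhcp-3f8c1d2a-4e5f-47c1-9a00-0123456789ab ping -c 3 10.2.41.15
```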
So when we do a net-list over here, you'll see the Neutron nets are now there, and the ports are all there. When we do the subnet-list, you're going to see that we created a bunch of different allocation pools, because there's no reserved-IP option in Neutron, whereas there is in nova-net, and some people have a piece of hardware or something in their tenant space that they don't want that IP address to get assigned to a VM. So it's a little bit ugly in the subnet-list, but basically what it did is create allocation pools that surround any reserved IP addresses, so Neutron can't assign them to anything. So now you can see over here we're on a compute node, and everything is set up for nova-net: it's got the br bridges with the VLANs, and you can see the vnet adapters and all those things. Then we run the compute migration, so this goes through, makes sure all the config files are in place, and actually runs migrate-compute. Now we jump back over and you can see everything has switched: it's no longer nova-net vernacular, now everything is Neutron. You can see the old nova-net bridges are still there, but all of the VLANs have been changed to Neutron's, all the tap devices are changed, all the bridges are attached to the proper Neutron bridges and all that goodness. Then we clean up the bridges; this is that step I said is kind of hit and miss. In this particular case it gets all of them but one, so when we run brctl you can see we got rid of all of them except that one, and that's mostly just to be safe; I don't want to go through and accidentally blast something. We did such a great job of keeping all these VMs up, I don't want to accidentally take one down. So at this point you're basically done: everything is under Neutron control, everything is happy, nova-net is not even running anymore, and its bridges have more or less all been deleted and destroyed. We can see that everything still looks great; all that IP information is being returned from the Neutron API now, not from the nova-net API. And then, as the coup de grâce, we'll jump back into this Ubuntu VM and see how our ping to Google is doing. Spoiler alert: it's doing okay. So there it is, you can see it's still running, it's still pinging. When I stop it, you can see there were 1,602 packets transmitted and, what is that, 1,600 received. So I dropped like two packets in the entire thing, which is tantamount to 0% packet loss. Then I take a little bit of time here to figure out how to create a new VM, because I don't often launch VMs from the CLI, so I had to look up the commands, but we'll boot a VM and we'll see that it comes up and works properly. Again, at this point everything is under Neutron control, so we're completely migrated, and the VMs had, as you could see, a modicum of downtime, if you could even call it that. As we move forward with doing software layer 3, this would be the point where we would migrate floating IPs and stuff like that; it's kind of a building-block approach, which is why we've only done the hardware gateway stuff so far. So we'll hop into one of the specific tenants, and you can see there it's got two VMs running, and then I'll figure out what a nova boot command looks like, and had I been less lazy I would have edited this better for you so it didn't take a bunch of time, but, you know, what can you do, right?
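For anyone who wants to reproduce that reserved-IP trick, it is just a subnet-create with multiple allocation pools that skip the address you care about. The addresses here are placeholders; assume 10.2.41.50 is the piece of hardware you want Neutron to leave alone.

```bash
# Two allocation pools that surround the reserved address, so Neutron
# never hands 10.2.41.50 out to a VM.
neutron subnet-create tenant-a-net 10.2.41.0/24 \
    --name tenant-a-subnet \
    --gateway 10.2.41.1 \
    --allocation-pool start=10.2.41.2,end=10.2.41.49 \
    --allocation-pool start=10.2.41.51,end=10.2.41.254
```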
I had 40 minutes to fill and about 25 minutes of content. That's a joke, I'm just kidding. Yeah, so as we move forward to do software layer 3, this is where you would do that. I'm starting to work on it; I haven't really come up with exactly what we're going to do yet, but procedurally it's going to be a very similar thing, where we take the information out of nova-net, stuff it into Neutron, and start agents and whatever else needs to happen so that the IPs move over. And again, you can prep everything ahead of time; the downtime I anticipate will be when you actually do the cutover and, you know, the switches are updating ARP or whatever. So you can see over here I'm slowly figuring out how to launch a VM, creating my security groups and all that kind of fun stuff. We'll just name it something that makes it obvious it's a post-migration VM. It's booting, and hey, that seemed to work, but does it actually plumb? Again, spoiler alert: yes, it does. Ah, I'm the worst, my kids hate me at Christmas. So you can see it came up there, it's active, it's running, it's got an IP address, we can ping it and it returns pings, and you can SSH to it if you actually type the correct IP address. The first time I go to the wrong VM, so I get a key denied, but then I realize my mistake and fix it, jump into the VM, and it can ping the world, and everything is happy. So now you have a situation where we were running nova-net, now you're running Neutron, you can create VMs, everything is happening the way you would expect it to happen, and your boss hopefully is going to give you a raise. That would be nice, wouldn't it? So that's it, that's how it happens, that's how it works. Are there any questions? If you have questions, please go to the mics, because they're recording this, and I can't see if any... oh, I think I see a silhouette. I have two questions, the first one just because I was five minutes late to the session. Sure. The migrate-control script you talked about, is that something that's out and ready for people to use, or was this meant as a proof of concept of a future thing? Yeah, so I'll make the slides available. I've got a bunch of code repos in there, with those Ansible plays and also all of that nova-net-to-Neutron migration stuff. But the caveat I gave at the beginning is, I mean, you can probably use it, it should work, but you might have to modify it, because I originally found it somewhere and it was for like Juno and Kilo, and I had to heavily modify it. But please, take it; actually, I can go back, if you want to take a picture of the thing, there you go. Those are kind of the repos: where I pulled it from, what I did, and then my Ansible stuff. So yeah, feel free to go for that, but I wouldn't recommend just running it. You're probably going to want to look at it and make sure you understand what it's doing first. But I mean, you can hit me up on Twitter. The second question, which might be out of scope, is that every time I revisit the issue of migrating away from nova-net, I'm told by Neutron people, and also by the network guy at my work, that Neutron doesn't support our current use case, which is having a single pool for all internal IPs. I think that it does. We can probably talk about that more if you're interested; I would have to understand more about your specifics in that area. What did your nova-network topology look like? We had individual tenant nets, and obviously they're not overlapping, but I know that nova-net supports the flat network model, like the quote-unquote FlatDHCP.
I'm pretty sure you can do that same thing in Neutron. I haven't done it, so that's why I'm a little hesitant to say for sure, but I'm pretty sure you can. But there wasn't like a big announcement that this thing that didn't work before now suddenly works. I know, yeah. Thank you. Hi, during your development cycle, how often, what percentage would you say, did the DHCP servers change IP addresses from what they were supposed to be? Almost every time, and that's just because it takes the first available address, especially after I implemented that allocation pool stuff, because it goes into the database unordered and the DHCP agent just pulls the first available IP out. So especially after I did that, it would come out to like .154, like, every time. So it's fairly repeatable, you can see what it's going to do and how it's going to do it, but at the same time, when the data gets shoved into the database, that's completely random. So that's why I just went ahead and did it the way that I did: basically what happens is I update the database and restart the agent, it tears everything down and brings it back up with the correct IP address, and I just found that to be a lot safer. And safer? Yeah, it is. Thanks. You said this was on Icehouse; is the Neutron you're running still Icehouse as well? It is. And then when you... oh, is it? Oh, I'm sorry, my mistake, yes, it's Juno Neutron. Yeah, thank you, Chet. And then when you switched, were there bugs that cropped up, or bugs that got fixed, or anything like that operationally that would be interesting between the two? You mean between Icehouse and Juno? No, between nova-network and Neutron. Yeah, but I mean, more than anything it's really just that we're trying to normalize our system, because we have some deployments that are running Neutron greenfielded, and some that are now obviously going to be brownfielded Neutron, so that was really the biggest impetus for it, to get everything flatlined. We also have tenants who just want to be running Neutron for, you know, whatever other features it offers, so that was the big thing. As far as bugs go, I mean, that's a whole other discussion, you know, why would you run nova-net over Neutron or vice versa, and I know that she would probably love to talk to you about that as well. Right, right, exactly. I mean, it's the Windows versus Mac thing again, right? Which one's better? Neither is better or worse, it's just, what are you trying to accomplish and what's the best tool for the job? We started with Neutron, I was just curious though. Yeah. I'm sorry, so he said it's basically apples and oranges, was what Chet was saying. But most importantly, if you're running nova-net and you think you need to go to Neutron, don't just take someone's word for it. You should probably stand it up, greenfield it somewhere, make sure it does what you want it to do, and make sure that it does it well, and then make a decision whether you actually want to be running Neutron over nova-net. But also keep in mind that, again, the community is sort of moving away from nova-net, which a lot of people find to be an impetus, but we also will keep supporting it as long as we have to; your support may just be slightly diminished. Thanks, yeah, thank you. Sir? So you're running Icehouse nova-network with VlanManager and with an external gateway, an L3-type setup? Correct, it was a hardware external gateway.
Were you running multi-host with nova-network? No, no. So you weren't doing the per-nova-compute SNATing and DHCP? Correct, all that happened on the control nodes, and then we had Pacemaker running HA to manage all of the failover and whatnot. Are you aware, floating IP and L3 stuff aside, that in multi-host every nova-compute host runs its own DHCP server and fakes it out, all using the same .1 address as the gateway, with ebtables rules and all that stuff; would that migrate over? That's a really good question. I don't know. And so when you would migrate over, conceptually the new Neutron database would reuse all the .1 dnsmasq server IPs? Yeah, so that way you would have one kind of DHCP server host per tenant network? Yes. So it's not that it wouldn't migrate over, it just would migrate over with different caveats in architecture. Right, yes. Yeah, that makes sense. The second question I had, which was kind of related to the previous one a bit: given what I understand of Neutron in the Icehouse/Juno timeframe, there's a pretty different network security feature set between nova-network and Neutron, things like IP address spoofing, MAC address spoofing. Did you put any mitigations in place to prevent that? Because I believe the stock Neutron stuff allows just anybody to spoof at that point. Yeah, so we did. Previous to this I'd done a lot of work to help with that, because one thing in particular you're talking about is the ARP spoofing and stuff like that, and we basically had to do fake layer-2 protection at layer 3, and some stuff like that. So we've been able to mitigate it. I wouldn't say it's certainly fixed, but yes, you're absolutely right. But you're mitigating it up at the L3 level, not at the...? Yeah, so basically what happens is you can poison at the bridge level, where if you've got two VMs on the same host, one could spoof the other's address. Or, well, there are layer-2 protections because it's all VLAN-segregated, so if there's two within the same project VLAN, yeah, a tenant can do it. But because we have per-tenant VLANs and there's nothing shared, we felt okay. So what can happen is you can still spoof, like VM 1 can still spoof VM 2, but we've stopped it at layer 3, so it won't come back, so you can't actually steal any information out of it. We felt okay with it because it's only within a tenant, and if there's a compromise in the tenant then that's a different problem to solve anyway. But then, on that same topic, any major issues with security groups moving over? Yeah, but we just shove them in the database and start the layer-2 agent, which creates all the proper iptables rules, and everything works. All right, we're about out of time, so this will be the last question. Oh yeah, I was just going to add that we're kind of still stuck on Nova Network in Kilo, because we use it in multi-host mode, and DVR looks like network witchcraft. You're not wrong. We have like a couple hundred tenants in some of our clusters, and we weren't confident that it would scale to, you know, 200 Neutron routers on every single compute node across hundreds of compute nodes; it's like an N-times-M kind of disaster waiting to happen. So I just wanted to add that to the discussion: that really complicated things a lot for us, and we're probably looking at a greenfield kind of rebuild to get to Neutron.
Right, and that kind of goes back to what Chet was saying: at the end of the day you've got to look at your scenario, and that's why I started out by saying you've got to look at what your start is and what your end is, figure out how you can do it, and decide if you even should do it. This isn't necessarily for everyone, but this is what we did, and I think it's a pretty sane path. So I appreciate the comment. Thank you guys, I really appreciate it, enjoy your summit. Now that the most important session is out of the way, you can get to drinking.