Good morning, everybody. Thanks for joining us for this early bird special here. Today we're going to talk to you about our experience with Open vSwitch and how we went about moving to Linux Bridge in Havana. My name is Kevin Stevens. I'm on the Rackspace Private Cloud team. I started at Rackspace in 2006 and have been on the Rackspace Private Cloud team since 2012. I'm James Denton. I've been an RPC network engineer since 2013, and an OpenStack user since 2012, in the Essex release. Oh, and if DevOps was a spectrum, we'd be pretty far over on the operator side. We both know our way around Python and config management tools pretty well, but still, yeah, not developers. The approach in this talk comes from a history of system administration, not so much programming. So just to provide some context around our talk, we'll give you an overview of the history of networking in the RPC product: what we had, what we ran into with Open vSwitch, how we swapped it out, and what you can expect from each one. Rackspace started building private clouds in 2011 using Nova Network. Yeah, the first real release was the Folsom release. We then began architecting and building private clouds in customer data centers, not just Rackspace DCs. In 2013, we experienced growth up to over 100 customers with our Rackspace Private Clouds. And we just recently released our v9 product based on Icehouse. The big takeaway there is the four-nines uptime SLA. Yeah, so our main concern really is stability, reliability and uptime. Yep. Great. So our evolution of networking in RPC looks like this. The first release, being Folsom, was Nova Network based, using Linux bridges. When Grizzly came out, we adopted what was then Quantum, and Open vSwitch. The next release, Havana, moved to Neutron (Quantum with a new name), with Open vSwitch as well.
And based on some of the issues we've had, we decided to settle on Linux Bridge and the ML2 plugin in our most recent Icehouse release. So people ask us, why Neutron? Why Quantum? At the time, Nova Network was listed as being deprecated in the next release, and that's kind of been the cycle for the last two years now. Nova Network was referred to as old and busted, and Quantum was sort of the new hotness, right? Everybody was moving to Quantum. Why Neutron with Open vSwitch? You may know that Open vSwitch is heavily pushed by the community. On OpenStack.org, if you go to the installation documentation, you'll find most of it really does relate to OVS. Not a lot of mention of Linux Bridge. Packagers prefer Open vSwitch. And really, we thought we wanted and needed overlay networking; that's what our customers were asking for. "If it dies, it dies," right? That's sort of every OpenStack operator's attitude. And when you're talking about compute resources, that is the idea behind it. You know, if an instance dies, you create a new one. If a compute node goes down, well, you should have built your application to be cloudy. When you're talking about the infrastructure, however, you need that to be as stable as possible. Yeah, and just a brief overview of the problems we had. For example, one of our customers was running on 1.10, and their hypervisors were crashing quite often. So we upgraded them to 2.0.1, which was, I think, the latest available version at the time. And then we saw data corruption all the way up through the stack to layer 7. So we downgraded them to 1.11, and at that version we were still running into segfaults. So all the instances on a particular node lose connectivity, because the ovs-vswitchd process would die. The OVS plugin doesn't know, and all your flows are gone, basically. But, I mean, if that's what we have to choose from, that's kind of where we...
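For operators hitting the same symptom, here is a rough sketch of how that failure shows up on a host. The commands are standard OVS tooling, but the bridge name br-int is the usual Neutron default, not something confirmed in this talk, and this obviously only makes sense on a host actually running Open vSwitch:

```shell
# If ovs-vswitchd has died, the dataplane for every instance on the host
# is gone even though the Neutron OVS plugin agent still thinks it's fine.
pgrep -x ovs-vswitchd >/dev/null || echo "ovs-vswitchd is not running"

# After the daemon restarts, the agent-programmed flows are lost; a nearly
# empty flow table on the integration bridge confirms it:
ovs-ofctl dump-flows br-int
```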
Yeah, I mean, you know, Open vSwitch now is at 2.3. A lot of these problems have been addressed in the more recent code. But when customers are experiencing these problems, and you upgrade them, and you still have issues, they start to lose faith in that, and they're looking for a more robust solution. One other point we'd like to bring up is the introduction of broadcast storms into the environment. So when you would see that is when... The Neutron plugin agent is responsible for building the flows. And so if you restart Open vSwitch without restarting the plugin agent, the switches default to NORMAL mode. And then, because the tunnel bridge, the integration bridge and the provider bridge are effectively cross-connected, you've now introduced the ability to broadcast traffic and forward it out through all ports. And when you have a couple of hosts in this situation, you know really quickly, once your environment starts to lag, that you've got a storm of some kind going on. One more thing that might be worth mentioning is the extra complication that comes with having to compile the OVS kernel module for your kernel. Every time you upgrade, you know, that's just added complexity. So why did we go with Linux Bridge? Basically, for us, reliability and stability are more important than being on the cutting edge. There are fewer moving parts, and it's easier to troubleshoot. Yeah, and Linux Bridge is still a supported plugin in the community, right? So if we do run into an issue, we can look to the community to assist us. And when you look at the knowledge base out there, Linux bridging has been around for a long time, even prior to OpenStack. I think any Linux admin is familiar with bridging. And Open vSwitch, while it may have a lot more features than Linux bridges, you know, your normal administrator may have a hard time grasping the concept of flow tables and such.
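Going back to the broadcast-storm scenario described a moment ago, a hedged sketch of how you might detect and clear it; the bridge and service names here are typical Ubuntu/Neutron defaults of that era, not taken from the talk:

```shell
# An OVS restart without an agent restart leaves bridges with only the
# default "actions=NORMAL" flow, i.e. plain MAC-learning forwarding. With
# br-int, br-tun and the provider bridge cross-connected, broadcasts can loop.
ovs-ofctl dump-flows br-tun | grep -q NORMAL && echo "br-tun in NORMAL mode: storm risk"

# Restarting the plugin agent reprograms the proper flows:
service neutron-plugin-openvswitch-agent restart
```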
But basically it's tried and true technology. It's been around over a decade. So we do lose a few things by moving away from OVS, like, of course, the flexibility that you get using overlay networking. I would like to add, though, that prior to the introduction of ML2, if you used the Linux Bridge plugin, you had no ability to build overlay networks. ML2 brought with it the ability to use VXLAN overlays with Linux Bridge. Open vSwitch, on the other hand, supported GRE, VXLAN, and STT if you're using NSX. And in the Juno release, as it currently stands, we wouldn't be able to take advantage of the DVR functionality. And basically anything that you could do outside of OVS, or outside of Neutron, excuse me, like QoS and stuff like that. Yeah, Open vSwitch has a very broad feature set, right? But not a lot of it is utilized by Neutron currently. Here's sort of the thought process that went into... Right, so once we decided we wanted to migrate customers from Open vSwitch to Linux Bridge, we planned out a couple of different scenarios for what that might look like. The most obvious choice would be to just blow it all away: delete your networks, install the Linux Bridge agent, remove OVS, and recreate everything. Except that, you know, customers don't really want to hear that. They're not real keen on you having to recreate all their networks. The second possibility would be to stand up a migration environment. There are definitely different ways to go about this, but one way would be to snapshot everything that you really care about, your base images, import those into the new environment, maybe rsync some data that you need, build out the new instances, copy over data, and then cut over. Yeah, but doing so is a pretty big capital investment, right, especially if you already have a large cloud.
If you are looking at standing up new hosts and migrating, well, then you have to have an exact replica of that hardware. Not everybody has something like that on hand, so it becomes a very expensive proposition. The third approach might be to try and figure out how to switch it out on the same gear, right? So we thought, well, it should be easy, right? You stop the services, update the database, change some configuration around, restart things, and it just magically works, right? You can see Indiana there trying to swap it out without anybody noticing. If you remember the movie, that didn't work out too well for him. And I can tell you we destroyed a few labs trying to work out this process, but we've got something down pretty good. So, some of the issues with migrating. The third approach is what we ultimately decided to go with, based on our success in the labs, of course. One of the issues we discovered along the way is that the monolithic Open vSwitch database schema is not the same as the Linux Bridge database schema. ML2 solved that problem by building a common schema that all the plugins get to use. So it became a matter of figuring out, once you were using the ML2 schema, what needed to change. Another issue with migrating is that in OVS we were utilizing GRE tenant networks, and with Linux Bridge, the opportunity to use overlay networking may not be possible. VXLAN was not introduced into the kernel until around 3.8; 3.9, I think, is where most of the functionality was there, so you would have to upgrade all of your machines to 3.9 or higher. And then anything in the database that referenced GRE, we needed to figure out a way to convert to VXLAN or VLANs. We decided to convert everybody from GRE tenant networks to VLAN tenant networks.
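Since the VLAN-versus-VXLAN decision hinged on that kernel requirement, here is a small, runnable sketch of the kind of check we mean; the 3.9 cutoff is the version cited above:

```shell
# Native VXLAN support needs roughly kernel 3.9+; anything older would force
# reboots for kernel upgrades during the maintenance window.
kver=$(uname -r)
major=$(echo "$kver" | cut -d. -f1)
minor=$(echo "$kver" | cut -d. -f2)
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 9 ]; }; then
    echo "kernel $kver: VXLAN-capable"
else
    echo "kernel $kver: too old for VXLAN; VLAN conversion avoids a reboot"
fi
```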
What we found was that most of our customers, while they were utilizing GRE tenant networking, weren't really truly multi-tenant. So when we deploy an environment, we might build a VLAN provider network that would sit in front of a Neutron router, build them one GRE tenant network, and they may end up using that for the life of the cloud. They may spin up a couple of other networks along the way, but we're not talking hundreds of networks here, maybe a dozen. Like 40 at the most. Yeah, we did do a migration where the customer had about 40, and when you do migrate from GRE to VLAN, you do need to have 40 VLANs available that you can trunk down to the hosts. And the reason we decided not to utilize VXLAN in this migration is because of what you just said: we weren't utilizing the overlay networks too much anyway, and if we're having to upgrade the kernel and set up VNIs and all that stuff, that's a much more impactful maintenance. Yeah, if we're intending to do this migration on live gear, we certainly don't want to introduce reboots for kernel upgrades, the L2 population driver that you would need for VXLAN, and a lot of other things that would have introduced potential issues. We wanted to limit the scope of the maintenance, so we decided to convert them to VLANs. That's sort of a walkthrough of the whole thought process when we decided to move forward in figuring this out for our customers in production. Yeah, so in preparation, we needed to determine what sort of dependencies would be needed. This would include any Python modules that the Linux Bridge plugin and ML2 needed: six, stevedore, a couple of modules there. We needed to figure out some way to convert the existing monolithic OVS database to the ML2 schema. We needed to then figure out, once that database was converted, what fields and tables needed to be updated to change it from Open vSwitch to Linux Bridge.
We also needed to figure out which configuration files needed to be changed. That would include nova.conf, the addition of an ML2 and a Linux Bridge configuration file, and neutron.conf. Which services would need to be disabled: Open vSwitch would need to be disabled there. And then all of the networking services would need to be restarted. And we also needed to develop a rollback plan. This was very important because, while we had tested this in the lab, there's really no guarantee that anything's going to work when you try to implement it in production. So we did our best to implement a rollback plan, and we'll try and talk about that a little bit. Yeah, for example, we had one customer that was running the load balancer agent, and that just threw another wrench into the system there. So then, part of our maintenance plan is to define what a successful outcome looks like. We go through all this preparation, determining what's needed, but then how do we verify that the changes we've made actually indicate that the migration worked? Some of the tests that we would want to execute: verify that instances were able to gain a DHCP lease after the migration; verify that instances were still accessible through floating IP functionality or directly, and that the instances still had outbound connectivity; verify that security groups were still functional. And more importantly, we needed to verify that the instances themselves were placed into the correct bridge. Linux Bridge and Open vSwitch use two different bridge names. There's some stuff that we'll see a little later on where, if it's not placed on the right bridge, it's just not going to work. And finally, will the changes survive a reboot? So what we have pictured up here is your standard Open vSwitch network diagram for the network node. We've got our DHCP and router namespaces, two different networks there.
All connected to the integration bridge, which is then connected to your provider bridge for VLAN traffic and your tunnel bridge for tunnel traffic. The compute node looks very similar. The instances are connected to qbr Linux bridges, and that's to provide security group functionality if you have it enabled. Those bridges are then cross-connected to the integration bridge, which is then cross-connected to both the provider bridge and the tunnel bridge. So the first steps here were all preparation, right? We haven't made any changes to the environment yet. First step: back up everything. This includes all the databases, just in case you make a catastrophic mistake, and all the config files that we'll be modifying. Then we had to figure out a way to migrate the database. So somebody in the community developed a script called migrate_to_ml2.py that was meant for Icehouse, and now Juno, but we were running these environments on Havana. And if you try to execute that script as is, it's not going to work. So we had to kind of debug that in our typical sysadmin way, probably not the most efficient way, but we got it working. And we used that to modify an existing copy of the database. So we created a database called neutron_ml2_migration and executed the script against that database. Once the data was migrated, we would then run manual SQL commands to update the appropriate tables. This would include the network segments table, the network ports table, and the VLAN allocation table. That would involve changing anything that referenced GRE to VLAN, changing the segmentation ID from an arbitrary GRE segmentation ID to a real VLAN ID in your data center, and then setting a provider bridge where one didn't exist, because tenant traffic that would previously have flowed through the tunnel bridge is now going to flow through a provider bridge instead. The next steps would include installing the Linux Bridge plugin, which at this point doesn't have a configuration.
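To make the conversion logic concrete, here is a runnable toy version of those manual SQL updates, using SQLite in place of the real MySQL copy and a stand-in table loosely modeled on ML2's network segments table; the real table and column names should be checked against your actual ML2 schema:

```shell
db=/tmp/ml2_demo.sqlite
rm -f "$db"
sqlite3 "$db" <<'SQL'
-- stand-in for ML2's network segments table (illustrative schema)
CREATE TABLE ml2_network_segments (
  network_id TEXT, network_type TEXT,
  physical_network TEXT, segmentation_id INTEGER);
-- a GRE tenant network as the migration would find it
INSERT INTO ml2_network_segments VALUES ('net-1', 'gre', NULL, 1);
-- the conversion: gre -> vlan, an arbitrary tunnel ID -> a real data
-- center VLAN ID, and a provider bridge label set where none existed
UPDATE ml2_network_segments
   SET network_type     = 'vlan',
       segmentation_id  = 200,
       physical_network = 'ph-eth1'
 WHERE network_type = 'gre';
SQL
sqlite3 "$db" "SELECT network_type, physical_network, segmentation_id FROM ml2_network_segments;"
```

The same shape of UPDATE, pointed at the copied database, is what the manual step amounts to.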
Ubuntu will start that plugin automatically, but there's no detrimental effect there. Sorry, it starts the agent automatically. Update the SQL strings in the configuration files, which includes neutron.conf. Configure an ML2 configuration file and a Linux Bridge configuration file. The Linux Bridge and ML2 configuration files would look very similar to the previous OVS config file. We'd keep the same provider bridge label, because that's how our networks are created in Neutron; we don't want to have to change any of the networking at all, logically. And then we'd have to instruct both Neutron and Nova to utilize the ML2 or Linux Bridge VIF drivers versus the OVS drivers. That ensures that Nova is aware of the plugin used and will place the taps in the correct bridge. So at this point, the maintenance begins. We're going to lose connectivity to the environment. We stop Neutron services on all nodes so no more changes can take effect. We remove the data plane port from the OVS bridge, which means removing the physical interface, like eth1, from the provider bridge. We then pull the instance tap interfaces out of the qbr Linux bridges. And the reason for this is part of our rollback plan. One, we need to ensure that the Linux Bridge agent is able to take those tap interfaces and place them into the brq Linux bridges. But at the same time, we don't want to destroy any of the existing Open vSwitch OVSDB information. This is important when you're rolling back, because what we're doing here is sort of manual manipulation of the environment. The agents don't really know what's going on, so we may be forced to manually place those tap interfaces back into those bridges. The minute you start destroying the integration bridge and provider bridge, there's really no easy recovery from that. Next, remove the router and DHCP interfaces from the OVS integration bridge on the network node, and then stop Open vSwitch altogether.
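Condensed into commands, the maintenance steps above look roughly like the following; every interface, bridge and tap name here is illustrative, and the service names assume the Ubuntu packaging of that era:

```shell
# Maintenance begins: connectivity drops from here on.
# 1. Stop Neutron services on all nodes so no further changes take effect.
service neutron-server stop
service neutron-plugin-openvswitch-agent stop

# 2. Pull the physical data-plane port out of the OVS provider bridge.
ovs-vsctl del-port br-eth1 eth1

# 3. Pull each instance tap out of its qbr bridge, but leave the OVS
#    bridges and the OVSDB contents intact so a rollback can re-plug them.
brctl delif qbrXXXXXXXX-XX tapXXXXXXXX-XX

# 4. On the network node, detach router and DHCP ports from br-int,
#    then stop Open vSwitch altogether.
ovs-vsctl del-port br-int qr-XXXXXXXX-XX
ovs-vsctl del-port br-int tapXXXXXXXX-XX
service openvswitch-switch stop
```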
So now our Linux Bridge agent kind of has a clean slate to work with. This next slide shows what it looks like when the tap interfaces have been removed from the qbr bridges. Your instances lose connectivity. It also shows the eth1 interface being removed from the provider bridge. The next slide here: stopping the Open vSwitch services removes the bridges. So now we have our VMs sort of sitting in the nether, ready for a restart of services. So here's the moment of truth, right? Start up Neutron services and restart your Nova compute services, and the Linux Bridge agent, if everything's been configured appropriately, will start to plug everything in. When all goes well, here's our network node post service restart. Notice that we've reduced our bridge count significantly. There's one bridge per network, and our router is plugged into both networks via the qr ports of the router. The qg port is plugged into a bridge that corresponds to our provider bridge. Notice that GRE traffic, which was previously more or less tagged by Open vSwitch in flows, is now utilizing VLAN subinterfaces in each bridge. So maybe we had GRE tunnel IDs 1 and 2; we converted those to VLAN IDs 200 and 300. And our DHCP namespaces are plugged into each of those networks respectively. The compute node looks real similar. One network per bridge, and there happens to be one VM per network. The star on those tap interfaces is used to represent security groups. So if you were to look at iptables after the migration, you'd see that the names of the chains have changed, but the functionality is still the same. And if we added additional virtual machines to existing networks, you would see multiple tap interfaces in those bridges. Success, right? Spock detects win. Now, when things go wrong: a couple of failure scenarios here, and these were all figured out in the lab and, really, the first sort of live migration here.
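Before the failure cases, a sketch of the checks behind "success"; output and names are illustrative, with brq bridges and eth1 VLAN subinterfaces standing in for whatever your environment actually uses:

```shell
# Agents checking in, and the linuxbridge agent present:
neutron agent-list

# One brq bridge per network, taps plugged in, VLAN subinterface uplinks:
brctl show
#   bridge name      interfaces
#   brqNETWORK-ID    eth1.200  tapDHCP-PORT  qr-ROUTER-PORT ...

# DHCP namespaces wired up, and security group chains rebuilt under
# the new (Linux Bridge flavored) chain names:
ip netns exec qdhcp-NETWORK-ID ip addr
iptables -S | grep -i neutron
```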
So if your instances are unresponsive, you want to do the normal troubleshooting of making sure your instance is actually sending DHCP requests. You can run tcpdump on the tap interface, the bridge and the physical interface to make sure that that connection works all the way through. And then, if you can verify the traffic, you're seeing the traffic on the physical interface, but you're not seeing it on the other hosts, well, then you want to make sure that your VLANs were trunked appropriately. With GRE you didn't have to worry about VLAN tagging; with Linux Bridge and VLANs, you do. One other scenario we found was that post-restart, the IP addresses disappeared in a nova list. Now, you could do a nova interface-list and see the interfaces and the corresponding IPs, but, you know, panic sets in for your admin when they do a nova list and don't see addresses. And what we found is that there is a field in a Nova table, a network_info field in the instance_info_caches table, that caches information about the network that corresponds to that particular instance. That is where the information for this output comes from, and it also tells Nova how to plug the instance in. So, oh, can you back up? Right, so if that is missing, one way to get that information back is to do a hard reboot of the instance, which will rebuild that cache, or you can add an interface to the instance, which will also trigger a rebuild of that cache. And subsequently delete that interface; it's not needed. If you're unable to boot new instances and you get an error state, normal troubleshooting methods apply; maybe you have exceeded the quota, or, you know, the Nova log might tell you where to look. But we really haven't run into any issues with that.
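The cache-rebuild workaround just described, as commands; the UUIDs are placeholders, and this assumes the python-novaclient CLI of that era:

```shell
# Option 1: a hard reboot repopulates the network_info cache in
# Nova's instance_info_caches table.
nova reboot --hard INSTANCE-UUID

# Option 2: attaching a throwaway interface also triggers a cache rebuild;
# detach it afterwards since it isn't needed.
nova interface-attach --net-id NETWORK-UUID INSTANCE-UUID
nova interface-detach INSTANCE-UUID NEW-PORT-UUID
```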
If connectivity to your instance is good, you're seeing DHCP traffic all the way through, but you notice that your DHCP tap interfaces are not in the bridge, you might find errors that refer to a "binding failed" message in the DHCP log. And what we found on Ubuntu is that there is a file referenced by the Neutron init script, /etc/default/neutron-server. In that file, you'll see a Neutron plugin configuration file that points to Open vSwitch. This was sort of a hidden configuration that we needed to go back and replace with the ML2 configuration path. Lastly, if your brq bridges aren't being built on your compute nodes, then you want to verify with neutron agent-list that the agents are actually checking in, and that, you know, you actually installed the Linux Bridge agent. So, we've got some benchmarks here, and I'd take these with a grain of salt. We're using Intel 10G X520 NICs, no overlay offloading, best we can tell. We compared traffic between Open vSwitch with GRE, VXLAN and VLAN, as well as Linux Bridge with VXLAN and VLAN. What we found was that VLAN performance with OVS and Linux Bridge was very comparable. Because we don't have offloading for overlay networking, we took a pretty big hit on VXLAN for both Open vSwitch and Linux Bridge, as well as GRE. So what we're seeing here is an iperf3 benchmark between two hosts: aggregate traffic maybe exceeds 9 gigabits per second for VLAN, and between 1 and 2 gigabits for GRE, for any overlay. The next benchmark was an SCP file transfer, a 10 gig file between hosts. You'll notice on the next slide that GRE, between two hosts for a 10 gig transfer, took about 90 to 110 seconds, almost 2 minutes, at about 90 megabytes per second. When we moved to VLAN, we got about half the time, 60 seconds, at almost 170 megabytes a second. Now, we're very well aware of new hardware out there, Mellanox and new Intel NICs that do provide offloading.
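For reference, the hidden Ubuntu configuration mentioned in the troubleshooting discussion above amounts to a one-line change; the exact plugin config paths depend on your packaging, so treat these as illustrative:

```shell
# /etc/default/neutron-server is sourced by the init script and points
# Neutron at a plugin config file.

# Before (still pointing at the Open vSwitch plugin):
# NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"

# After (pointing at ML2):
NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"
```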
The performance of those NICs is very comparable to straight VLAN, so we'll be moving to them in the future, but these are what a lot of our customers have installed, so these were very realistic benchmarks for us at this time. So OVS is definitely, we think, the way of the future, but right now it wasn't working for our customers, so this is what we chose to do. I mean, Linux Bridge gives us pretty much everything we'd want out of Neutron at this time anyway. We showed you how we went about migrating an existing Havana environment to Linux Bridge, and how we improved stability with performance comparable to OVS. Yep. So, a wise man once said, OpenStack is hard, right? I think we can all agree on that. Our job in support and architecting at Rackspace is to simplify these environments for our customers, right? Customers and users want to consume the cloud. They don't want to have to worry about the stability of the infrastructure. And while we know that Juno, Kilo, all the later releases and the new Open vSwitch releases are going to provide that stability, it's not here yet. And for our customers on Grizzly and Havana, the six-month release and upgrade cycle is kind of tough, right? So you're going to have customers that stay on releases for a year or two, but they're going to want to take advantage of some of the benefits of the later software, so we've done what we could. We knew that Open vSwitch was a problem for these customers, and if our goal is to stabilize them, then we want to move them to a platform that they can benefit from. So, for them, it's Linux Bridge. So, any questions about the methods or the presentation? You know what?
Yeah, we do have the new migrate script, the migrate_to_ml2.py for Havana, available at this GitHub repo, as well as a text file that contains, basically from start to finish, all of the steps that we would execute for this maintenance. Most of our steps are done in Chef, but, you know, these are manual steps using DSH. Right. We had initially done everything manually; then, because our current 4.2 product uses Chef for config management, we Chef-ized it, and then went back and figured out again how to do a manual deployment or migration. So what you can see there is pretty dirty, but it does work, and if you were to use it on your own environment, you'd want to take great care to make sure that it fits the needs of your environment. Yes, sir. Unfortunately, I just missed the first five minutes, where you guys probably explained why you're doing this stuff. So my first impression was like, okay, Rackspace, what are you doing now? I mean, the entire community is moving towards SDN, right, and fully utilizing OVS. And, I don't know, the mainstream is towards SDN, and here you are coming back to Linux Bridge. Did you guys really work with the OVS community on what the problems might be? Because this seems like kind of a ridiculous direction, right? Sure, well, I don't know. I think SDN, that term gets thrown around a little too much. If we look at how this really operates, the Neutron agent is responsible for building flows so that Open vSwitch knows to forward traffic appropriately. You know, the agent is really creating a flow that translates a VLAN on the integration bridge to a VLAN on a provider bridge, or creates a flow that translates a VLAN to some sort of tunnel endpoint. And that's really all it's doing. It's not really leveraging the full scope of features that Open vSwitch might provide.
And so if we're really looking at this from a Layer 2 standpoint, plugging taps into bridges and giving them Layer 2 connectivity, we feel that Linux Bridge can offer those sorts of features. But, you know, moving forward, we might find that it's more limited as Neutron takes advantage of more of the Open vSwitch features. Just a quick question: did you guys use the multi-processing capability of the OVS module? Are we what? I'm sorry? The multi-processing capability of the OVS module. So, in the later releases of OVS, like post 2.0? Yeah. So what we found, and we did mention this earlier, was that during this process we had upgraded customers to what was then maybe the most recent version of OVS, or the most recent version that Cloud Archive had, and they still experienced issues. And you get to a point where your customers, when they're utilizing these for production, start to lose faith, right? So what we're discussing here is more of a stopgap for some of the customers that either can't or don't want to upgrade to 2.3 or later. They may have possibly even been churn risks, you know, that are looking at OpenStack, taking in the problems, and blaming OpenStack rather than just one small component of it. But it's a very important component, right? Without the networking, none of it works. So, while we are advocating Linux Bridge today, we are certainly not opposed to Open vSwitch at all. In fact, our public cloud uses it heavily, and they have the ability to stay real close to trunk, and they work real closely with the development team there at Nicira. Thank you. You just mentioned that your public cloud uses OVS. Vandy Hill described that yesterday. What I was wondering is, do you have clients that ever migrate workloads from that public cloud into a private RPC? And when they do, do they have VLAN issues, or any other issues and patterns in that migration?
So, if we're talking a straight data migration, most of the network changes are really transparent to them. Our private cloud environments look a lot like dedicated environments in terms of a traditional hardware firewall, hardware load balancer, and maybe then the private cloud hypervisors. We have not experienced what you mentioned, customers experiencing VLAN issues or anything like that. We do have a product called RackNet that allows public cloud users to route traffic to private cloud over back-end networks. In fact, we did just release a new version that utilizes NSX, and customers are able to create networks in public cloud that can then span to private cloud. I can't speak to that too much because it was just released, but I suspect, as with anything, we will run into issues. You said at the beginning that one of the motivations for moving to OVS was that customers were asking for overlay networks, but then you've kind of moved away from that, and I was just wondering why customers thought they needed the overlays, and why they don't now. Well, I think at the time, and especially now, people that aren't very familiar with this hear the buzzwords, right? Oh, I need overlay networking. What does that really mean? Overlay networking, the greatest thing since sliced bread, right? It allows tenants to create networks without having to touch the infrastructure, and that was one of the main motivators for moving to Open vSwitch at the time, and it still is a big motivator. But what we found over the last year and a half or two of using OVS is that while customers came in wanting overlay networking, maybe without understanding why, and it really fits a multi-tenant model and a dynamic network creation model, a lot of our customers stick to the networks that they build initially, and so the benefits of having overlay networking really aren't there, right?
Especially when you look at the performance hit that we've seen on some of these NICs, too. Just to clarify, though: in our latest RPC release, we do have support for overlay networking. Right, yeah. So part of the reason why we didn't support overlays in this migration, and sort of converted users from GRE to VLAN, is because we didn't want to introduce kernel updates and anything else that we weren't really as sure about. So, simplify. That was kind of the message. They weren't taking advantage of it anyway. Right. Yes, sir. Well, I just have a question: you guys solved the Layer 2 connectivity and stability, but what about Layer 3? Layer 3 networking like DVR; you're losing the capability of DVR, right? So do you have some problem with the scale-out of the L3 agent, or with the routing of your network? By moving to Linux Bridge? Yeah. Not that we've experienced. No, so, I mean, we understand that by settling on Linux Bridge, especially with the new release, we won't be able to take advantage of DVR in Juno, but I suspect that the community will more or less port that functionality to Linux Bridge, and we'll investigate it then. Right now it's really not a big concern for us. I think we're more interested in the VRRP capabilities, the HA router capabilities there. And we do have a lot of customers that don't leverage floating IPs; they actually leverage some external gateway, usually a hardware firewall or load balancer. I just wanted to mention that by the time we get to Juno, we may well be using, or we'll always be able to use, Open vSwitch again. Yeah, maybe, right? Nothing's off the table. I had a question about just that. Since you're doing custom migrations, have you looked into what it's going to take to get back onto trunk or master? If you had to go from where you are currently to, like, Icehouse or Juno, isn't that going to be an issue for people who do this migration?
So, unfortunately, the folks that are on our Havana environments right now use a different deployment model. So if they wanted to move to Icehouse, it's sort of plan B, right? We stand up a migration environment, and they would migrate their stuff over. We don't have an upgrade plan right now for Havana to Icehouse directly. So the migration here is really to help the customers that don't want to migrate to Icehouse gain some of the stability that we've found in that release. Well, I give you guys credit for going back to Linux Bridge, because you bring up a good point that stability is the number one priority for customers, and my experience over the last several years with OVS is that it's a pain in the ass, it's not stable, and how can you run production systems on it? I've recently tried to actually get Linux Bridge to work in Icehouse and Juno using the VXLAN L2 population drivers and had no luck, but it sounds like you guys were able to actually get that working in your test environment, even though you went with VLANs instead. Is that correct? Well, so we are actually using Linux Bridge with ML2 and VXLAN with the L2 population driver in our Icehouse release. One of the things to... You may want to verify that you're running an appropriate kernel. There was a problem with VXLAN; I can't remember the exact kernel version. Part of the problem with some of this stuff is trying to keep track of what's compatible with what. Maybe that's the first thing to look for, and then we're happy to talk to you after this, or we can keep in touch. I saw the VXLAN module loaded, so I didn't think I had to upgrade the kernel, so maybe I'll talk to you after. I guess just a general question: it's really difficult to find any kind of reference implementations, very little documentation, so if you guys have gone through that, documenting it somewhere would be a huge, huge help. Yeah, absolutely.
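For anyone attempting that same Linux Bridge with VXLAN and L2 population setup, here is a sketch of the relevant ml2_conf.ini pieces; the section and option names follow the ML2/linuxbridge agent configuration of that era, and the VNI range, local IP and interface mapping are placeholders to verify against your own release:

```ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_vxlan]
vni_ranges = 1:1000

[vxlan]
enable_vxlan = True
l2_population = True
local_ip = 172.16.0.10

[linux_bridge]
physical_interface_mappings = ph-eth1:eth1
```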
We do have a blogging site on our developer side, and after the summit, one of the things I'd like to get back into is providing these how-tos to the community; one of those could be getting Linux Bridge with L2 population working properly. Thanks. Yeah. Great. Well, I think we've hit our time. If you have any questions, feel free to reach out to us down here, or we're at the Rackspace booth throughout the day. Enjoy the rest of the summit. Thanks for coming. Thanks, guys. Thanks.