Okay, good morning, everybody. How's everyone doing? Good? It's the last day, it's about done, right? Great. I'm Somik Behera. I was one of the founding co-developers on the Quantum project, which became the Neutron project. We did a similar version of this talk about three years ago, on why we need Neutron. Since then Neutron has shipped and gone through quite a few releases, and there's still the question: is it right for me? What does Neutron do for me? Does it even work? When does it work? So this is my attempt at restating those first principles and seeing how far we've come. Are we actually meeting those expectations? When does it work? Who is using it? And then we'll take some questions and hopefully bring some clarity to this topic. Great, let's get started.

If you remember the keynote address from the first day, or actually the second, Troy from Rackspace said that we in the OpenStack community are the Rebel Alliance, on the left side over here. We're disparate groups who love to argue, but we argue to come together, and then we align and move forward. The reason I show that, with the Federation's droid army on the other side, is that for those of you who aren't Star Wars fans, the Rebel Alliance fights the Federation. The Federation is stoic, a single model, everything one cookie cutter, very robotic, monolithic. On the other side you have very disparate opinions, you have choice, you have dissent within, but you come together to fight together.

The reason I bring this up is philosophical: as I'm going to present, the droid army was the Nova Network model. Very rigid. You can have any flavor you want, as long as it's this one; we're going to talk about those three flavors. You can do anything with networking, as long as you put your VMs on a flat network. Neutron is very different: you can do anything you want. You can use any networking vendor you want, you can use open source technology plus some proprietary vendors, and create anything you want. That gives a lot of choice and a lot of power, and it addresses way more use cases than Nova Network could in any configuration. But it also brings some challenges. How do all these things work together? Even though I have all this choice, how do I lock down to the models that actually work, or that are validated? That's going to be a theme here. We believe this model is more powerful in the end. In the short term there might be dissent, there might be arguments about which is the better way, but longer term you have a lot more choice and the good models will rise up. We can talk about some of those models in this talk.

So that's the rough outline to follow: Nova Network versus Neutron at a high level. What is Nova Network? Why Nova Network, or why not? Then Neutron: why Neutron, how does it differ from Nova Network with the open source components, why would I need some kind of vendor integration, and what can I do with Neutron, mixing and matching all of this, that I couldn't do with Nova Network? And finally, who is using this stuff? There's been a lot of talk in the community: oh, I don't want to give up Nova Network, I like it, I love it, I want to hug it, I've been using it for such a long time and I don't know how to upgrade; it's generally challenging. So let's see who is actually using what in the field.
And I'm an engineer, as are probably most of you here, so we're going to look at some numbers, the user data around usage, and I'll let you decide what you think is right for you.

So let's talk about the droid army a little bit. What does the Federation's droid army give us? How was networking done before Neutron existed? In the beginning, networking was subsumed in Nova, in the compute fabric. That's how networking had been done for a while, whether in other virtualization platforms in the past, or when OpenStack started out, or in other cloud management platforms: networking was never a first-class citizen. It was subsumed into an auto-provisioning, self-service compute stack, and that's what Nova Network offered. It's still there in that model; it's integrated, but at the same time it's very limited from a scale, feature set, and choice perspective. So it's still available, but the reason we wanted Neutron is that, like I was saying, "any way of doing networking you want" really came down to three ways.

First, limited topologies: flat only. This is a single flat L2 domain; that was the first supported topology. And you see the router over there? That's actually a virtual appliance, a virtual machine: the Nova Network node. Initially even firewalling and security groups were implemented with iptables in that box, not at the hypervisor level, so all the traffic gets dragged over there. If you know a little bit about networking, this is a single L2 domain, and L2 domains have scale limits, because in a highly dense cloud environment with too many VMs you fill up the switches' CAM tables. A cloud environment is very dynamic: VMs are not static, they come up and go down, and every time they do, broadcasts go out on the physical network, and that's what fills up the CAM tables on the switches. Which means the more dense your environment gets, if you use an L2-only model, the more you have to keep buying newer hardware and newer gear, because your network is going to get stressed. So that was one way of doing it. There are ways to mitigate it, even with Nova Network, and I'll talk about them, but it still leaves us with some challenges.

Then there was flat DHCP. Same thing, except with a DHCP server. The reason we had a DHCP server is that the tooling for Windows VM images was not very open or well supported, so the easiest way to allocate IP addresses to a Windows VM in an OpenStack environment was DHCP: Nova programmed the DHCP server with the MAC-to-IP binding, the VM does a DHCP discover, and it gets its IP. Otherwise the same thing, but with that we went from a Linux-only cloud to a real cloud where you could provision Windows as well.

And then there was the VLAN mode. That is the preeminent mode for larger-scale, more complex workloads: not a single SaaS app, but an enterprise environment, a real cloud, mixing and matching tenants. That's what most people adopted. It means you can have all of these multiple networks here; let's see... anyways, you can have all of these multiple networks. You dedicate a VLAN on a per-tenant basis: in the config file you set the starting number and the range of VLANs you're allocating to the cloud, and when someone creates a network they get their own isolated environment, a VLAN for them. Great.
These tenants or applications have their own isolated space and are a lot more secure. It still had the centralized Nova Network virtual appliance problem; we got around that, and I'll talk about it. From a customer use case perspective, that was pretty powerful, except that once OpenStack clouds started getting bigger and more successful, two problems popped up.

For those of you who know VLANs, and keeping in mind that in a cloud environment VMs can be anywhere in your data center: VLANs were made for departmental groups. They were programmed onto one set of switches for one group and another set of switches for another. But in a cloud environment I can have a tenant whose VMs span anywhere, which means, first, I now have to trunk all my VLANs on all my switches. I have to touch every physical switch so that every VLAN is accessible everywhere. That stresses my whole network environment; configuration becomes challenging, troubleshooting becomes challenging. But you can do it. Second, VLANs have a theoretical limit of 4096, so if I have more apps or tenants than that, I'm in trouble. And you'll probably feel the pain way before that, because most switches don't actually go up to that limit; on lower-end switches you'll probably start seeing issues within 100 or 150 VLANs. That was a big deal. A lot of these cloud environments needed multi-tenancy, needed their own dedicated networking environments, and this was the only way to do it. 100, 150, even 4,000: not very cloud-scale numbers. It makes it challenging.

People will argue you can go back to the flat model, where we have security groups and firewalling. It becomes a management burden, but I get a little more room, I can have more tenants; that's what a lot of people were advocating. But the challenge there, again: no overlapping IP address space, no private networks, so applications have to be re-IPed. So that model didn't work either if you wanted a lot more tenants. Flat is great for a single-application environment, and VLAN, like I said, was a challenge. So we were left with the dilemma: how do we fix it so that we don't hit this dilemma again?

And, like I was saying, the other challenges: no three-tier apps. If you want dedicated web, app, and DB tiers, all isolated, it was very hard to do; you can't dedicate that many VLANs to a single application, unless that application is, I don't know, some gold-plated revenue-generating app that can afford it. So that didn't work. Scale: the single L2 domain, like I said, was a challenge. IP address management: not that bad, but it depended on a single database; you can externalize it, so a smaller problem. Security profiles had a scale issue because they were centralized. We solved that; after Neutron came about, we actually solved it in Nova Network as well, by moving the iptables filtering down into the hypervisor. So not that much of an issue, but for most purposes we still needed to be on the same L2 domain; I'll talk about something called multi-host after this. And network services were limited. If I'm a tenant and I want to create my own environment, logical switches, routers, my own L3 topologies, there was no self-service element for L3. Or I need VIPs for load balancing.
There was no self-service element for load balancing as a service. Not only that, there was no framework to enable it for my tenants. That was a challenge. It's not that Neutron has all of these things either, but there was no framework: how do we enable a new service at all? If I have developers and this is really hurting me as a cloud provider, I didn't have a framework to work within and enable it. That was becoming a big problem as OpenStack gained momentum; people wanted to solve a lot more use cases than the traditional "put VMs on the network and get something up and going" kind. The same goes for VPN as a service, firewall as a service, everything under that advanced services category: there was no mechanism for it. That was a big problem.

And when I don't have these services, I think: I'll use my existing network services, or my vendors who already have them, and use OpenStack for what it has. The challenge was there was no way to integrate anything else. Another open source project? Say the Vyatta routers, or Quagga, or some other project: how do I integrate it? There was no framework to easily integrate these third-party services, open source or otherwise. Or F5 load balancers: I have highly available applications, I need this vendor-specific appliance, but I want OpenStack to be my cloud management framework. I couldn't do that. All of these were the challenges we were trying to solve with Neutron. Monitoring and troubleshooting integrations were also much harder, because there was no single resource point where I could get all network-related information and integrate traditional network management and monitoring tools. All of those are the challenges of networking being embedded into the compute layer.

So why was that? This is a slightly deeper dive into the VLAN mode of Nova networking, so everything I said happens, we can now see why it happens. Here we have the compute nodes. All of the switches across your data center in VLAN mode are probably trunked with all of the VLANs, accessible everywhere, so at the network level there's really no isolation: traffic from all tenants rides the same physical switches, stressing your physical infrastructure. That's one problem you see here. All the compute nodes are connected to the VLANs, and there are iptables on the compute node, so the security profile bottleneck was addressed a little. But we still had this thing called the Nova Network node, which can be co-resident with a compute node if you want, on the left side. This Nova Network node is where all the network services ran. This is where we did NAT and floating IPs, with those iptables rules. dnsmasq was plugged in there for IP address management, but it was only one dnsmasq, or an HA pair, because it was really one network, so you didn't have that many address spaces, and it could get overloaded once the cloud became pretty big. And that was about it: you can run it in active-standby mode, you can use Corosync or DRBD to replicate the pair, and that's it. You'd realize, since it was built by people from a compute heritage, that a lot of the semantics around HA and deployment were at the application level.
So HA was an issue, and it was application-level HA, which is great if it's an API server, not so great if it's supposed to be a fault-tolerant network service. And for a network, HA doesn't mean you avoid the outage: you essentially take an outage while you fail over. So there were a lot of these challenges, and this was a choke point: all your cloud traffic going through that network node. Very challenging. We did create something called multi-host, which took all of these services, or the majority of them, and put them on every compute node, so the traffic doesn't all have to be dragged to the network node. But we still had the challenge that all the L2 VLANs have to be trunked everywhere, because you want the flexibility to place VMs anywhere in the data center if it's really a general-purpose cloud environment.

So we could patch things up in the old model, but it was still holding us back. There wasn't much we could do, and we couldn't integrate anything external to solve a particular pain point. I didn't have a framework; we had to rewrite and refactor. And as part of that rewrite or refactor, we thought we might as well do it in a way that anybody can reuse: a cloud provider can reuse it and plug something into it. That's where Neutron comes along. It promises everything, and it really was the Rebel Alliance. When the Neutron project was founded there were a lot of users, Rackspace was involved, and a lot of large vendors; I was at Nicira back then, and there was Cisco, some other startups, folks from Juniper. You can consider them Rebels within their own organizations, participating to create something where all of these disparate elements can work together. And it actually took, I guess, only a couple of days to finalize this Neutron API layer.

The motivation is that applications program to this API, and you've decoupled it from anything underlying. So no matter what we do underneath for networking, how we evolve it, or how a cloud service provider changes it, the applications remain unchanged. It's a core tenet of virtualization: you decouple first, and it makes innovation faster, because you can change anything internal without impacting the whole stack. You can also pool all your resources: all your networking environments can share a single pool of capacity. We can use overlay networking, software-defined networking, because there's a single point of definition for the networks and the network capacity is pooled. You don't have to trunk every VLAN everywhere; we can overlay capacity on top. And it gives you choice: choice of deployment topologies, which I'll talk about, self-service L2 networks, self-service L3 elements, firewall as a service, load balancing as a service. F5 just recently did a demo of how seamlessly they integrate into Neutron LBaaS. So the tenants can self-serve, within the guardrails the provider specifies, for all of the networking services. That was the front end.
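To make that tenant self-service idea concrete, here is a minimal sketch of what it looks like against the Neutron API using python-neutronclient. The endpoint, credentials, names, and CIDR are placeholder assumptions, not anything from the talk, and the exact auth arguments vary a bit by release.

```python
from neutronclient.v2_0 import client

# Placeholder credentials and endpoint; assumes a Keystone v2 auth URL.
neutron = client.Client(username='demo',
                        password='secret',
                        tenant_name='demo-tenant',
                        auth_url='http://controller:5000/v2.0')

# The tenant creates its own logical network and subnet...
net = neutron.create_network({'network': {'name': 'web-net'}})['network']
subnet = neutron.create_subnet({'subnet': {'network_id': net['id'],
                                           'ip_version': 4,
                                           'cidr': '10.10.1.0/24'}})['subnet']

# ...and its own logical router, attached to that subnet.
router = neutron.create_router({'router': {'name': 'web-router'}})['router']
neutron.add_interface_router(router['id'], {'subnet_id': subnet['id']})
```

The same calls work whether the backend plugin implements the network with VLANs, GRE tunnels, or a vendor controller, which is exactly the decoupling point above.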
On the back end, you can use the Open vSwitch plugin, which we had developed, with your physical networking; you can use some kind of proprietary vendor solution; or you can swap in something else that we don't even know about yet, which was very hard to do with Nova Network once it hit its limits. So we provide a framework with choice on the provider side and choice on the tenant side. Choice, and aligning different models together, was a big deal.

So how did it provide all of that? It provided the rich topologies I was talking about, the three-tier web/app/DB topologies with lots of networks. A lot of providers could do this because they integrated a network virtualization solution, which means you don't need a dedicated physical L2 VLAN for every private network in the logical topology. Because of some of these technologies we integrate in Neutron, the virtual network is decoupled from the physical network, so you can scale a lot better; you can have tens of thousands of networks. Rackspace has a product called Cloud Networks, at pretty high scale, and the way they could do it, instead of using plain VLANs, was with some of these leading-edge network virtualization technologies that let you create a lot of these networks. PayPal was talking about using Neutron; they had scale challenges, but they could accomplish their goal, and some of these customers are running at super high scale as well.

And security: we redid security profiles. We still use iptables, since we had already fixed Nova Network to do the filtering at the vNIC or VIF level. That was a good model: the filtering is distributed. We adopted it; we embraced and leveraged what was there, and only changed what we had to. The network backend was not very pluggable, so we had to change that. That's where the Neutron server with a plug-in model comes in: there's an internal plug-in API that is independent of the Neutron API. LBaaS is one such service; quite a few vendors are offering the LBaaS API. It's not core yet, but the tenant-facing API is hardened, people can start using it, and the plug-ins are evolving. There are a few of them: F5 is a big one that demoed their plug-in, and there's the HAProxy open source implementation plug-in. But from your tenants' perspective, they're using this hardened LBaaS API, and you can swap out whichever backend plug-in you want based on the application.

None of these higher-level network services were possible before; there was no easy framework to integrate them without refactoring all of the code. Now there's a single point, a pluggable framework. You don't like something? You can integrate an alternative. It doesn't scale? Sure, we've taken the first step; some use cases are newer than the more honed ones, and we'll work together as a community to scale them up. But it increased the possibilities for an OpenStack cloud from three to pretty much N. I call it N because there are so many combinations; it's probably an exponential, combinatorial matrix of everything you can do. But that also brings responsibility as a cloud provider: if you're running a particular combination, you have to validate it and test it. That comes with choice. You have the choice, but you have to put it together so that it works for you and it's validated.
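Before moving on, to give a feel for that hardened tenant-facing LBaaS API just mentioned, here is a rough sketch of the pool/member/VIP call flow of the v1 LBaaS model through python-neutronclient. The credentials, addresses, and subnet ID are placeholders, and the exact attribute names can differ slightly between releases, so treat this as an assumption-laden illustration rather than a reference.

```python
from neutronclient.v2_0 import client

# Placeholder credentials; subnet_id would be the tenant subnet's UUID.
neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo-tenant',
                        auth_url='http://controller:5000/v2.0')
subnet_id = 'SUBNET-UUID'

# Create a pool of backend servers balanced round-robin over HTTP.
pool = neutron.create_pool({'pool': {'name': 'web-pool',
                                     'protocol': 'HTTP',
                                     'lb_method': 'ROUND_ROBIN',
                                     'subnet_id': subnet_id}})['pool']

# Register two hypothetical web servers as pool members.
for addr in ('10.10.1.11', '10.10.1.12'):
    neutron.create_member({'member': {'pool_id': pool['id'],
                                      'address': addr,
                                      'protocol_port': 80}})

# Expose the pool behind a virtual IP (VIP) on the same subnet.
neutron.create_vip({'vip': {'name': 'web-vip',
                            'protocol': 'HTTP',
                            'protocol_port': 80,
                            'pool_id': pool['id'],
                            'subnet_id': subnet_id}})
```

Whether HAProxy or an F5 plug-in fulfils these calls is a deployment decision; the tenant-facing API stays the same.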
The Neutron team validates some of the common topologies that everybody uses in the CI environment, but not every possible combination can be validated. That's a challenge that has come along with this; we have to figure out how to solve it, maybe through a common CI infrastructure where the vendors can put in gear as well, for integration testing. Let me see how I'm doing on time... cool.

So the VLAN limitations were removed. Like I said, the default Open vSwitch plug-in, as well as some of the newer network virtualization SDN plugins in Neutron, lets you use overlay tunnels. The default one uses GRE tunnels, so you don't have to trunk every switch with all the VLAN numbers, and the underlay doesn't even need to be an L2 network: it can be plain L3 connectivity, an L3 fabric. All these virtual networks you see on the top right are created using overlay tunnels: there's an agent on every hypervisor that creates tunnels to the other hypervisors, and that implements the logical topology on the top right. So the stress is lifted from your network; you don't have all the broadcasts going over the physical fabric.

Let me give you an example. Say you have 1,000 tenants and everyone has 10 VMs: that's 10,000 VMs. Before, when those 10,000 VMs came up, 10,000 broadcasts went out onto the physical network. Now, depending on which technology you're using, those 10,000 broadcasts can translate into a single one, because the traffic goes into the tunnel once the tunnel is set up. And if your tunneling engine is more intelligent, it can figure out where the destination is and unicast it, so there's no broadcast on the physical fabric at all beyond the initial infrastructure setup. That takes a lot of load off your physical network and gets you a lot more life out of your physical infrastructure investments. And honestly, these infrastructures were not made for the cloud era; they were not designed for it. Since we, the cloud, created this problem, it's only responsible to provide a solution rather than waiting on hardware life cycles, which are a lot longer, to solve it. And that was only possible with Neutron. There's no way you could do that with Nova Network: there was no framework to swap the basic switching and routing for a different model besides VLANs, which were baked all the way into the Nova compute code. There's logic assuming VLANs in the compute code; they're so baked in, talk about being fully locked in. So this slide shows that the broadcast need not be on the physical fabric; it goes point to point over tunnels if you're using the network virtualization provider, and the GRE tunnel from hypervisor to hypervisor can be created on demand when a VM spins up on the second host.

The second part was, like I was saying, choice. What does it support? On the top, we saw it supports all these tenant-facing APIs: networking as a service, logical layer-3 services, on-demand load balancing, VPN, firewall, and whatever else; it's a framework, so you can add anything you want. A lot of these APIs usually start in experimental mode so the community can use them, and once we see enough adoption, they become core and a supported part of the platform.
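To make the overlay point from a moment ago concrete from the API side: with the default OVS plugin in GRE mode, an admin can see that each tenant network is backed by a tunnel key rather than a VLAN tag. This is a minimal sketch using python-neutronclient; the credentials and endpoint are placeholders, and it assumes admin access, since the provider attributes are exposed by an admin-only extension.

```python
from neutronclient.v2_0 import client

# Placeholder admin credentials; the provider extension is admin-only.
neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# With the OVS plugin in GRE mode, each tenant network maps to a tunnel
# ID instead of a physical VLAN, so nothing needs trunking on the switches.
for net in neutron.list_networks()['networks']:
    print(net['name'],
          net.get('provider:network_type'),      # e.g. 'gre'
          net.get('provider:segmentation_id'))   # tunnel key, not a VLAN tag
```

A tenant creating a network never sees or cares about this; the plugin allocates the tunnel ID much the way the old VLAN manager allocated VLANs.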
So a lot of the things you see, like VPN as a service and even LBaaS, haven't become fully core yet, because not enough people have actually adopted them; there isn't a big ecosystem yet. But it's there as a playground; the plugin framework is there, people can put code up, and as a cloud service provider you can develop your own plugin for each of these services and plug it in. And not only that, we enable choice at the bottom, which you couldn't do before. You can use the open source defaults: Open vSwitch and GRE tunneling for switching and routing with the L3 agents, and HAProxy for LBaaS. Or you can use a vendor plugin for a certain thing, maybe for a certain scale, environment, or application where the default doesn't work: F5 for load balancing, NSX for switching and routing with software-defined technology, or anything that comes up in the future. That was not possible before.

So let me go over the default OVS plugin architecture a little, the way we went a little deeper into the Nova Network architecture earlier. The Open vSwitch plugin is there by default in Neutron; it's the default open source plugin we ship and test in the CI environment. You have compute nodes, and Nova Network has been replaced with this Neutron network node. A key difference is that we now use something called network namespaces, a very lightweight, container-style isolation mechanism in the Linux kernel. So we can have all of these networks we saw, each with its own dnsmasq. It's not a single network, there are a lot of them, but we can create them on demand; it's software, it's just a software container. So we can pack roughly 500 of these instances onto a single network node. Obviously you have to figure out a good way to manage them if you're packing in that many. And you can spread them out: you can have multiple of these network nodes where you place the L3 agents and the OVS and DHCP agents.

Obviously, if you're running at scale, you want to make sure you have the expertise in-house to troubleshoot it, or somebody working with you who has that knowledge. It's not that deployment is hard; that part becomes easy, and the configuration, once you get it right, keeps on working. But you need the expertise to troubleshoot it. The choice increases what you can do, but it also increases complexity; the value to the customer is great, but manageability needs a little more understanding. It's not a simple model, so you need to know what's going on so you can troubleshoot it. Even though it's been out a couple of releases and we've said Neutron can be used in real test or production environments, there's not enough knowledge out there yet. So make sure you understand it, put it in a lab environment, work through your topologies, and see how it actually behaves so you can fix it when you have an issue.

So you can see the core thing we did here: take all of the services that were in Nova Network, make them essentially multi-tenant so you can have a lot of them, and let you scale out these network nodes so you can really scale to a larger number. But you have to look at it from the vantage point of Nova Network: you go from one node to two, and that's a 100% scale-up.
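On that troubleshooting point, here is a small illustrative sketch of the kind of thing an operator can do on a network node: map the qdhcp-/qrouter- namespaces created by the DHCP and L3 agents back to Neutron networks and routers. The credentials and endpoint are placeholders, and it assumes you run it with root privileges on the network node itself.

```python
import subprocess
from neutronclient.v2_0 import client

# Placeholder admin credentials and endpoint.
neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# The DHCP and L3 agents name their namespaces qdhcp-<network-id>
# and qrouter-<router-id>; list them with iproute2 (needs root).
out = subprocess.check_output(['ip', 'netns', 'list']).decode()
namespaces = [line.split()[0] for line in out.splitlines() if line.strip()]

networks = {n['id']: n['name'] for n in neutron.list_networks()['networks']}
routers = {r['id']: r['name'] for r in neutron.list_routers()['routers']}

for ns in namespaces:
    if ns.startswith('qdhcp-'):
        print(ns, '->', networks.get(ns[len('qdhcp-'):], 'unknown network'))
    elif ns.startswith('qrouter-'):
        print(ns, '->', routers.get(ns[len('qrouter-'):], 'unknown router'))
```

Knowing which namespace belongs to which logical network or router is usually the first step before running tcpdump or checking dnsmasq inside it.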
So per network node you can go to about 500. Now people are going to say it's not as easy once you scale to, like, 1,000, when I have five of those nodes and I'm doing state synchronization across them. Sure: you've increased your scale from one to 1,000, orders of magnitude, so obviously your management overhead is going to be a little higher than it was with the older model. You need to think about how you're going to maintain that cluster and keep it highly available: are you going to wrap some other technology around it, or decide it's good enough, or design it so the failure domains are isolated? It needs some thought if you're doing a super-large-scale environment. But compared to Nova Network, going from one to 500 is a lot simpler: you have the same architecture, and now you can get 500 of them with the same kind of fault characteristics but with more flexibility. And if you're scaling beyond that, you're probably going to look at commercial solutions, at somebody to help you support it, because you don't want to be on the hook for that yourself.

So the last part of the talk: people keep asking us, and it has been coming up in the press, is Neutron ready for production? That's great, you guys solved a lot of things, but is it ready for production? I figured I'm probably pretty biased, having pitched Neutron and helped create it, and even though we work with a lot of customers and I've worked with all of the early adopters, how do I actually convey, or even understand, whether it's ready for production before telling an audience? Thankfully for me, the user survey came out yesterday, so I had some fresh data to look at.

This is the slide for Dev/QA environments, from the survey of about 700 or so OpenStack deployments worldwide that the Foundation puts together. If you look over here at which services folks are using, there's Nova at the top and Neutron over here in the middle. There were 169 Neutron deployments and 204 Nova deployments. So I assume the delta between 204 and 169, which is 35, is people who are using Nova but not Neutron, and they are probably using Nova Network, because you need networking in a cloud environment. That's a pretty good number; that delta is a sign that things are not all that bad, Neutron has decent uptake: 169 Neutron deployments compared to about 35 remaining Nova Network deployments. And I don't see some people moving away from Nova Network for a while, because upgrading is still challenging in OpenStack; there's no in-place upgrade. Some of these are early adopters who will probably have to stand up a parallel cloud and migrate, so those Nova Network numbers are going to keep showing up. But that's roughly a five-to-one difference, which was a good sign to me that I can probably recommend users and customers take Neutron to the next level and try it out. Next.

Then I looked at the survey question about what backends folks run under Neutron. People might say, oh, Neutron's ready, but you have to use a vendor proprietary backend, and that's the only way it's integrated. So this was the data on the networking technologies people use, and that was also a surprise. Look, I only got this data yesterday, so I didn't know; and looking at it, it says Open vSwitch is the leading networking backend.
And that's the backend that uses GRE tunneling and a mesh of tunnels behind Neutron to implement logical networking. So that's also a good sign: most of these folks are actually using the Open vSwitch plugin. Sure, there are use cases, like I said, where you'll need a vendor proprietary plugin, and a lot of folks are using Nicira and Cisco for that, or the Linux bridge for the legacy models of networking.

Okay. Oh, I skipped this. This was the data on production deployments. The way the Foundation tracks deployments is POC, Dev/QA environments, early workloads, and production workloads. POC I just skipped because it's not very relevant; people POC all kinds of things. Dev/QA, as I said, shows a decent amount of workloads, about five to one in favor of Neutron versus Nova Network. Then I looked at production. People are more conservative there; it takes longer to go into production. There are about 135 Neutron deployments compared to 51. Nova Network is still pretty entrenched there, because these people probably started their OpenStack journey quite a while ago. But that's not bad: about 2.5 Neutron deployments per Nova Network deployment. So that was also a positive sign to me. Yes, Nova Network is still here, and people who started early in the journey are still using it. Neutron goes from POC, where uptake is higher, to Dev/QA, where it's a little lower, and then to production, where it's lower still, but its growth rate compared to Nova Network is pretty high. So based on this data, even though folks were saying Neutron might have issues, the numbers I see from users tell a different story. There are probably issues; there are bugs in any software, and we have an open framework in the community to help us drive them down together. But this is good enough to start deploying more complex workloads and topologies than what was possible before.

So with that, I wanted to summarize: why would you want Nova Network versus Neutron? I talked about choice versus the Federation droid model: you can have anything you want, as long as it's my way, whereas Neutron gives you nearly infinite choice. Actually, sometimes I get scared it's too infinite, so you do have to lock it down; with Neutron you have to validate the choice matrix you pick. Dev/test deployments: we saw Nova Network still has some footprint, and Neutron has about a five-to-one lead for dev/test workloads. Production workloads: Nova Network actually does better there; it's more stable, it's been in the code base for a while, so the mix is about 2.5 to one. And use cases supported: three use cases versus, here, pretty much N: self-service networking from L2 to L7, and a framework where you can create your own networking service. Maybe you'll create your own CDN as a service, integrate it through this framework, and add your value-add as a cloud service provider to your OpenStack deployment. A lot more use cases and services supported.

So with that, I'd like to call out to the rest of the people who are holding back with Nova Network: come join the Rebel Alliance, and let's make this better. Now I'm going to take some questions if anybody has any, and there are some resources here for people to look at. Go ahead. You want to come to the mic? Go ahead. Oh. Okay, I'll just repeat the question. Okay, there we go.
The Linux bridge open source driver is actually at feature parity with OVS, just so you know. It's at feature parity with OVS: VXLAN and GRE tunnels and everything, and it creates tunnels. Yep. Okay, cool. And you can use both with Neutron, right? You have the choice: you can create a driver with Linux bridge only and integrate it. Yeah, I guess you can use both. Well, if you use OVS, it actually uses Linux bridge underneath it. Yeah, for some functions. And it does iptables via Linux bridge as well. Right, right. Or you can just use Linux bridge and not use OVS, and things are a lot simpler, and you still get tunneling and everything. Sure, use whichever you want. That's the power of Neutron, right? You have all the choice; see what works best for you. Sure.

Yeah, I just want to add a comment that the Neutron team is trying to reach parity with Nova networking by Juno. So whatever is available in Nova networking, we should be able to do in Neutron. Yes, that's definitely a priority for the team. It has parity except for corner cases, which are what's holding back some of those 35 users you saw, and closing them is a big priority that will probably happen by Juno. So once you remove all the corner cases... Remove all the corner cases, and then we'll be deprecating Nova networking. Yeah. Okay. That's fair.

All right, so you showed Nova Network versus Neutron for production and QA deployments, and you showed the distribution of plug-ins for the QA deployments. Did I miss it, or was there a slide showing... Oh, I didn't include it, but it is in the OpenStack user survey; it shows the distribution of plug-ins for the QA deployments, I just didn't pick it up. I meant for the production deployments. For the production deployments I did see the distribution of plug-ins in the survey. The majority is Open vSwitch, which is the default open source plug-in, and in the vendor plug-in category there are Cisco and Nicira, which is the NSX plug-in; those are the top two. Okay. I think they're telling me we're running out of time, but we'll take one more question. Yeah.

So Neutron's network nodes can scale out, and within a single host you can have namespaces, so you can also scale out within a single host. Imagine having the equivalent of 500 Nova Network nodes in a single host because of namespace isolation, and then scaling out by adding new Neutron network nodes, each of which can hold another 500. Beyond a point that's not recommended, because you want your north-south traffic, the traffic leaving the data center, to be centralized; otherwise you have to drag your WAN connectivity to your compute nodes, which becomes complex from a networking model. Great, I think they're saying I'm running out of time. Thanks, everybody, for coming, and we hope to see you as Neutron users.